[jira] [Updated] (HADOOP-9992) Modify the NN loadGenerator to optionally run as a MapReduce job

2014-10-14 Thread Sanjay Radia (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9992?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sanjay Radia updated HADOOP-9992:
-
Status: Patch Available  (was: Open)

> Modify the NN loadGenerator to optionally run as a MapReduce job
> 
>
> Key: HADOOP-9992
> URL: https://issues.apache.org/jira/browse/HADOOP-9992
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Akshay Radia
>Assignee: Akshay Radia
> Attachments: HADOOP-9992.004.patch, hadoop-9992-v2.patch, 
> hadoop-9992-v3.patch, hadoop-9992-v4.patch, hadoop-9992.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-9992) Modify the NN loadGenerator to optionally run as a MapReduce job

2014-10-14 Thread Sanjay Radia (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9992?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sanjay Radia updated HADOOP-9992:
-
Attachment: hadoop-9992-v4.patch

> Modify the NN loadGenerator to optionally run as a MapReduce job
> 
>
> Key: HADOOP-9992
> URL: https://issues.apache.org/jira/browse/HADOOP-9992
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Akshay Radia
>Assignee: Akshay Radia
> Attachments: HADOOP-9992.004.patch, hadoop-9992-v2.patch, 
> hadoop-9992-v3.patch, hadoop-9992-v4.patch, hadoop-9992.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-9992) Modify the NN loadGenerator to optionally run as a MapReduce job

2014-10-14 Thread Sanjay Radia (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9992?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sanjay Radia updated HADOOP-9992:
-
Status: Open  (was: Patch Available)

> Modify the NN loadGenerator to optionally run as a MapReduce job
> 
>
> Key: HADOOP-9992
> URL: https://issues.apache.org/jira/browse/HADOOP-9992
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Akshay Radia
>Assignee: Akshay Radia
> Attachments: HADOOP-9992.004.patch, hadoop-9992-v2.patch, 
> hadoop-9992-v3.patch, hadoop-9992-v4.patch, hadoop-9992.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HADOOP-11203) Allow distcp to accept bandwidth in fractional megabytes

2014-10-14 Thread Raju Bairishetti (JIRA)
Raju Bairishetti created HADOOP-11203:
-

 Summary: Allow distcp to accept bandwidth in fractional megabytes
 Key: HADOOP-11203
 URL: https://issues.apache.org/jira/browse/HADOOP-11203
 Project: Hadoop Common
  Issue Type: Improvement
  Components: tools/distcp
Reporter: Raju Bairishetti
Assignee: Raju Bairishetti


DistCp uses ThrottledInputStream, which provides bandwidth throttling on a 
specified stream. Currently, DistCp accepts the max bandwidth value only in whole 
megabytes; fractional values are rejected. It would be better if it accepted the 
max bandwidth in fractional megabytes.
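
For illustration, a minimal sketch (assumed option value and the usual MB-to-bytes 
conversion; not the actual DistCp option-parsing code) of accepting a fractional 
bandwidth and converting it to a bytes-per-second limit for a throttled stream:

{code:java}
// Hypothetical standalone sketch: parse a fractional bandwidth (in MB/s) and
// convert it to the bytes-per-second limit a throttled stream would enforce.
public class FractionalBandwidthSketch {
    public static void main(String[] args) {
        String optionValue = "0.5";                        // e.g. a "-bandwidth 0.5" argument
        float bandwidthMB = Float.parseFloat(optionValue); // accepts fractional values
        long maxBytesPerSec = (long) (bandwidthMB * 1024 * 1024);
        System.out.println("throttle to " + maxBytesPerSec + " bytes/sec");
    }
}
{code}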



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-10597) Evaluate if we can have RPC client back off when server is under heavy load

2014-10-14 Thread Ming Ma (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10597?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14171966#comment-14171966
 ] 

Ming Ma commented on HADOOP-10597:
--

Thanks, Chris.

1. The first two experiments belong to the "user A is doing some bad things, 
measure user B's read latency" category. The rest of the experiments are done with 
a single user to measure the performance implications under different loads.
2. We can use client backoff without FCQ, but it is less interesting, given that it 
could penalize good clients. That is because in the current implementation the 
criterion the RPC server uses to decide whether it needs to ask a client to back 
off is simply whether the RPC call queue is full. We can improve this criterion 
later if it isn't enough.
3. The experiment results are based on the "client driven retry interval" policy. 
It means the server only asks the client to back off; the RPC client decides the 
retry policy. In an NN HA setup, that will be FailoverOnNetworkExceptionRetry, 
which does exponential back-off (see the sketch below).
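
For reference, a minimal standalone sketch of such exponential back-off 
(illustrative constants and jitter; not Hadoop's RetryPolicies code):

{code:java}
import java.util.Random;

// Standalone sketch of the exponential back-off a client might apply after the
// server asks it to retry: delay grows as base * 2^(attempt-1), jittered and capped.
public class ExponentialBackoffSketch {
    public static void main(String[] args) {
        long baseSleepMs = 100;    // assumed base delay
        long maxSleepMs = 30000;   // assumed cap
        Random rand = new Random();
        for (int attempt = 1; attempt <= 5; attempt++) {
            long delay = Math.min(maxSleepMs,
                (long) (baseSleepMs * Math.pow(2, attempt - 1) * (0.5 + rand.nextDouble())));
            System.out.println("attempt " + attempt + " -> sleep " + delay + " ms");
        }
    }
}
{code}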
 

> Evaluate if we can have RPC client back off when server is under heavy load
> ---
>
> Key: HADOOP-10597
> URL: https://issues.apache.org/jira/browse/HADOOP-10597
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Ming Ma
>Assignee: Ming Ma
> Attachments: HADOOP-10597-2.patch, HADOOP-10597.patch, 
> MoreRPCClientBackoffEvaluation.pdf, RPCClientBackoffDesignAndEvaluation.pdf
>
>
> Currently, if an application hits the NN too hard, RPC requests will sit in a 
> blocking state, assuming OS connections don't run out. Alternatively, RPC or the 
> NN could throw some well-defined exception back to the client, based on certain 
> policies, when it is under heavy load; the client would understand such an 
> exception and do exponential back-off, as another implementation of 
> RetryInvocationHandler.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-10321) TestCompositeService should cover all enumerations of adding a service to a parent service

2014-10-14 Thread Ray Chiang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10321?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14171965#comment-14171965
 ] 

Ray Chiang commented on HADOOP-10321:
-

RE: Javadoc warnings

This appears to be in the "known" category of warnings as mentioned in 
https://issues.apache.org/jira/browse/HADOOP-11082?focusedCommentId=14140193&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14140193

RE: Findbugs warnings

None of these warnings belongs to the modified file.


> TestCompositeService should cover all enumerations of adding a service to a 
> parent service
> --
>
> Key: HADOOP-10321
> URL: https://issues.apache.org/jira/browse/HADOOP-10321
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.3.0
>Reporter: Karthik Kambatla
>Assignee: Ray Chiang
>  Labels: supportability, test
> Attachments: HADOOP-10321-02.patch, HADOOP-10321-03.patch, 
> HADOOP-10321-04.patch, HADOOP10321-01.patch
>
>
> HADOOP-10085 fixes some synchronization issues in 
> CompositeService#addService(). The tests should cover all cases. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-6857) FsShell should report raw disk usage including replication factor

2014-10-14 Thread Byron Wong (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-6857?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14171831#comment-14171831
 ] 

Byron Wong commented on HADOOP-6857:


In the case where a directory /D and its snapshot S are in the exact same state 
(e.g. a fresh snapshot has just been taken), everything works fine, meaning the sum 
of the disk-consumed numbers reported by -du /D equals the disk-consumed number 
reported by -du -s /D.
When /D and S start deviating (files getting renamed, deleted, etc.), the 
disk-consumed calculation takes the lastFileSize within the snapshots, finds the 
maximum replication factor for that file within the snapshots, multiplies the two 
together, and increments disk consumed by that number. This inflates the total 
disk-consumed calculation, so -du -s /D > the sum of the numbers in -du /D.

I'd also like to point out that this implementation only takes the replication 
factor of a file into account, even if that replication factor is greater than the 
number of datanodes, which further inflates the -du calculation. For example, if we 
setrep 10 a file when we only have 3 datanodes, -du will still multiply 
fileLength * 10 and report that number, as in the sketch below.
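
A minimal standalone sketch (hypothetical numbers; not the actual HDFS quota code) 
of that inflation:

{code:java}
// "Disk consumed" uses the file's replication factor even when it exceeds the
// number of datanodes that can actually hold replicas, so the reported number
// is larger than the space really used.
public class DuInflationSketch {
    public static void main(String[] args) {
        long fileLength = 128L * 1024 * 1024;  // a 128 MB file
        int replicationFactor = 10;            // e.g. after "setrep 10"
        int dataNodes = 3;                     // only 3 replicas can really exist

        long reported = fileLength * replicationFactor;                    // what -du reports
        long actual = fileLength * Math.min(replicationFactor, dataNodes); // bytes really used

        System.out.println("reported disk consumed: " + reported);
        System.out.println("actual disk consumed:   " + actual);
    }
}
{code}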

> FsShell should report raw disk usage including replication factor
> -
>
> Key: HADOOP-6857
> URL: https://issues.apache.org/jira/browse/HADOOP-6857
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs
>Reporter: Alex Kozlov
>Assignee: Byron Wong
> Attachments: HADOOP-6857.patch, show-space-consumed.txt
>
>
> Currently FsShell reports HDFS usage with the "hadoop fs -dus" command.  
> Since the replication level is set per file, it would be nice to add raw disk 
> usage including the replication factor (maybe "hadoop fs -dus -raw"?). 
>  This will allow assessing resource usage more accurately.  -- Alex K



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-10321) TestCompositeService should cover all enumerations of adding a service to a parent service

2014-10-14 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10321?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14171745#comment-14171745
 ] 

Hadoop QA commented on HADOOP-10321:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12674873/HADOOP-10321-04.patch
  against trunk revision cdce883.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:red}-1 javadoc{color}.  The javadoc tool appears to have generated 
24 warning messages.
See 
https://builds.apache.org/job/PreCommit-HADOOP-Build/4925//artifact/patchprocess/diffJavadocWarnings.txt
 for details.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:red}-1 findbugs{color}.  The patch appears to introduce 2 new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-common-project/hadoop-common.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/4925//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/4925//artifact/patchprocess/newPatchFindbugsWarningshadoop-common.html
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/4925//console

This message is automatically generated.

> TestCompositeService should cover all enumerations of adding a service to a 
> parent service
> --
>
> Key: HADOOP-10321
> URL: https://issues.apache.org/jira/browse/HADOOP-10321
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.3.0
>Reporter: Karthik Kambatla
>Assignee: Ray Chiang
>  Labels: supportability, test
> Attachments: HADOOP-10321-02.patch, HADOOP-10321-03.patch, 
> HADOOP-10321-04.patch, HADOOP10321-01.patch
>
>
> HADOOP-10085 fixes some synchronization issues in 
> CompositeService#addService(). The tests should cover all cases. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11202) SequenceFile crashes with encrypted files that are shorter than FileSystem.getStatus(path)

2014-10-14 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11202?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-11202:

Component/s: fs/s3

Moved to Hadoop Common and tagged as fs/s3. Corby, there's a new "s3a" FS client 
that uses the AWS APIs directly; could you try it to see if it behaves 
better?

> SequenceFile crashes with encrypted files that are shorter than 
> FileSystem.getStatus(path)
> --
>
> Key: HADOOP-11202
> URL: https://issues.apache.org/jira/browse/HADOOP-11202
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Affects Versions: 2.2.0
> Environment: Amazon EMR 3.0.4
>Reporter: Corby Wilson
>
> Encrypted files are often padded to allow for proper encryption on a 2^n-bit 
> boundary.  As a result, the encrypted file might be a few bytes bigger than 
> the unencrypted file.
> We have a case where an encrypted file is 2 bytes bigger due to padding.
> When we run a HIVE job on the file to get a record count (select count(*) 
> from ) it runs org.apache.hadoop.mapred.SequenceFileRecordReader and 
> loads the file in through a custom FS InputStream.
> The InputStream decrypts the file as it gets read in.  Splits are properly 
> handled as it extends both Seekable and PositionedReadable.
> When the org.apache.hadoop.io.SequenceFile class initializes, it reads in the 
> file size from the FileMetadata, which returns the file size of the encrypted 
> file on disk (or, in this case, in S3).
> However, the actual file size is 2 bytes less, so the InputStream will return 
> EOF (-1) before the SequenceFile thinks it's done.
> As a result, the SequenceFile$Reader tries to run the next->readRecordLength 
> after the file has been closed and we get a crash.
> The SequenceFile class SHOULD, instead, pay attention to the EOF marker from 
> the stream instead of the file size reported in the metadata and set the 
> 'more' flag accordingly.
> Sample stack dump from crash
> 2014-10-10 21:25:27,160 WARN [main] org.apache.hadoop.mapred.YarnChild: 
> Exception running child : java.io.IOException: java.io.IOException: 
> java.io.EOFException
>   at 
> org.apache.hadoop.hive.io.HiveIOExceptionHandlerChain.handleRecordReaderNextException(HiveIOExceptionHandlerChain.java:121)
>   at 
> org.apache.hadoop.hive.io.HiveIOExceptionHandlerUtil.handleRecordReaderNextException(HiveIOExceptionHandlerUtil.java:77)
>   at 
> org.apache.hadoop.hive.shims.HadoopShimsSecure$CombineFileRecordReader.doNextWithExceptionHandler(HadoopShimsSecure.java:304)
>   at 
> org.apache.hadoop.hive.shims.HadoopShimsSecure$CombineFileRecordReader.next(HadoopShimsSecure.java:220)
>   at 
> org.apache.hadoop.mapred.MapTask$TrackedRecordReader.moveToNext(MapTask.java:199)
>   at 
> org.apache.hadoop.mapred.MapTask$TrackedRecordReader.next(MapTask.java:185)
>   at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:52)
>   at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:433)
>   at org.apache.hadoop.mapred.MapTask.run(MapTask.java:344)
>   at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:162)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:415)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1491)
>   at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:157)
> Caused by: java.io.IOException: java.io.EOFException
>   at 
> org.apache.hadoop.hive.io.HiveIOExceptionHandlerChain.handleRecordReaderNextException(HiveIOExceptionHandlerChain.java:121)
>   at 
> org.apache.hadoop.hive.io.HiveIOExceptionHandlerUtil.handleRecordReaderNextException(HiveIOExceptionHandlerUtil.java:77)
>   at 
> org.apache.hadoop.hive.ql.io.HiveContextAwareRecordReader.doNext(HiveContextAwareRecordReader.java:276)
>   at 
> org.apache.hadoop.hive.ql.io.CombineHiveRecordReader.doNext(CombineHiveRecordReader.java:101)
>   at 
> org.apache.hadoop.hive.ql.io.CombineHiveRecordReader.doNext(CombineHiveRecordReader.java:41)
>   at 
> org.apache.hadoop.hive.ql.io.HiveContextAwareRecordReader.next(HiveContextAwareRecordReader.java:108)
>   at 
> org.apache.hadoop.hive.shims.HadoopShimsSecure$CombineFileRecordReader.doNextWithExceptionHandler(HadoopShimsSecure.java:302)
>   ... 11 more
> Caused by: java.io.EOFException
>   at java.io.DataInputStream.readInt(DataInputStream.java:392)
>   at 
> org.apache.hadoop.io.SequenceFile$Reader.readRecordLength(SequenceFile.java:2332)
>   at org.apache.hadoop.io.SequenceFile$Reader.next(SequenceFile.java:2363)
>   at org.apache.hadoop.io.SequenceFile$Reader.next(SequenceFile.java:2500)
>   at 
> org.apache.hadoop.mapred.SequenceFileRec
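
For illustration, a minimal sketch (hypothetical helper, not the actual 
SequenceFile code) of the fix direction proposed in the report above: treat EOF 
from the decrypting stream as the end of data instead of trusting the length 
taken from file metadata.

{code:java}
import java.io.DataInputStream;
import java.io.EOFException;
import java.io.IOException;

// Hypothetical reader helper: stop cleanly when the underlying stream reports EOF,
// rather than reading up to a length obtained from metadata.
class EofAwareReaderSketch {
    private final DataInputStream in;

    EofAwareReaderSketch(DataInputStream in) {
        this.in = in;
    }

    /** Returns the next record length, or -1 once the stream is exhausted. */
    int nextRecordLength() throws IOException {
        try {
            return in.readInt();   // record-length prefix
        } catch (EOFException eof) {
            return -1;             // stream shorter than metadata claimed: signal "no more"
        }
    }
}
{code}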

[jira] [Moved] (HADOOP-11202) SequenceFile crashes with encrypted files that are shorter than FileSystem.getStatus(path)

2014-10-14 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11202?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran moved MAPREDUCE-6127 to HADOOP-11202:


Affects Version/s: (was: 2.2.0)
   2.2.0
  Key: HADOOP-11202  (was: MAPREDUCE-6127)
  Project: Hadoop Common  (was: Hadoop Map/Reduce)

> SequenceFile crashes with encrypted files that are shorter than 
> FileSystem.getStatus(path)
> --
>
> Key: HADOOP-11202
> URL: https://issues.apache.org/jira/browse/HADOOP-11202
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.2.0
> Environment: Amazon EMR 3.0.4
>Reporter: Corby Wilson
>
> Encrypted files are often padded to allow for proper encryption on a 2^n-bit 
> boundary.  As a result, the encrypted file might be a few bytes bigger than 
> the unencrypted file.
> We have a case where an encrypted file is 2 bytes bigger due to padding.
> When we run a HIVE job on the file to get a record count (select count(*) 
> from ) it runs org.apache.hadoop.mapred.SequenceFileRecordReader and 
> loads the file in through a custom FS InputStream.
> The InputStream decrypts the file as it gets read in.  Splits are properly 
> handled as it extends both Seekable and PositionedReadable.
> When the org.apache.hadoop.io.SequenceFile class initializes, it reads in the 
> file size from the FileMetadata, which returns the file size of the encrypted 
> file on disk (or, in this case, in S3).
> However, the actual file size is 2 bytes less, so the InputStream will return 
> EOF (-1) before the SequenceFile thinks it's done.
> As a result, the SequenceFile$Reader tries to run the next->readRecordLength 
> after the file has been closed and we get a crash.
> The SequenceFile class SHOULD, instead, pay attention to the EOF marker from 
> the stream instead of the file size reported in the metadata and set the 
> 'more' flag accordingly.
> Sample stack dump from crash
> 2014-10-10 21:25:27,160 WARN [main] org.apache.hadoop.mapred.YarnChild: 
> Exception running child : java.io.IOException: java.io.IOException: 
> java.io.EOFException
>   at 
> org.apache.hadoop.hive.io.HiveIOExceptionHandlerChain.handleRecordReaderNextException(HiveIOExceptionHandlerChain.java:121)
>   at 
> org.apache.hadoop.hive.io.HiveIOExceptionHandlerUtil.handleRecordReaderNextException(HiveIOExceptionHandlerUtil.java:77)
>   at 
> org.apache.hadoop.hive.shims.HadoopShimsSecure$CombineFileRecordReader.doNextWithExceptionHandler(HadoopShimsSecure.java:304)
>   at 
> org.apache.hadoop.hive.shims.HadoopShimsSecure$CombineFileRecordReader.next(HadoopShimsSecure.java:220)
>   at 
> org.apache.hadoop.mapred.MapTask$TrackedRecordReader.moveToNext(MapTask.java:199)
>   at 
> org.apache.hadoop.mapred.MapTask$TrackedRecordReader.next(MapTask.java:185)
>   at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:52)
>   at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:433)
>   at org.apache.hadoop.mapred.MapTask.run(MapTask.java:344)
>   at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:162)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:415)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1491)
>   at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:157)
> Caused by: java.io.IOException: java.io.EOFException
>   at 
> org.apache.hadoop.hive.io.HiveIOExceptionHandlerChain.handleRecordReaderNextException(HiveIOExceptionHandlerChain.java:121)
>   at 
> org.apache.hadoop.hive.io.HiveIOExceptionHandlerUtil.handleRecordReaderNextException(HiveIOExceptionHandlerUtil.java:77)
>   at 
> org.apache.hadoop.hive.ql.io.HiveContextAwareRecordReader.doNext(HiveContextAwareRecordReader.java:276)
>   at 
> org.apache.hadoop.hive.ql.io.CombineHiveRecordReader.doNext(CombineHiveRecordReader.java:101)
>   at 
> org.apache.hadoop.hive.ql.io.CombineHiveRecordReader.doNext(CombineHiveRecordReader.java:41)
>   at 
> org.apache.hadoop.hive.ql.io.HiveContextAwareRecordReader.next(HiveContextAwareRecordReader.java:108)
>   at 
> org.apache.hadoop.hive.shims.HadoopShimsSecure$CombineFileRecordReader.doNextWithExceptionHandler(HadoopShimsSecure.java:302)
>   ... 11 more
> Caused by: java.io.EOFException
>   at java.io.DataInputStream.readInt(DataInputStream.java:392)
>   at 
> org.apache.hadoop.io.SequenceFile$Reader.readRecordLength(SequenceFile.java:2332)
>   at org.apache.hadoop.io.SequenceFile$Reader.next(SequenceFile.java:2363)
>   at org.apache.hadoop.io.SequenceFile$Reader.next(SequenceFile.java:2500)
>   at 
> org.apache.hadoop.mapred.Sequence

[jira] [Updated] (HADOOP-10321) TestCompositeService should cover all enumerations of adding a service to a parent service

2014-10-14 Thread Ray Chiang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10321?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ray Chiang updated HADOOP-10321:

Attachment: HADOOP-10321-04.patch

Submitting a duplicate patch against the latest trunk.  The javadoc warnings for 
JUnit/@Test/timeout are apparently well known.

> TestCompositeService should cover all enumerations of adding a service to a 
> parent service
> --
>
> Key: HADOOP-10321
> URL: https://issues.apache.org/jira/browse/HADOOP-10321
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.3.0
>Reporter: Karthik Kambatla
>Assignee: Ray Chiang
>  Labels: supportability, test
> Attachments: HADOOP-10321-02.patch, HADOOP-10321-03.patch, 
> HADOOP-10321-04.patch, HADOOP10321-01.patch
>
>
> HADOOP-10085 fixes some synchronization issues in 
> CompositeService#addService(). The tests should cover all cases. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-10321) TestCompositeService should cover all enumerations of adding a service to a parent service

2014-10-14 Thread Ray Chiang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10321?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ray Chiang updated HADOOP-10321:

Status: Patch Available  (was: Open)

Submitting for testing.

> TestCompositeService should cover all enumerations of adding a service to a 
> parent service
> --
>
> Key: HADOOP-10321
> URL: https://issues.apache.org/jira/browse/HADOOP-10321
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.3.0
>Reporter: Karthik Kambatla
>Assignee: Ray Chiang
>  Labels: supportability, test
> Attachments: HADOOP-10321-02.patch, HADOOP-10321-03.patch, 
> HADOOP-10321-04.patch, HADOOP10321-01.patch
>
>
> HADOOP-10085 fixes some synchronization issues in 
> CompositeService#addService(). The tests should cover all cases. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11199) Configuration should be able to set empty value for property

2014-10-14 Thread Ray Chiang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11199?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14171685#comment-14171685
 ] 

Ray Chiang commented on HADOOP-11199:
-

I believe it's intentional.  Otherwise, it would be impossible to document 
properties in the .xml file without accidentally setting a value.  Also, lots 
of code would have to be added to check for null/empty values (spaces have 
wreaked havoc with property values before).

I made a change in YARN-2284 (patch submitted, but not committed) that makes 
allowing empty values configurable (for unit testing).

> Configuration should be able to set empty value for property
> 
>
> Key: HADOOP-11199
> URL: https://issues.apache.org/jira/browse/HADOOP-11199
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: conf
>Reporter: Wangda Tan
>
> Currently in hadoop.common.conf.Configuration, when you specify a XML like 
> this:
> {code}
> <configuration>
>   <property>
>     <name>conf.name</name>
>     <value></value>
>   </property>
> </configuration>
> {code}
> When you try to get conf.name, the returned value is null instead of 
> an empty string.
> Test code for this,
> {code}
> import java.io.ByteArrayInputStream;
> import org.apache.hadoop.conf.Configuration;
> public class HadoopConfigurationEmptyTest {
>   public static void main(String[] args) {
>     Configuration conf = new Configuration(false);
>     ByteArrayInputStream bais =
>         new ByteArrayInputStream(("<configuration><property>"
>             + "<name>conf.name</name><value></value>"
>             + "</property></configuration>").getBytes());
>     conf.addResource(bais);
>     System.out.println(conf.get("conf.name"));
>   }
> }
> {code}
> Is this intentional, or is it a behavior that should be fixed?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11181) o.a.h.security.token.delegation.DelegationTokenManager should be more generalized to handle other DelegationTokenIdentifier

2014-10-14 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11181?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14171353#comment-14171353
 ] 

Hudson commented on HADOOP-11181:
-

FAILURE: Integrated in Hadoop-trunk-Commit #6260 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/6260/])
HADOOP-11181. Generalized o.a.h.s.t.d.DelegationTokenManager to handle all 
sub-classes of AbstractDelegationTokenIdentifier. Contributed by Zhijie Shen. 
(zjshen: rev cdce88376a60918dfe2f3bcd82a7666d74992a19)
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/token/delegation/web/DelegationTokenManager.java
* 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/security/token/delegation/web/TestDelegationTokenAuthenticationHandlerWithMocks.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/TestRMWebServicesDelegationTokens.java
* 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/security/token/delegation/TestZKDelegationTokenSecretManager.java
* hadoop-common-project/hadoop-common/CHANGES.txt
* 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/security/token/delegation/web/TestDelegationTokenManager.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/token/delegation/AbstractDelegationTokenIdentifier.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/token/delegation/web/DelegationTokenAuthenticationHandler.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/token/delegation/AbstractDelegationTokenSecretManager.java


> o.a.h.security.token.delegation.DelegationTokenManager should be more 
> generalized to handle other DelegationTokenIdentifier
> ---
>
> Key: HADOOP-11181
> URL: https://issues.apache.org/jira/browse/HADOOP-11181
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Reporter: Zhijie Shen
>Assignee: Zhijie Shen
> Fix For: 2.6.0
>
> Attachments: HADOOP-11181.1.patch, HADOOP-11181.2.patch, 
> HADOOP-11181.3.patch, HADOOP-11181.4.patch, HADOOP-11181.5.patch
>
>
> While DelegationTokenManager can set an external secretManager, it has the 
> assumption that the token is going to be 
> o.a.h.security.token.delegation.DelegationTokenIdentifier, and uses the 
> DelegationTokenIdentifier method to decode a token. 
> {code}
>   @SuppressWarnings("unchecked")
>   public UserGroupInformation verifyToken(Token
>   token) throws IOException {
> ByteArrayInputStream buf = new 
> ByteArrayInputStream(token.getIdentifier());
> DataInputStream dis = new DataInputStream(buf);
> DelegationTokenIdentifier id = new DelegationTokenIdentifier(tokenKind);
> id.readFields(dis);
> dis.close();
> secretManager.verifyToken(id, token.getPassword());
> return id.getUser();
>   }
> {code}
> It's not going to work if the token kind is other than 
> web.DelegationTokenIdentifier. For example, the RM wants to reuse it but hook it 
> up to RMDelegationTokenSecretManager and RMDelegationTokenIdentifier, which has 
> a customized way to decode a token.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11181) o.a.h.security.token.delegation.DelegationTokenManager should be more generalized to handle other DelegationTokenIdentifier

2014-10-14 Thread Zhijie Shen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11181?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhijie Shen updated HADOOP-11181:
-
   Resolution: Fixed
Fix Version/s: 2.6.0
   Status: Resolved  (was: Patch Available)

Committed the patch to trunk, branch-2 and branch-2.6. Thanks for the review, Jing!

> o.a.h.security.token.delegation.DelegationTokenManager should be more 
> generalized to handle other DelegationTokenIdentifier
> ---
>
> Key: HADOOP-11181
> URL: https://issues.apache.org/jira/browse/HADOOP-11181
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Reporter: Zhijie Shen
>Assignee: Zhijie Shen
> Fix For: 2.6.0
>
> Attachments: HADOOP-11181.1.patch, HADOOP-11181.2.patch, 
> HADOOP-11181.3.patch, HADOOP-11181.4.patch, HADOOP-11181.5.patch
>
>
> While DelegationTokenManager can set an external secretManager, it has the 
> assumption that the token is going to be 
> o.a.h.security.token.delegation.DelegationTokenIdentifier, and uses the 
> DelegationTokenIdentifier method to decode a token. 
> {code}
>   @SuppressWarnings("unchecked")
>   public UserGroupInformation verifyToken(Token
>   token) throws IOException {
> ByteArrayInputStream buf = new 
> ByteArrayInputStream(token.getIdentifier());
> DataInputStream dis = new DataInputStream(buf);
> DelegationTokenIdentifier id = new DelegationTokenIdentifier(tokenKind);
> id.readFields(dis);
> dis.close();
> secretManager.verifyToken(id, token.getPassword());
> return id.getUser();
>   }
> {code}
> It's not going to work if the token kind is other than 
> web.DelegationTokenIdentifier. For example, the RM wants to reuse it but hook it 
> up to RMDelegationTokenSecretManager and RMDelegationTokenIdentifier, which has 
> a customized way to decode a token.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11182) GraphiteSink emits wrong timestamps

2014-10-14 Thread Ravi Prakash (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11182?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14171347#comment-14171347
 ] 

Ravi Prakash commented on HADOOP-11182:
---

Hi Sascha! Thanks a lot for all your effort! 
That -1 release audit warning is probably because you haven't included the 
Apache license at the top of the new file you have introduced 
(TestGraphiteSink.java). By the way, there already was a file 
(TestGraphiteMetrics.java). Could the test not have been added to that? In 
fact you are right, this patch was probably small enough that fixing the test 
which failed (TestMetricsSystemImpl) in the earlier patch would have been 
enough.
You are right about the 404. As a workaround you can check the console output 
or run test-patch yourself. The findbugs warning seems to be coming from a file 
not in this patch so we shouldn't have to worry about it.

I can't +1 a patch that I have uploaded. 

> GraphiteSink emits wrong timestamps
> ---
>
> Key: HADOOP-11182
> URL: https://issues.apache.org/jira/browse/HADOOP-11182
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.5.0, 2.5.1
>Reporter: Sascha Coenen
> Attachments: HADOOP-11182-GraphiteSink-v1.patch, HADOOP-11182-v2.patch
>
>
> the org.apache.hadoop.metrics2.sink.GraphiteSink class emits metrics at the 
> configured time period, but the timestamps written only change every 128 
> seconds, even if the configured time period in the configuration file is much 
> shorter.
> This is due to a bug in line 93:
> {code:java}
> 092// Round the timestamp to second as Graphite accepts it in 
> such format.
> 093int timestamp = Math.round(record.timestamp() / 1000.0f);
> {code}
> The timestamp property is a long and is divided by a float, which yields a 
> result that is not precise enough and produces the same value for 
> timestamps that lie up to 128 seconds apart. Also, the result is then written 
> into an int variable.
> One solution would be to divide by 1000.0d, but the best fix would be to not 
> convert to floating point in the first place. Instead, one could 
> replace the line with the following:
> {code:java}
>long timestamp = record.timestamp() / 1000L;
> {code}
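
For reference, a minimal standalone sketch (assumed sample values; not the 
GraphiteSink code) showing the precision loss: a float's 24-bit mantissa makes the 
rounded second values step in increments of roughly 128, while plain long division 
keeps one-second resolution.

{code:java}
// Demonstrates why Math.round(millis / 1000.0f) collapses nearby timestamps:
// around 1.4e9 seconds, adjacent float values are ~128 apart.
public class FloatTimestampSketch {
    public static void main(String[] args) {
        long t1 = 1413312000000L;               // an epoch-millisecond timestamp (assumed)
        long t2 = t1 + 60000L;                  // 60 seconds later
        int buggy1 = Math.round(t1 / 1000.0f);  // float division, as in the bug
        int buggy2 = Math.round(t2 / 1000.0f);  // may well equal buggy1
        long fixed1 = t1 / 1000L;               // integer division, as in the fix
        long fixed2 = t2 / 1000L;               // differs from fixed1 by 60
        System.out.println(buggy1 + " " + buggy2 + " " + fixed1 + " " + fixed2);
    }
}
{code}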



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11201) Hadoop Archives should support globs resolving to files

2014-10-14 Thread Gera Shegalov (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11201?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gera Shegalov updated HADOOP-11201:
---
Priority: Blocker  (was: Major)

> Hadoop Archives should support globs resolving to files
> ---
>
> Key: HADOOP-11201
> URL: https://issues.apache.org/jira/browse/HADOOP-11201
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: tools
>Affects Versions: 2.0.5-alpha
>Reporter: Gera Shegalov
>Assignee: Gera Shegalov
>Priority: Blocker
> Attachments: HADOOP-11201.v01.patch
>
>
> Consider the following scenario:
> {code}
> $ hadoop fs -ls /tmp/harsrc/dir2/dir3
> Found 5 items
> -rw-r--r--   1 blah blah  0 2014-10-13 20:59 
> /tmp/harsrc/dir2/dir3/file31
> -rw-r--r--   1 blah blah  0 2014-10-14 01:51 
> /tmp/harsrc/dir2/dir3/file32
> -rw-r--r--   1 blah blah  0 2014-10-14 01:51 
> /tmp/harsrc/dir2/dir3/file33
> -rw-r--r--   1 blah blah  0 2014-10-14 01:51 
> /tmp/harsrc/dir2/dir3/file34
> -rw-r--r--   1 blah blah  0 2014-10-14 01:51 
> /tmp/harsrc/dir2/dir3/file35
> {code}
> Archive 'dir3/file3*':
> {code}
> $ hadoop archive -Dmapreduce.framework.name=local -archiveName fileStar.har 
> -p /tmp/harsrc 'dir2/dir3/file*' /tmp/hardst_local
> $ hadoop fs -ls -R har:/tmp/hardst_local/fileStar.har
> drwxr-xr-x   - blah blah  0 2014-10-13 22:32 
> har:///tmp/hardst_local/fileStar.har/dir2
> {code}
> Archiving dir3 (a directory), which is equivalent to the above, works.
> {code}
> $ hadoop archive -Dmapreduce.framework.name=local -archiveName dir3.har -p 
> /tmp/harsrc 'dir2/dir3' /tmp/hardst_local
> $ hadoop fs -ls -R har:/tmp/hardst_local/dir3.har
> 14/10/14 02:06:33 WARN util.NativeCodeLoader: Unable to load native-hadoop 
> library for your platform... using builtin-java classes where applicable
> drwxr-xr-x   - blah blah  0 2014-10-13 22:32 
> har:///tmp/hardst_local/dir3.har/dir2
> drwxr-xr-x   - blah blah  0 2014-10-14 01:51 
> har:///tmp/hardst_local/dir3.har/dir2/dir3
> -rw-r--r--   1 blah blah  0 2014-10-13 20:59 
> har:///tmp/hardst_local/dir3.har/dir2/dir3/file31
> -rw-r--r--   1 blah blah  0 2014-10-14 01:51 
> har:///tmp/hardst_local/dir3.har/dir2/dir3/file32
> -rw-r--r--   1 blah blah  0 2014-10-14 01:51 
> har:///tmp/hardst_local/dir3.har/dir2/dir3/file33
> -rw-r--r--   1 blah blah  0 2014-10-14 01:51 
> har:///tmp/hardst_local/dir3.har/dir2/dir3/file34
> -rw-r--r--   1 blah blah  0 2014-10-14 01:51 
> har:///tmp/hardst_local/dir3.har/dir2/dir3/file35
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11181) o.a.h.security.token.delegation.DelegationTokenManager should be more generalized to handle other DelegationTokenIdentifier

2014-10-14 Thread Zhijie Shen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11181?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhijie Shen updated HADOOP-11181:
-
Hadoop Flags: Reviewed

> o.a.h.security.token.delegation.DelegationTokenManager should be more 
> generalized to handle other DelegationTokenIdentifier
> ---
>
> Key: HADOOP-11181
> URL: https://issues.apache.org/jira/browse/HADOOP-11181
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Reporter: Zhijie Shen
>Assignee: Zhijie Shen
> Attachments: HADOOP-11181.1.patch, HADOOP-11181.2.patch, 
> HADOOP-11181.3.patch, HADOOP-11181.4.patch, HADOOP-11181.5.patch
>
>
> While DelegationTokenManager can set an external secretManager, it has the 
> assumption that the token is going to be 
> o.a.h.security.token.delegation.DelegationTokenIdentifier, and uses the 
> DelegationTokenIdentifier method to decode a token. 
> {code}
>   @SuppressWarnings("unchecked")
>   public UserGroupInformation verifyToken(Token
>   token) throws IOException {
> ByteArrayInputStream buf = new 
> ByteArrayInputStream(token.getIdentifier());
> DataInputStream dis = new DataInputStream(buf);
> DelegationTokenIdentifier id = new DelegationTokenIdentifier(tokenKind);
> id.readFields(dis);
> dis.close();
> secretManager.verifyToken(id, token.getPassword());
> return id.getUser();
>   }
> {code}
> It's not going to work if the token kind is other than 
> web.DelegationTokenIdentifier. For example, the RM wants to reuse it but hook it 
> up to RMDelegationTokenSecretManager and RMDelegationTokenIdentifier, which has 
> a customized way to decode a token.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11181) o.a.h.security.token.delegation.DelegationTokenManager should be more generalized to handle other DelegationTokenIdentifier

2014-10-14 Thread Zhijie Shen (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11181?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14171329#comment-14171329
 ] 

Zhijie Shen commented on HADOOP-11181:
--

Thanks for the review, [~jingzhao]. Will commit the patch.

> o.a.h.security.token.delegation.DelegationTokenManager should be more 
> generalized to handle other DelegationTokenIdentifier
> ---
>
> Key: HADOOP-11181
> URL: https://issues.apache.org/jira/browse/HADOOP-11181
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Reporter: Zhijie Shen
>Assignee: Zhijie Shen
> Attachments: HADOOP-11181.1.patch, HADOOP-11181.2.patch, 
> HADOOP-11181.3.patch, HADOOP-11181.4.patch, HADOOP-11181.5.patch
>
>
> While DelegationTokenManager can set an external secretManager, it has the 
> assumption that the token is going to be 
> o.a.h.security.token.delegation.DelegationTokenIdentifier, and uses the 
> DelegationTokenIdentifier method to decode a token. 
> {code}
>   @SuppressWarnings("unchecked")
>   public UserGroupInformation verifyToken(Token
>   token) throws IOException {
> ByteArrayInputStream buf = new 
> ByteArrayInputStream(token.getIdentifier());
> DataInputStream dis = new DataInputStream(buf);
> DelegationTokenIdentifier id = new DelegationTokenIdentifier(tokenKind);
> id.readFields(dis);
> dis.close();
> secretManager.verifyToken(id, token.getPassword());
> return id.getUser();
>   }
> {code}
> It's not going to work if the token kind is other than 
> web.DelegationTokenIdentifier. For example, the RM wants to reuse it but hook it 
> up to RMDelegationTokenSecretManager and RMDelegationTokenIdentifier, which has 
> a customized way to decode a token.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11181) o.a.h.security.token.delegation.DelegationTokenManager should be more generalized to handle other DelegationTokenIdentifier

2014-10-14 Thread Jing Zhao (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11181?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14171294#comment-14171294
 ] 

Jing Zhao commented on HADOOP-11181:


+1 for the latest patch. Thanks for working on this, Zhijie!

> o.a.h.security.token.delegation.DelegationTokenManager should be more 
> generalized to handle other DelegationTokenIdentifier
> ---
>
> Key: HADOOP-11181
> URL: https://issues.apache.org/jira/browse/HADOOP-11181
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Reporter: Zhijie Shen
>Assignee: Zhijie Shen
> Attachments: HADOOP-11181.1.patch, HADOOP-11181.2.patch, 
> HADOOP-11181.3.patch, HADOOP-11181.4.patch, HADOOP-11181.5.patch
>
>
> While DelegationTokenManager can set an external secretManager, it has the 
> assumption that the token is going to be 
> o.a.h.security.token.delegation.DelegationTokenIdentifier, and uses the 
> DelegationTokenIdentifier method to decode a token. 
> {code}
>   @SuppressWarnings("unchecked")
>   public UserGroupInformation verifyToken(Token
>   token) throws IOException {
> ByteArrayInputStream buf = new 
> ByteArrayInputStream(token.getIdentifier());
> DataInputStream dis = new DataInputStream(buf);
> DelegationTokenIdentifier id = new DelegationTokenIdentifier(tokenKind);
> id.readFields(dis);
> dis.close();
> secretManager.verifyToken(id, token.getPassword());
> return id.getUser();
>   }
> {code}
> It's not going to work if the token kind is other than 
> web.DelegationTokenIdentifier. For example, the RM wants to reuse it but hook it 
> up to RMDelegationTokenSecretManager and RMDelegationTokenIdentifier, which has 
> a customized way to decode a token.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11181) o.a.h.security.token.delegation.DelegationTokenManager should be more generalized to handle other DelegationTokenIdentifier

2014-10-14 Thread Zhijie Shen (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11181?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14171283#comment-14171283
 ] 

Zhijie Shen commented on HADOOP-11181:
--

HADOOP-11122 is trying to fix the synchronization findbugs warning in 
Abstract/ZKDelegationTokenSecretManager; one of the IS2 warnings should be 
gone after it, and the other one is not related to this patch either.

> o.a.h.security.token.delegation.DelegationTokenManager should be more 
> generalized to handle other DelegationTokenIdentifier
> ---
>
> Key: HADOOP-11181
> URL: https://issues.apache.org/jira/browse/HADOOP-11181
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Reporter: Zhijie Shen
>Assignee: Zhijie Shen
> Attachments: HADOOP-11181.1.patch, HADOOP-11181.2.patch, 
> HADOOP-11181.3.patch, HADOOP-11181.4.patch, HADOOP-11181.5.patch
>
>
> While DelegationTokenManager can set an external secretManager, it has the 
> assumption that the token is going to be 
> o.a.h.security.token.delegation.DelegationTokenIdentifier, and uses the 
> DelegationTokenIdentifier method to decode a token. 
> {code}
>   @SuppressWarnings("unchecked")
>   public UserGroupInformation verifyToken(Token
>   token) throws IOException {
> ByteArrayInputStream buf = new 
> ByteArrayInputStream(token.getIdentifier());
> DataInputStream dis = new DataInputStream(buf);
> DelegationTokenIdentifier id = new DelegationTokenIdentifier(tokenKind);
> id.readFields(dis);
> dis.close();
> secretManager.verifyToken(id, token.getPassword());
> return id.getUser();
>   }
> {code}
> It's not going to work if the token kind is other than 
> web.DelegationTokenIdentifier. For example, the RM wants to reuse it but hook it 
> up to RMDelegationTokenSecretManager and RMDelegationTokenIdentifier, which has 
> a customized way to decode a token.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11181) o.a.h.security.token.delegation.DelegationTokenManager should be more generalized to handle other DelegationTokenIdentifier

2014-10-14 Thread Zhijie Shen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11181?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhijie Shen updated HADOOP-11181:
-
Target Version/s: 2.6.0

> o.a.h.security.token.delegation.DelegationTokenManager should be more 
> generalized to handle other DelegationTokenIdentifier
> ---
>
> Key: HADOOP-11181
> URL: https://issues.apache.org/jira/browse/HADOOP-11181
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Reporter: Zhijie Shen
>Assignee: Zhijie Shen
> Attachments: HADOOP-11181.1.patch, HADOOP-11181.2.patch, 
> HADOOP-11181.3.patch, HADOOP-11181.4.patch, HADOOP-11181.5.patch
>
>
> While DelegationTokenManager can set an external secretManager, it has the 
> assumption that the token is going to be 
> o.a.h.security.token.delegation.DelegationTokenIdentifier, and uses the 
> DelegationTokenIdentifier method to decode a token. 
> {code}
>   @SuppressWarnings("unchecked")
>   public UserGroupInformation verifyToken(Token
>   token) throws IOException {
> ByteArrayInputStream buf = new 
> ByteArrayInputStream(token.getIdentifier());
> DataInputStream dis = new DataInputStream(buf);
> DelegationTokenIdentifier id = new DelegationTokenIdentifier(tokenKind);
> id.readFields(dis);
> dis.close();
> secretManager.verifyToken(id, token.getPassword());
> return id.getUser();
>   }
> {code}
> It's not going to work if the token kind is other than 
> web.DelegationTokenIdentifier. For example, the RM wants to reuse it but hook it 
> up to RMDelegationTokenSecretManager and RMDelegationTokenIdentifier, which has 
> a customized way to decode a token.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11195) Move Id-Name mapping in NFS to the hadoop-common area for better maintenance

2014-10-14 Thread Yongjun Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11195?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14171246#comment-14171246
 ] 

Yongjun Zhang commented on HADOOP-11195:


Hi [~brandonli],

Thanks for looking into it! I guess you meant IdMappingConstant.java? I 
purposely left the same names as in the original NFS code for backward 
compatibility. E.g., the following properties all carry the "nfs" prefix, and 
changing them would break compatibility:
{code}
  public final static String NFS_USERGROUP_UPDATE_MILLIS_KEY = 
"nfs.usergroup.update.millis";
  // Used for finding the configured static mapping file.
  public static final String NFS_STATIC_MAPPING_FILE_KEY = 
"nfs.static.mapping.file";
  public static final String NFS_STATIC_MAPPING_FILE_DEFAULT = "/etc/nfs.map";
{code}

I should be able to drop the NFS prefix in the following:
{code}
public final static long NFS_USERGROUP_UPDATE_MILLIS_DEFAULT = 15 * 60 * 1000; 
// ms
public final static long NFS_USERGROUP_UPDATE_MILLIS_MIN = 1 * 60 * 1000; // ms
{code}

In future work, we can consider deprecating the property names and introducing new 
ones, but I think it'd be nice for the initial merge work in HADOOP-11195 to 
maintain backward compatibility.

Is this what you meant?  Thanks.




> Move Id-Name mapping in NFS to the hadoop-common area for better maintenance
> 
>
> Key: HADOOP-11195
> URL: https://issues.apache.org/jira/browse/HADOOP-11195
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Yongjun Zhang
>Assignee: Yongjun Zhang
> Attachments: HADOOP-11195.001.patch
>
>
> Per [~aw]'s suggestion in HDFS-7146, creating this jira to move the id-name 
> mapping implementation (IdUserGroup.java) to the framework that caches user 
> and group info in the hadoop-common area 
> (hadoop-common/src/main/java/org/apache/hadoop/security). 
> Thanks [~brandonli] and [~aw] for the review and discussion in HDFS-7146.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11176) KMSClientProvider authentication fails when both currentUgi and loginUgi are a proxied user

2014-10-14 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11176?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14171046#comment-14171046
 ] 

Hudson commented on HADOOP-11176:
-

FAILURE: Integrated in Hadoop-Mapreduce-trunk #1926 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1926/])
HADOOP-11176. KMSClientProvider authentication fails when both currentUgi and 
loginUgi are a proxied user. Contributed by Arun Suresh. (atm: rev 
0e57aa3bf689374736939300d8f3525ec38bead7)
* 
hadoop-common-project/hadoop-kms/src/test/java/org/apache/hadoop/crypto/key/kms/server/TestKMS.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/key/kms/KMSClientProvider.java
* hadoop-common-project/hadoop-common/CHANGES.txt


> KMSClientProvider authentication fails when both currentUgi and loginUgi are 
> a proxied user
> ---
>
> Key: HADOOP-11176
> URL: https://issues.apache.org/jira/browse/HADOOP-11176
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Arun Suresh
>Assignee: Arun Suresh
>  Labels: encryption
> Fix For: 2.6.0
>
> Attachments: HADOOP-11176.1.patch, HADOOP-11176.2.patch, 
> HADOOP-11176.3.patch
>
>
> In a secure environment with Kerberos, when the KMSClientProvider instance 
> is created in the context of a proxied user, the initial SPNEGO handshake is 
> made with the currentUser (the proxied user) as the principal. This will 
> fail, since the proxied user is not logged in.
> The handshake must be done using the real user.
>  
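
For illustration, a minimal sketch (hedged; not the actual HADOOP-11176 patch) of 
running the handshake-sensitive work as the real user when the current UGI is a 
proxy user, using the existing UserGroupInformation API:

{code:java}
import java.security.PrivilegedExceptionAction;
import org.apache.hadoop.security.UserGroupInformation;

public class RealUserHandshakeSketch {
    public static void main(String[] args) throws Exception {
        UserGroupInformation currentUgi = UserGroupInformation.getCurrentUser();
        // A proxy user has no Kerberos credentials of its own; they live with the real user.
        UserGroupInformation actualUgi =
            currentUgi.getRealUser() != null ? currentUgi.getRealUser() : currentUgi;
        actualUgi.doAs(new PrivilegedExceptionAction<Void>() {
            @Override
            public Void run() throws Exception {
                // open the authenticated connection here, so the SPNEGO negotiation
                // is backed by the logged-in (real) user's credentials
                return null;
            }
        });
    }
}
{code}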



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11198) Fix typo in javadoc for FileSystem#listStatus()

2014-10-14 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11198?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14171045#comment-14171045
 ] 

Hudson commented on HADOOP-11198:
-

FAILURE: Integrated in Hadoop-Mapreduce-trunk #1926 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1926/])
HADOOP-11198. Fix typo in javadoc for FileSystem#listStatus(). Contributed by 
Li Lu. (wheat9: rev 5faaba0bd09db4ddcf5c1824ad7abb18b1489bbb)
* hadoop-common-project/hadoop-common/CHANGES.txt
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/ChecksumFileSystem.java


> Fix typo in javadoc for FileSystem#listStatus()
> ---
>
> Key: HADOOP-11198
> URL: https://issues.apache.org/jira/browse/HADOOP-11198
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Ted Yu
>Assignee: Li Lu
>Priority: Minor
>  Labels: newbie
> Fix For: 2.6.0
>
> Attachments: HADOOP-11198-101314.patch
>
>
> {code}
>* @return the statuses of the files/directories in the given patch
> {code}
> 'patch' should be path



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11198) Fix typo in javadoc for FileSystem#listStatus()

2014-10-14 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11198?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14170965#comment-14170965
 ] 

Hudson commented on HADOOP-11198:
-

FAILURE: Integrated in Hadoop-Hdfs-trunk #1901 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1901/])
HADOOP-11198. Fix typo in javadoc for FileSystem#listStatus(). Contributed by 
Li Lu. (wheat9: rev 5faaba0bd09db4ddcf5c1824ad7abb18b1489bbb)
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/ChecksumFileSystem.java
* hadoop-common-project/hadoop-common/CHANGES.txt


> Fix typo in javadoc for FileSystem#listStatus()
> ---
>
> Key: HADOOP-11198
> URL: https://issues.apache.org/jira/browse/HADOOP-11198
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Ted Yu
>Assignee: Li Lu
>Priority: Minor
>  Labels: newbie
> Fix For: 2.6.0
>
> Attachments: HADOOP-11198-101314.patch
>
>
> {code}
>* @return the statuses of the files/directories in the given patch
> {code}
> 'patch' should be path



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11176) KMSClientProvider authentication fails when both currentUgi and loginUgi are a proxied user

2014-10-14 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11176?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14170966#comment-14170966
 ] 

Hudson commented on HADOOP-11176:
-

FAILURE: Integrated in Hadoop-Hdfs-trunk #1901 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1901/])
HADOOP-11176. KMSClientProvider authentication fails when both currentUgi and 
loginUgi are a proxied user. Contributed by Arun Suresh. (atm: rev 
0e57aa3bf689374736939300d8f3525ec38bead7)
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/key/kms/KMSClientProvider.java
* hadoop-common-project/hadoop-common/CHANGES.txt
* 
hadoop-common-project/hadoop-kms/src/test/java/org/apache/hadoop/crypto/key/kms/server/TestKMS.java


> KMSClientProvider authentication fails when both currentUgi and loginUgi are 
> a proxied user
> ---
>
> Key: HADOOP-11176
> URL: https://issues.apache.org/jira/browse/HADOOP-11176
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Arun Suresh
>Assignee: Arun Suresh
>  Labels: encryption
> Fix For: 2.6.0
>
> Attachments: HADOOP-11176.1.patch, HADOOP-11176.2.patch, 
> HADOOP-11176.3.patch
>
>
> In a secure environment with Kerberos, when the KMSClientProvider instance 
> is created in the context of a proxied user, the initial SPNEGO handshake is 
> made with the currentUser (the proxied user) as the principal. This will 
> fail, since the proxied user is not logged in.
> The handshake must be done using the real user.
>  



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-10846) DataChecksum#calculateChunkedSums not working for PPC when buffers not backed by array

2014-10-14 Thread Ayappan (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10846?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14170905#comment-14170905
 ] 

Ayappan commented on HADOOP-10846:
--

This patch resolves checksum errors in the existing tests, so no new tests are 
needed for it.
The findbugs warnings are not related to this patch.



> DataChecksum#calculateChunkedSums not working for PPC when buffers not backed 
> by array
> --
>
> Key: HADOOP-10846
> URL: https://issues.apache.org/jira/browse/HADOOP-10846
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: util
>Affects Versions: 2.2.0, 2.3.0, 2.4.0, 2.4.1
>Reporter: Jinghui Wang
>Assignee: Jinghui Wang
> Attachments: HADOOP-10846-v1.patch, HADOOP-10846-v2.patch, 
> HADOOP-10846.patch
>
>
> Got the following exception when running Hadoop on PowerPC. The 
> implementation for computing checksums does not work when the data buffer and 
> checksum buffer are not backed by arrays.
> 13/09/16 04:06:57 ERROR security.UserGroupInformation: 
> PriviledgedActionException as:biadmin (auth:SIMPLE) 
> cause:org.apache.hadoop.ipc.RemoteException(java.io.IOException): 
> org.apache.hadoop.fs.ChecksumException: Checksum error
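
For illustration, a generic sketch (standard JDK CRC32; not the actual 
HADOOP-10846 patch) of computing a checksum over a ByteBuffer whether or not it is 
backed by an array:

{code:java}
import java.nio.ByteBuffer;
import java.util.zip.CRC32;

public class BufferChecksumSketch {
    static long checksum(ByteBuffer data) {
        CRC32 crc = new CRC32();
        if (data.hasArray()) {
            // array-backed buffer: feed the backing array directly
            crc.update(data.array(), data.arrayOffset() + data.position(), data.remaining());
        } else {
            // direct (non-array-backed) buffer: copy the bytes out first
            byte[] tmp = new byte[data.remaining()];
            data.duplicate().get(tmp);
            crc.update(tmp, 0, tmp.length);
        }
        return crc.getValue();
    }

    public static void main(String[] args) {
        ByteBuffer direct = ByteBuffer.allocateDirect(16);
        for (int i = 0; i < 16; i++) {
            direct.put((byte) i);
        }
        direct.flip();
        System.out.println(Long.toHexString(checksum(direct)));
    }
}
{code}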



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-10846) DataChecksum#calculateChunkedSums not working for PPC when buffers not backed by array

2014-10-14 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10846?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14170881#comment-14170881
 ] 

Hadoop QA commented on HADOOP-10846:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12674751/HADOOP-10846-v2.patch
  against trunk revision 5faaba0.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:red}-1 findbugs{color}.  The patch appears to introduce 2 new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-common-project/hadoop-common.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/4924//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/4924//artifact/patchprocess/newPatchFindbugsWarningshadoop-common.html
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/4924//console

This message is automatically generated.

> DataChecksum#calculateChunkedSums not working for PPC when buffers not backed 
> by array
> --
>
> Key: HADOOP-10846
> URL: https://issues.apache.org/jira/browse/HADOOP-10846
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: util
>Affects Versions: 2.2.0, 2.3.0, 2.4.0, 2.4.1
>Reporter: Jinghui Wang
>Assignee: Jinghui Wang
> Attachments: HADOOP-10846-v1.patch, HADOOP-10846-v2.patch, 
> HADOOP-10846.patch
>
>
> Got the following exception when running Hadoop on PowerPC. The 
> implementation for computing checksums does not work when the data buffer and 
> checksum buffer are not backed by arrays.
> 13/09/16 04:06:57 ERROR security.UserGroupInformation: 
> PriviledgedActionException as:biadmin (auth:SIMPLE) 
> cause:org.apache.hadoop.ipc.RemoteException(java.io.IOException): 
> org.apache.hadoop.fs.ChecksumException: Checksum error



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-10846) DataChecksum#calculateChunkedSums not working for PPC when buffers not backed by array

2014-10-14 Thread Ayappan (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10846?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14170842#comment-14170842
 ] 

Ayappan commented on HADOOP-10846:
--

Sorry, the newly attached patch is "HADOOP-10846-v2.patch".

> DataChecksum#calculateChunkedSums not working for PPC when buffers not backed 
> by array
> --
>
> Key: HADOOP-10846
> URL: https://issues.apache.org/jira/browse/HADOOP-10846
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: util
>Affects Versions: 2.2.0, 2.3.0, 2.4.0, 2.4.1
>Reporter: Jinghui Wang
>Assignee: Jinghui Wang
> Attachments: HADOOP-10846-v1.patch, HADOOP-10846-v2.patch, 
> HADOOP-10846.patch
>
>
> Got the following exception when running Hadoop on Power PC. The 
> implementation for computing checksums does not work when the data buffer 
> and checksum buffer are not backed by arrays.
> 13/09/16 04:06:57 ERROR security.UserGroupInformation: 
> PriviledgedActionException as:biadmin (auth:SIMPLE) 
> cause:org.apache.hadoop.ipc.RemoteException(java.io.IOException): 
> org.apache.hadoop.fs.ChecksumException: Checksum error



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-10846) DataChecksum#calculateChunkedSums not working for PPC when buffers not backed by array

2014-10-14 Thread Ayappan (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10846?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14170838#comment-14170838
 ] 

Ayappan commented on HADOOP-10846:
--

The last patch contains some extra spaces at the line ends, which cause it to 
fail to apply.
I have attached a new, fixed patch, "HADOOP-10846-v3.patch".

> DataChecksum#calculateChunkedSums not working for PPC when buffers not backed 
> by array
> --
>
> Key: HADOOP-10846
> URL: https://issues.apache.org/jira/browse/HADOOP-10846
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: util
>Affects Versions: 2.2.0, 2.3.0, 2.4.0, 2.4.1
>Reporter: Jinghui Wang
>Assignee: Jinghui Wang
> Attachments: HADOOP-10846-v1.patch, HADOOP-10846-v2.patch, 
> HADOOP-10846.patch
>
>
> Got the following exception when running Hadoop on Power PC. The 
> implementation for computing checksums does not work when the data buffer 
> and checksum buffer are not backed by arrays.
> 13/09/16 04:06:57 ERROR security.UserGroupInformation: 
> PriviledgedActionException as:biadmin (auth:SIMPLE) 
> cause:org.apache.hadoop.ipc.RemoteException(java.io.IOException): 
> org.apache.hadoop.fs.ChecksumException: Checksum error



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-10846) DataChecksum#calculateChunkedSums not working for PPC when buffers not backed by array

2014-10-14 Thread Ayappan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10846?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ayappan updated HADOOP-10846:
-
Attachment: HADOOP-10846-v2.patch

> DataChecksum#calculateChunkedSums not working for PPC when buffers not backed 
> by array
> --
>
> Key: HADOOP-10846
> URL: https://issues.apache.org/jira/browse/HADOOP-10846
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: util
>Affects Versions: 2.2.0, 2.3.0, 2.4.0, 2.4.1
>Reporter: Jinghui Wang
>Assignee: Jinghui Wang
> Attachments: HADOOP-10846-v1.patch, HADOOP-10846-v2.patch, 
> HADOOP-10846.patch
>
>
> Got the following exception when running Hadoop on Power PC. The 
> implementation for computing checksums does not work when the data buffer 
> and checksum buffer are not backed by arrays.
> 13/09/16 04:06:57 ERROR security.UserGroupInformation: 
> PriviledgedActionException as:biadmin (auth:SIMPLE) 
> cause:org.apache.hadoop.ipc.RemoteException(java.io.IOException): 
> org.apache.hadoop.fs.ChecksumException: Checksum error



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (HADOOP-11200) HttpFS proxyuser, doAs param is case sensitive

2014-10-14 Thread Larry McCay (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11200?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Larry McCay resolved HADOOP-11200.
--
Resolution: Duplicate

Didn't realize that HADOOP-11083 addressed this for HttpFS. Closing as a 
duplicate.

> HttpFS proxyuser, doAs param is case sensitive
> --
>
> Key: HADOOP-11200
> URL: https://issues.apache.org/jira/browse/HADOOP-11200
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.6.0
>Reporter: Larry McCay
> Fix For: 2.6.0
>
>
> It appears that the doAs processing in HttpFS for proxyusers is case 
> sensitive.
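
Purely as an illustrative aside (this report was closed as a duplicate of HADOOP-11083 above): one common way to remove such case sensitivity is to match the doAs parameter name without regard to case before the proxy-user check. Whether the sensitivity here concerns the parameter name or its value is not spelled out, so the sketch below is an assumption for illustration only and is not the actual HttpFS code.
{code}
import java.util.Locale;
import java.util.Map;

public class DoAsParamSketch {
  // Canonical, lower-cased name of the query parameter we look for.
  private static final String DO_AS_PARAM = "doas";

  /** Return the doAs value from the query parameters, matching the key case-insensitively. */
  static String getDoAsUser(Map<String, String> queryParams) {
    for (Map.Entry<String, String> e : queryParams.entrySet()) {
      if (DO_AS_PARAM.equals(e.getKey().toLowerCase(Locale.ENGLISH))) {
        return e.getValue();
      }
    }
    return null;   // no doAs parameter present in any case variant
  }
}
{code}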



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11176) KMSClientProvider authentication fails when both currentUgi and loginUgi are a proxied user

2014-10-14 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11176?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14170820#comment-14170820
 ] 

Hudson commented on HADOOP-11176:
-

SUCCESS: Integrated in Hadoop-Yarn-trunk #711 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/711/])
HADOOP-11176. KMSClientProvider authentication fails when both currentUgi and 
loginUgi are a proxied user. Contributed by Arun Suresh. (atm: rev 
0e57aa3bf689374736939300d8f3525ec38bead7)
* hadoop-common-project/hadoop-common/CHANGES.txt
* 
hadoop-common-project/hadoop-kms/src/test/java/org/apache/hadoop/crypto/key/kms/server/TestKMS.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/key/kms/KMSClientProvider.java


> KMSClientProvider authentication fails when both currentUgi and loginUgi are 
> a proxied user
> ---
>
> Key: HADOOP-11176
> URL: https://issues.apache.org/jira/browse/HADOOP-11176
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Arun Suresh
>Assignee: Arun Suresh
>  Labels: encryption
> Fix For: 2.6.0
>
> Attachments: HADOOP-11176.1.patch, HADOOP-11176.2.patch, 
> HADOOP-11176.3.patch
>
>
> In a secure environment with Kerberos, when the KMSClientProvider instance 
> is created in the context of a proxied user, the initial SPNEGO handshake is 
> made with the currentUser (the proxied user) as the principal. This will 
> fail, since the proxied user is not logged in.
> The handshake must be done using the real user.
>  
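
The last sentence points at the direction of the fix: do the handshake as the real, logged-in user. A minimal, hedged sketch of that idea follows; the UserGroupInformation calls are existing Hadoop APIs, but the helper itself and where it would be invoked are assumptions, not the committed HADOOP-11176 change.
{code}
import java.io.IOException;

import org.apache.hadoop.security.UserGroupInformation;

public class SpnegoUserSketch {
  /** Pick the UGI that actually holds login credentials for the SPNEGO handshake. */
  static UserGroupInformation ugiForHandshake() throws IOException {
    UserGroupInformation current = UserGroupInformation.getCurrentUser();
    if (current.getAuthenticationMethod()
        == UserGroupInformation.AuthenticationMethod.PROXY) {
      // A proxied user is not logged in itself; fall back to its real, logged-in user.
      return current.getRealUser();
    }
    return current;
  }
}
{code}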



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11198) Fix typo in javadoc for FileSystem#listStatus()

2014-10-14 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11198?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14170819#comment-14170819
 ] 

Hudson commented on HADOOP-11198:
-

SUCCESS: Integrated in Hadoop-Yarn-trunk #711 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/711/])
HADOOP-11198. Fix typo in javadoc for FileSystem#listStatus(). Contributed by 
Li Lu. (wheat9: rev 5faaba0bd09db4ddcf5c1824ad7abb18b1489bbb)
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/ChecksumFileSystem.java
* hadoop-common-project/hadoop-common/CHANGES.txt


> Fix typo in javadoc for FileSystem#listStatus()
> ---
>
> Key: HADOOP-11198
> URL: https://issues.apache.org/jira/browse/HADOOP-11198
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Ted Yu
>Assignee: Li Lu
>Priority: Minor
>  Labels: newbie
> Fix For: 2.6.0
>
> Attachments: HADOOP-11198-101314.patch
>
>
> {code}
>* @return the statuses of the files/directories in the given patch
> {code}
> 'patch' should be path



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-10846) DataChecksum#calculateChunkedSums not working for PPC when buffers not backed by array

2014-10-14 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10846?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14170729#comment-14170729
 ] 

Hadoop QA commented on HADOOP-10846:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12674735/HADOOP-10846-v1.patch
  against trunk revision 5faaba0.

{color:red}-1 patch{color}.  The patch command could not apply the patch.

Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/4923//console

This message is automatically generated.

> DataChecksum#calculateChunkedSums not working for PPC when buffers not backed 
> by array
> --
>
> Key: HADOOP-10846
> URL: https://issues.apache.org/jira/browse/HADOOP-10846
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: util
>Affects Versions: 2.2.0, 2.3.0, 2.4.0, 2.4.1
>Reporter: Jinghui Wang
>Assignee: Jinghui Wang
> Attachments: HADOOP-10846-v1.patch, HADOOP-10846.patch
>
>
> Got the following exception when running Hadoop on Power PC. The 
> implementation for computing checksums does not work when the data buffer 
> and checksum buffer are not backed by arrays.
> 13/09/16 04:06:57 ERROR security.UserGroupInformation: 
> PriviledgedActionException as:biadmin (auth:SIMPLE) 
> cause:org.apache.hadoop.ipc.RemoteException(java.io.IOException): 
> org.apache.hadoop.fs.ChecksumException: Checksum error



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11201) Hadoop Archives should support globs resolving to files

2014-10-14 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11201?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14170730#comment-14170730
 ] 

Hadoop QA commented on HADOOP-11201:


{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12674733/HADOOP-11201.v01.patch
  against trunk revision 5faaba0.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-tools/hadoop-archives.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/4922//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/4922//console

This message is automatically generated.

> Hadoop Archives should support globs resolving to files
> ---
>
> Key: HADOOP-11201
> URL: https://issues.apache.org/jira/browse/HADOOP-11201
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: tools
>Affects Versions: 2.0.5-alpha
>Reporter: Gera Shegalov
>Assignee: Gera Shegalov
> Attachments: HADOOP-11201.v01.patch
>
>
> Consider the following scenario:
> {code}
> $ hadoop fs -ls /tmp/harsrc/dir2/dir3
> Found 5 items
> -rw-r--r--   1 blah blah  0 2014-10-13 20:59 
> /tmp/harsrc/dir2/dir3/file31
> -rw-r--r--   1 blah blah  0 2014-10-14 01:51 
> /tmp/harsrc/dir2/dir3/file32
> -rw-r--r--   1 blah blah  0 2014-10-14 01:51 
> /tmp/harsrc/dir2/dir3/file33
> -rw-r--r--   1 blah blah  0 2014-10-14 01:51 
> /tmp/harsrc/dir2/dir3/file34
> -rw-r--r--   1 blah blah  0 2014-10-14 01:51 
> /tmp/harsrc/dir2/dir3/file35
> {code}
> Archive 'dir3/file3*':
> {code}
> $ hadoop archive -Dmapreduce.framework.name=local -archiveName fileStar.har 
> -p /tmp/harsrc 'dir2/dir3/file*' /tmp/hardst_local
> $ hadoop fs -ls -R har:/tmp/hardst_local/fileStar.har
> drwxr-xr-x   - blah blah  0 2014-10-13 22:32 
> har:///tmp/hardst_local/fileStar.har/dir2
> {code}
> Archiving dir3 (a directory), which is equivalent to the above, works.
> {code}
> $ hadoop archive -Dmapreduce.framework.name=local -archiveName dir3.har -p 
> /tmp/harsrc 'dir2/dir3' /tmp/hardst_local
> $ hadoop fs -ls -R har:/tmp/hardst_local/dir3.har
> 14/10/14 02:06:33 WARN util.NativeCodeLoader: Unable to load native-hadoop 
> library for your platform... using builtin-java classes where applicable
> drwxr-xr-x   - blah blah  0 2014-10-13 22:32 
> har:///tmp/hardst_local/dir3.har/dir2
> drwxr-xr-x   - blah blah  0 2014-10-14 01:51 
> har:///tmp/hardst_local/dir3.har/dir2/dir3
> -rw-r--r--   1 blah blah  0 2014-10-13 20:59 
> har:///tmp/hardst_local/dir3.har/dir2/dir3/file31
> -rw-r--r--   1 blah blah  0 2014-10-14 01:51 
> har:///tmp/hardst_local/dir3.har/dir2/dir3/file32
> -rw-r--r--   1 blah blah  0 2014-10-14 01:51 
> har:///tmp/hardst_local/dir3.har/dir2/dir3/file33
> -rw-r--r--   1 blah blah  0 2014-10-14 01:51 
> har:///tmp/hardst_local/dir3.har/dir2/dir3/file34
> -rw-r--r--   1 blah blah  0 2014-10-14 01:51 
> har:///tmp/hardst_local/dir3.har/dir2/dir3/file35
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-10846) DataChecksum#calculateChunkedSums not working for PPC when buffers not backed by array

2014-10-14 Thread Ayappan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10846?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ayappan updated HADOOP-10846:
-
Attachment: HADOOP-10846-v1.patch

> DataChecksum#calculateChunkedSums not working for PPC when buffers not backed 
> by array
> --
>
> Key: HADOOP-10846
> URL: https://issues.apache.org/jira/browse/HADOOP-10846
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: util
>Affects Versions: 2.2.0, 2.3.0, 2.4.0, 2.4.1
>Reporter: Jinghui Wang
>Assignee: Jinghui Wang
> Attachments: HADOOP-10846-v1.patch, HADOOP-10846.patch
>
>
> Got the following exception when running Hadoop on Power PC. The 
> implementation for computing checksums does not work when the data buffer 
> and checksum buffer are not backed by arrays.
> 13/09/16 04:06:57 ERROR security.UserGroupInformation: 
> PriviledgedActionException as:biadmin (auth:SIMPLE) 
> cause:org.apache.hadoop.ipc.RemoteException(java.io.IOException): 
> org.apache.hadoop.fs.ChecksumException: Checksum error



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-10846) DataChecksum#calculateChunkedSums not working for PPC when buffers not backed by array

2014-10-14 Thread Ayappan (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10846?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14170724#comment-14170724
 ] 

Ayappan commented on HADOOP-10846:
--

A lot of changes have gone into Hadoop since the patch was attached, so the old 
patch no longer appears to be correct.
I reworked the patch and attached a new one named HADOOP-10846-v1.patch.

> DataChecksum#calculateChunkedSums not working for PPC when buffers not backed 
> by array
> --
>
> Key: HADOOP-10846
> URL: https://issues.apache.org/jira/browse/HADOOP-10846
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: util
>Affects Versions: 2.2.0, 2.3.0, 2.4.0, 2.4.1
>Reporter: Jinghui Wang
>Assignee: Jinghui Wang
> Attachments: HADOOP-10846-v1.patch, HADOOP-10846.patch
>
>
> Got the following exception when running Hadoop on Power PC. The 
> implementation for computing checksums does not work when the data buffer 
> and checksum buffer are not backed by arrays.
> 13/09/16 04:06:57 ERROR security.UserGroupInformation: 
> PriviledgedActionException as:biadmin (auth:SIMPLE) 
> cause:org.apache.hadoop.ipc.RemoteException(java.io.IOException): 
> org.apache.hadoop.fs.ChecksumException: Checksum error



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11201) Hadoop Archives should support globs resolving to files

2014-10-14 Thread Gera Shegalov (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11201?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gera Shegalov updated HADOOP-11201:
---
Attachment: HADOOP-11201.v01.patch

v01 with unit tests demonstrating the problem and a proposed fix.
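
For context, a hedged sketch of the kind of glob expansion that keeps plain files (which the listings below suggest are currently dropped): the FileSystem, FileStatus, and Path calls are real Hadoop APIs, but the helper is illustrative only and is not the v01 patch.
{code}
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class GlobToArchiveSketch {
  /** Expand a source glob into concrete paths to archive, keeping files it matches directly. */
  static List<Path> expand(FileSystem fs, Path glob) throws IOException {
    List<Path> result = new ArrayList<Path>();
    FileStatus[] matches = fs.globStatus(glob);   // returns null when the glob matches nothing
    if (matches == null) {
      return result;
    }
    for (FileStatus stat : matches) {
      if (stat.isDirectory()) {
        for (FileStatus child : fs.listStatus(stat.getPath())) {
          result.add(child.getPath());            // a real implementation would recurse here
        }
      } else {
        result.add(stat.getPath());               // keep files matched directly by the glob
      }
    }
    return result;
  }
}
{code}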

> Hadoop Archives should support globs resolving to files
> ---
>
> Key: HADOOP-11201
> URL: https://issues.apache.org/jira/browse/HADOOP-11201
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: tools
>Affects Versions: 2.0.5-alpha
>Reporter: Gera Shegalov
>Assignee: Gera Shegalov
> Attachments: HADOOP-11201.v01.patch
>
>
> Consider the following scenario:
> {code}
> $ hadoop fs -ls /tmp/harsrc/dir2/dir3
> Found 5 items
> -rw-r--r--   1 blah blah  0 2014-10-13 20:59 
> /tmp/harsrc/dir2/dir3/file31
> -rw-r--r--   1 blah blah  0 2014-10-14 01:51 
> /tmp/harsrc/dir2/dir3/file32
> -rw-r--r--   1 blah blah  0 2014-10-14 01:51 
> /tmp/harsrc/dir2/dir3/file33
> -rw-r--r--   1 blah blah  0 2014-10-14 01:51 
> /tmp/harsrc/dir2/dir3/file34
> -rw-r--r--   1 blah blah  0 2014-10-14 01:51 
> /tmp/harsrc/dir2/dir3/file35
> {code}
> Archive 'dir3/file3*':
> {code}
> $ hadoop archive -Dmapreduce.framework.name=local -archiveName fileStar.har 
> -p /tmp/harsrc 'dir2/dir3/file*' /tmp/hardst_local
> $ hadoop fs -ls -R har:/tmp/hardst_local/fileStar.har
> drwxr-xr-x   - blah blah  0 2014-10-13 22:32 
> har:///tmp/hardst_local/fileStar.har/dir2
> {code}
> Archiving dir3 (a directory), which is equivalent to the above, works.
> {code}
> $ hadoop archive -Dmapreduce.framework.name=local -archiveName dir3.har -p 
> /tmp/harsrc 'dir2/dir3' /tmp/hardst_local
> $ hadoop fs -ls -R har:/tmp/hardst_local/dir3.har
> 14/10/14 02:06:33 WARN util.NativeCodeLoader: Unable to load native-hadoop 
> library for your platform... using builtin-java classes where applicable
> drwxr-xr-x   - blah blah  0 2014-10-13 22:32 
> har:///tmp/hardst_local/dir3.har/dir2
> drwxr-xr-x   - blah blah  0 2014-10-14 01:51 
> har:///tmp/hardst_local/dir3.har/dir2/dir3
> -rw-r--r--   1 blah blah  0 2014-10-13 20:59 
> har:///tmp/hardst_local/dir3.har/dir2/dir3/file31
> -rw-r--r--   1 blah blah  0 2014-10-14 01:51 
> har:///tmp/hardst_local/dir3.har/dir2/dir3/file32
> -rw-r--r--   1 blah blah  0 2014-10-14 01:51 
> har:///tmp/hardst_local/dir3.har/dir2/dir3/file33
> -rw-r--r--   1 blah blah  0 2014-10-14 01:51 
> har:///tmp/hardst_local/dir3.har/dir2/dir3/file34
> -rw-r--r--   1 blah blah  0 2014-10-14 01:51 
> har:///tmp/hardst_local/dir3.har/dir2/dir3/file35
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11201) Hadoop Archives should support globs resolving to files

2014-10-14 Thread Gera Shegalov (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11201?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gera Shegalov updated HADOOP-11201:
---
Status: Patch Available  (was: Open)

> Hadoop Archives should support globs resolving to files
> ---
>
> Key: HADOOP-11201
> URL: https://issues.apache.org/jira/browse/HADOOP-11201
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: tools
>Affects Versions: 2.0.5-alpha
>Reporter: Gera Shegalov
>Assignee: Gera Shegalov
> Attachments: HADOOP-11201.v01.patch
>
>
> Consider the following scenario:
> {code}
> $ hadoop fs -ls /tmp/harsrc/dir2/dir3
> Found 5 items
> -rw-r--r--   1 blah blah  0 2014-10-13 20:59 
> /tmp/harsrc/dir2/dir3/file31
> -rw-r--r--   1 blah blah  0 2014-10-14 01:51 
> /tmp/harsrc/dir2/dir3/file32
> -rw-r--r--   1 blah blah  0 2014-10-14 01:51 
> /tmp/harsrc/dir2/dir3/file33
> -rw-r--r--   1 blah blah  0 2014-10-14 01:51 
> /tmp/harsrc/dir2/dir3/file34
> -rw-r--r--   1 blah blah  0 2014-10-14 01:51 
> /tmp/harsrc/dir2/dir3/file35
> {code}
> Archive 'dir3/file3*':
> {code}
> $ hadoop archive -Dmapreduce.framework.name=local -archiveName fileStar.har 
> -p /tmp/harsrc 'dir2/dir3/file*' /tmp/hardst_local
> $ hadoop fs -ls -R har:/tmp/hardst_local/fileStar.har
> drwxr-xr-x   - blah blah  0 2014-10-13 22:32 
> har:///tmp/hardst_local/fileStar.har/dir2
> {code}
> Archiving dir3 (a directory), which is equivalent to the above, works.
> {code}
> $ hadoop archive -Dmapreduce.framework.name=local -archiveName dir3.har -p 
> /tmp/harsrc 'dir2/dir3' /tmp/hardst_local
> $ hadoop fs -ls -R har:/tmp/hardst_local/dir3.har
> 14/10/14 02:06:33 WARN util.NativeCodeLoader: Unable to load native-hadoop 
> library for your platform... using builtin-java classes where applicable
> drwxr-xr-x   - blah blah  0 2014-10-13 22:32 
> har:///tmp/hardst_local/dir3.har/dir2
> drwxr-xr-x   - blah blah  0 2014-10-14 01:51 
> har:///tmp/hardst_local/dir3.har/dir2/dir3
> -rw-r--r--   1 blah blah  0 2014-10-13 20:59 
> har:///tmp/hardst_local/dir3.har/dir2/dir3/file31
> -rw-r--r--   1 blah blah  0 2014-10-14 01:51 
> har:///tmp/hardst_local/dir3.har/dir2/dir3/file32
> -rw-r--r--   1 blah blah  0 2014-10-14 01:51 
> har:///tmp/hardst_local/dir3.har/dir2/dir3/file33
> -rw-r--r--   1 blah blah  0 2014-10-14 01:51 
> har:///tmp/hardst_local/dir3.har/dir2/dir3/file34
> -rw-r--r--   1 blah blah  0 2014-10-14 01:51 
> har:///tmp/hardst_local/dir3.har/dir2/dir3/file35
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11201) Hadoop Archives should support globs resolving to files

2014-10-14 Thread Gera Shegalov (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11201?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gera Shegalov updated HADOOP-11201:
---
Description: 
Consider the following scenario:
{code}
$ hadoop fs -ls /tmp/harsrc/dir2/dir3
Found 5 items
-rw-r--r--   1 blah blah  0 2014-10-13 20:59 
/tmp/harsrc/dir2/dir3/file31
-rw-r--r--   1 blah blah  0 2014-10-14 01:51 
/tmp/harsrc/dir2/dir3/file32
-rw-r--r--   1 blah blah  0 2014-10-14 01:51 
/tmp/harsrc/dir2/dir3/file33
-rw-r--r--   1 blah blah  0 2014-10-14 01:51 
/tmp/harsrc/dir2/dir3/file34
-rw-r--r--   1 blah blah  0 2014-10-14 01:51 
/tmp/harsrc/dir2/dir3/file35
{code}

Archive 'dir3/file3*':
{code}
$ hadoop archive -Dmapreduce.framework.name=local -archiveName fileStar.har -p 
/tmp/harsrc 'dir2/dir3/file*' /tmp/hardst_local
$ hadoop fs -ls -R har:/tmp/hardst_local/fileStar.har
drwxr-xr-x   - blah blah  0 2014-10-13 22:32 
har:///tmp/hardst_local/fileStar.har/dir2
{code}

Archiving dir3 (a directory), which is equivalent to the above, works.
{code}
$ hadoop archive -Dmapreduce.framework.name=local -archiveName dir3.har -p 
/tmp/harsrc 'dir2/dir3' /tmp/hardst_local
$ hadoop fs -ls -R har:/tmp/hardst_local/dir3.har
14/10/14 02:06:33 WARN util.NativeCodeLoader: Unable to load native-hadoop 
library for your platform... using builtin-java classes where applicable
drwxr-xr-x   - blah blah  0 2014-10-13 22:32 
har:///tmp/hardst_local/dir3.har/dir2
drwxr-xr-x   - blah blah  0 2014-10-14 01:51 
har:///tmp/hardst_local/dir3.har/dir2/dir3
-rw-r--r--   1 blah blah  0 2014-10-13 20:59 
har:///tmp/hardst_local/dir3.har/dir2/dir3/file31
-rw-r--r--   1 blah blah  0 2014-10-14 01:51 
har:///tmp/hardst_local/dir3.har/dir2/dir3/file32
-rw-r--r--   1 blah blah  0 2014-10-14 01:51 
har:///tmp/hardst_local/dir3.har/dir2/dir3/file33
-rw-r--r--   1 blah blah  0 2014-10-14 01:51 
har:///tmp/hardst_local/dir3.har/dir2/dir3/file34
-rw-r--r--   1 blah blah  0 2014-10-14 01:51 
har:///tmp/hardst_local/dir3.har/dir2/dir3/file35
{code}




  was:
Consider the following scenario:
{code}
$ hadoop fs -ls /tmp/harsrc/dir2/dir3
Found 5 items
-rw-r--r--   1 blah blah  0 2014-10-13 20:59 
/tmp/harsrc/dir2/dir3/file31
-rw-r--r--   1 blah blah  0 2014-10-14 01:51 
/tmp/harsrc/dir2/dir3/file32
-rw-r--r--   1 blah blah  0 2014-10-14 01:51 
/tmp/harsrc/dir2/dir3/file33
-rw-r--r--   1 blah blah  0 2014-10-14 01:51 
/tmp/harsrc/dir2/dir3/file34
-rw-r--r--   1 blah blah  0 2014-10-14 01:51 
/tmp/harsrc/dir2/dir3/file35
{code}

Archive 'dir3/file3*':



> Hadoop Archives should support globs resolving to files
> ---
>
> Key: HADOOP-11201
> URL: https://issues.apache.org/jira/browse/HADOOP-11201
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: tools
>Affects Versions: 2.0.5-alpha
>Reporter: Gera Shegalov
>Assignee: Gera Shegalov
>
> Consider the following scenario:
> {code}
> $ hadoop fs -ls /tmp/harsrc/dir2/dir3
> Found 5 items
> -rw-r--r--   1 blah blah  0 2014-10-13 20:59 
> /tmp/harsrc/dir2/dir3/file31
> -rw-r--r--   1 blah blah  0 2014-10-14 01:51 
> /tmp/harsrc/dir2/dir3/file32
> -rw-r--r--   1 blah blah  0 2014-10-14 01:51 
> /tmp/harsrc/dir2/dir3/file33
> -rw-r--r--   1 blah blah  0 2014-10-14 01:51 
> /tmp/harsrc/dir2/dir3/file34
> -rw-r--r--   1 blah blah  0 2014-10-14 01:51 
> /tmp/harsrc/dir2/dir3/file35
> {code}
> Archive 'dir3/file3*':
> {code}
> $ hadoop archive -Dmapreduce.framework.name=local -archiveName fileStar.har 
> -p /tmp/harsrc 'dir2/dir3/file*' /tmp/hardst_local
> $ hadoop fs -ls -R har:/tmp/hardst_local/fileStar.har
> drwxr-xr-x   - blah blah  0 2014-10-13 22:32 
> har:///tmp/hardst_local/fileStar.har/dir2
> {code}
> Archiving dir3 (a directory), which is equivalent to the above, works.
> {code}
> $ hadoop archive -Dmapreduce.framework.name=local -archiveName dir3.har -p 
> /tmp/harsrc 'dir2/dir3' /tmp/hardst_local
> $ hadoop fs -ls -R har:/tmp/hardst_local/dir3.har
> 14/10/14 02:06:33 WARN util.NativeCodeLoader: Unable to load native-hadoop 
> library for your platform... using builtin-java classes where applicable
> drwxr-xr-x   - blah blah  0 2014-10-13 22:32 
> har:///tmp/hardst_local/dir3.har/dir2
> drwxr-xr-x   - blah blah  0 2014-10-14 01:51 
> har:///tmp/hardst_local/dir3.har/dir2/dir3
> -rw-r--r--   1 blah blah  0 2014-10-13 20:59 
> har:///tmp/hardst_local/dir3.har/dir2/dir3/file31
> -rw-r--r--   1 blah blah  0 2014-10-14 01:51 
> har:///tmp/hardst_local/dir3.har/dir2/dir3/file32
> -rw-r--r--   1 blah blah  0 2014-10-14 01:51 
> har:///tmp/hardst_local/dir3.har/dir2/dir3/file33
> -rw-r--r--   1 blah blah  0 2014-10-14 01:51 

[jira] [Updated] (HADOOP-11201) Hadoop Archives should support globs resolving to files

2014-10-14 Thread Gera Shegalov (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11201?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gera Shegalov updated HADOOP-11201:
---
Description: 
Consider the following scenario:
{code}
$ hadoop fs -ls /tmp/harsrc/dir2/dir3
Found 5 items
-rw-r--r--   1 blah blah  0 2014-10-13 20:59 
/tmp/harsrc/dir2/dir3/file31
-rw-r--r--   1 blah blah  0 2014-10-14 01:51 
/tmp/harsrc/dir2/dir3/file32
-rw-r--r--   1 blah blah  0 2014-10-14 01:51 
/tmp/harsrc/dir2/dir3/file33
-rw-r--r--   1 blah blah  0 2014-10-14 01:51 
/tmp/harsrc/dir2/dir3/file34
-rw-r--r--   1 blah blah  0 2014-10-14 01:51 
/tmp/harsrc/dir2/dir3/file35
{code}

Archive 'dir3/file3*':


> Hadoop Archives should support globs resolving to files
> ---
>
> Key: HADOOP-11201
> URL: https://issues.apache.org/jira/browse/HADOOP-11201
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: tools
>Affects Versions: 2.0.5-alpha
>Reporter: Gera Shegalov
>Assignee: Gera Shegalov
>
> Consider the following scenario:
> {code}
> $ hadoop fs -ls /tmp/harsrc/dir2/dir3
> Found 5 items
> -rw-r--r--   1 blah blah  0 2014-10-13 20:59 
> /tmp/harsrc/dir2/dir3/file31
> -rw-r--r--   1 blah blah  0 2014-10-14 01:51 
> /tmp/harsrc/dir2/dir3/file32
> -rw-r--r--   1 blah blah  0 2014-10-14 01:51 
> /tmp/harsrc/dir2/dir3/file33
> -rw-r--r--   1 blah blah  0 2014-10-14 01:51 
> /tmp/harsrc/dir2/dir3/file34
> -rw-r--r--   1 blah blah  0 2014-10-14 01:51 
> /tmp/harsrc/dir2/dir3/file35
> {code}
> Archive 'dir3/file3*':



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HADOOP-11201) Hadoop Archives should support globs resolving to files

2014-10-14 Thread Gera Shegalov (JIRA)
Gera Shegalov created HADOOP-11201:
--

 Summary: Hadoop Archives should support globs resolving to files
 Key: HADOOP-11201
 URL: https://issues.apache.org/jira/browse/HADOOP-11201
 Project: Hadoop Common
  Issue Type: Bug
  Components: tools
Affects Versions: 2.0.5-alpha
Reporter: Gera Shegalov
Assignee: Gera Shegalov






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)