[jira] [Created] (HADOOP-8612) Backport HADOOP-8599 to branch-1 (Non empty response when read beyond eof)

2012-07-23 Thread Matt Foley (JIRA)
Matt Foley created HADOOP-8612:
--

 Summary: Backport HADOOP-8599 to branch-1 (Non empty response when 
read beyond eof)
 Key: HADOOP-8612
 URL: https://issues.apache.org/jira/browse/HADOOP-8612
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs
Affects Versions: 1.0.0
Reporter: Matt Foley


When FileSystem.getFileBlockLocations(file,start,len) is called with start 
argument equal to the file size, the response is not empty. See HADOOP-8599 for 
details and tiny patch.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HADOOP-8599) Non empty response from FileSystem.getFileBlockLocations when asking for data beyond the end of file

2012-07-23 Thread Matt Foley (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8599?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt Foley updated HADOOP-8599:
---

Target Version/s: 0.23.3, 3.0.0, 2.2.0-alpha  (was: 1.1.0, 0.23.2, 
2.0.1-alpha)

Fixed Target Versions to be consistent with Fix Versions.
Removed 1.1.0 target and opened HADOOP-8612 for backport.

 Non empty response from FileSystem.getFileBlockLocations when asking for data 
 beyond the end of file 
 -

 Key: HADOOP-8599
 URL: https://issues.apache.org/jira/browse/HADOOP-8599
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs
Affects Versions: 1.0.3, 0.23.1, 2.0.0-alpha
Reporter: Andrey Klochkov
Assignee: Andrey Klochkov
 Fix For: 0.23.3, 3.0.0, 2.2.0-alpha

 Attachments: HADOOP-8859-branch-0.23.patch


 When FileSystem.getFileBlockLocations(file,start,len) is called with start 
 argument equal to the file size, the response is not empty. There is a test 
 TestGetFileBlockLocations.testGetFileBlockLocations2 which uses randomly 
 generated start and len arguments when calling 
 FileSystem.getFileBlockLocations, and the test fails randomly (when the 
 generated start value equals the file size).
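The intended boundary behavior can be sketched with a toy model (hypothetical class and signature for illustration only; the actual HADOOP-8599 patch touches the real FileSystem code): a query whose start offset is at or beyond the file length yields an empty result.

```java
// Simplified model of the contract: block locations as {offset, length}
// pairs covering [start, start+len), empty when start is at or past EOF.
public class BlockLocationsModel {
  static long[][] getFileBlockLocations(long fileSize, long blockSize,
                                        long start, long len) {
    if (start >= fileSize || len <= 0) {
      return new long[0][];                 // at or past EOF: empty response
    }
    long end = Math.min(fileSize, start + len);
    int first = (int) (start / blockSize);
    int last = (int) ((end - 1) / blockSize);
    long[][] blocks = new long[last - first + 1][];
    for (int i = first; i <= last; i++) {
      long off = (long) i * blockSize;
      blocks[i - first] = new long[]{off, Math.min(blockSize, fileSize - off)};
    }
    return blocks;
  }
}
```

With a 10-byte file and 4-byte blocks, a query starting at offset 10 returns an empty array, matching the behavior the randomized test expects.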





[jira] [Updated] (HADOOP-7730) Allow TestCLI to be run against a cluster

2012-07-23 Thread Matt Foley (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-7730?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt Foley updated HADOOP-7730:
---

Target Version/s: 1.2.0, 0.22.1  (was: 1.1.0, 0.22.0)

Please propose an applicable patch and get it committed to branch-1.
Thank you.

 Allow TestCLI to be run against a cluster
 -

 Key: HADOOP-7730
 URL: https://issues.apache.org/jira/browse/HADOOP-7730
 Project: Hadoop Common
  Issue Type: Test
  Components: test
Affects Versions: 0.20.205.0, 0.22.0
Reporter: Konstantin Boudnik
Assignee: Konstantin Boudnik
 Fix For: 0.22.1

 Attachments: HADOOP-7730.patch, HADOOP-7730.trunk.patch, 
 HADOOP-7730.trunk.patch


 Use the same CLI test to test cluster bits (see HDFS-1762 for more info)





[jira] [Commented] (HADOOP-7967) Need generalized multi-token filesystem support

2012-07-23 Thread Daryn Sharp (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-7967?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13420808#comment-13420808
 ] 

Daryn Sharp commented on HADOOP-7967:
-

Would you please explain further?  I'm not sure we should tie whether a fs has a 
token to whether it has children.  That assumption may be true for viewfs, but 
isn't a requirement.

bq.  FileSystems which embed others such as ViewFS will only need to implement 
getChildFileSystems()
I think the proposals run counter to that goal.  The methods 
{{getCanonicalService}} and {{getDelegation}} are the simple atoms used by the 
implementation of {{addDelegationTokens}} that an fs either does or does not 
implement.

bq. the default impl of getCanonicalName will return null if 
getChildFileSystems() returns null
What would be the behavior when {{getChildFileSystems}} _does not_ return null? 
 Note that viewfs has to override it to always return null instead of the uri's 
authority (mount table in this case).  If the default is changed to always 
return null, which I can do, it becomes another incompatible change...

bq. the default impl of getDelegationToken will return null if 
getChildFileSystems() returns null
What would be the behavior when {{getChildFileSystems}} _does not_ return null? 
 The default is/was return null and the fs overrides only if it has a token.

The logic is currently: see if the fs itself has a token, yes, get it.  
Irrespective, see if the fs has children, yes, repeat the process for each 
child.

{code}
collectDelegationTokens() {
  if (getCanonicalService() != null) {       // get a token for myself
    getDelegationToken(...);
  }
  foreach childFs : getChildFileSystems() {  // ask each of my children for a token
    childFs.collectDelegationTokens(...);
  }
}
{code}

It has to be recursive to allow for arbitrarily stacked filesystems.  Only 
allowing the top-level fs to have multiple children will not allow mergefs to 
work.
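The recursion above can be modeled with a self-contained toy class (hypothetical names, not the Hadoop FileSystem API): a node contributes a token only if it has a canonical service, and it always recurses into its children, so arbitrarily stacked filesystems such as mergefs over viewfs over hdfs are covered.

```java
import java.util.ArrayList;
import java.util.List;

// Toy model of the recursive token-collection scheme described above.
class FsNode {
  final String canonicalService;          // null => no token of its own
  final List<FsNode> children = new ArrayList<>();

  FsNode(String canonicalService) { this.canonicalService = canonicalService; }

  List<String> collectDelegationTokens() {
    List<String> tokens = new ArrayList<>();
    if (canonicalService != null) {       // get a token for myself
      tokens.add("token:" + canonicalService);
    }
    for (FsNode child : children) {       // repeat the process for each child
      tokens.addAll(child.collectDelegationTokens());
    }
    return tokens;
  }
}
```

A viewfs-like node with a null service and two hdfs children yields exactly the two child tokens; wrapping that node inside another parent adds its tokens too, with no special casing for nesting depth.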

 Need generalized multi-token filesystem support
 ---

 Key: HADOOP-7967
 URL: https://issues.apache.org/jira/browse/HADOOP-7967
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs, security
Affects Versions: 0.23.1, 0.24.0
Reporter: Daryn Sharp
Assignee: Daryn Sharp
 Attachments: HADOOP-7967-2.patch, HADOOP-7967-3.patch, 
 HADOOP-7967-4.patch, HADOOP-7967-compat.patch, HADOOP-7967.newapi.patch, 
 HADOOP-7967.patch


 Multi-token filesystem support and its interactions with the MR 
 {{TokenCache}} is problematic.  The {{TokenCache}} tries to assume it has the 
 knowledge to know if the tokens for a filesystem are available, which it 
 can't possibly know for multi-token filesystems.  Filtered filesystems are 
 also problematic, such as har on viewfs.  When mergeFs is implemented, it too 
 will become a problem with the current implementation.  Currently 
 {{FileSystem}} will leak tokens even when some tokens are already present.
 The decision for token acquisition, and which tokens, should be pushed all 
 the way down into the {{FileSystem}} level.  The {{TokenCache}} should be 
 ignorant and simply request tokens from each {{FileSystem}}.





[jira] [Commented] (HADOOP-7761) Improve performance of raw comparisons

2012-07-23 Thread Scott Carey (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-7761?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13420818#comment-13420818
 ] 

Scott Carey commented on HADOOP-7761:
-

{quote}this is much slower{quote}
It appears that I exaggerated.  This is somewhat slower for small byte arrays 
and somewhat faster for larger ones on a 64-bit JVM in AVRO-939.  There is more 
work to do to understand this and possibly improve all of it.

 Improve performance of raw comparisons
 --

 Key: HADOOP-7761
 URL: https://issues.apache.org/jira/browse/HADOOP-7761
 Project: Hadoop Common
  Issue Type: Improvement
  Components: io, performance, util
Affects Versions: 0.23.0
Reporter: Todd Lipcon
Assignee: Todd Lipcon
 Fix For: 0.23.1

 Attachments: hadoop-7761.txt, hadoop-7761.txt, hadoop-7761.txt, 
 hadoop-7761.txt


 Guava has a nice implementation of lexicographical byte-array comparison that 
 uses sun.misc.Unsafe to compare unsigned byte arrays long-at-a-time. Their 
 benchmarks show it as being 2x more CPU-efficient than the equivalent 
 pure-Java implementation. We can easily integrate this into 
 WritableComparator.compareBytes to improve CPU performance in the shuffle.
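For context, the pure-Java baseline that the Unsafe long-at-a-time comparator is benchmarked against looks roughly like the following (a generic sketch, not Hadoop's actual WritableComparator code):

```java
// Lexicographic comparison of byte arrays treated as unsigned values,
// one byte per iteration: the loop the Unsafe approach aims to beat.
public class LexCompare {
  public static int compareBytes(byte[] a, byte[] b) {
    int n = Math.min(a.length, b.length);
    for (int i = 0; i < n; i++) {
      int x = a[i] & 0xff;                // mask to compare as unsigned
      int y = b[i] & 0xff;
      if (x != y) {
        return x - y;
      }
    }
    return a.length - b.length;           // prefix ties: shorter sorts first
  }
}
```

The Unsafe variant reads eight bytes at a time as a long and compares those, which is where the roughly 2x CPU saving comes from on 64-bit JVMs.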





[jira] [Created] (HADOOP-8613) AbstractDelegationTokenIdentifier#getUser() should set token auth type

2012-07-23 Thread Daryn Sharp (JIRA)
Daryn Sharp created HADOOP-8613:
---

 Summary: AbstractDelegationTokenIdentifier#getUser() should set 
token auth type
 Key: HADOOP-8613
 URL: https://issues.apache.org/jira/browse/HADOOP-8613
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.0.0-alpha, 0.23.0, 1.0.0, 3.0.0
Reporter: Daryn Sharp
Assignee: Daryn Sharp
Priority: Critical


{{AbstractDelegationTokenIdentifier#getUser()}} returns the UGI associated with 
a token.  The UGI's auth type will either be SIMPLE for non-proxy tokens, or 
PROXY (effective user) and SIMPLE (real user).  Instead of SIMPLE, it needs to 
be TOKEN.





[jira] [Updated] (HADOOP-8551) fs -mkdir creates parent directories without the -p option

2012-07-23 Thread John George (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8551?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

John George updated HADOOP-8551:


Attachment: HADOOP-8551.patch

Attaching a new patch with the fix. Also added a new test to verify mkdir a/b/ 
works when dir 'a' exists.

 fs -mkdir creates parent directories without the -p option
 --

 Key: HADOOP-8551
 URL: https://issues.apache.org/jira/browse/HADOOP-8551
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs
Affects Versions: 0.23.3, 2.1.0-alpha, 3.0.0
Reporter: Robert Joseph Evans
Assignee: Daryn Sharp
 Fix For: 0.23.3, 3.0.0, 2.2.0-alpha

 Attachments: HADOOP-8551.patch, HADOOP-8551.patch, HADOOP-8551.patch


 hadoop fs -mkdir foo/bar will work even if bar is not present.  It should 
 only work if -p is given and foo is not present.
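The intended semantics mirror POSIX mkdir, which can be demonstrated locally (shown here with plain mkdir as an analogy; the actual fix is in the hadoop fs shell):

```shell
cd "$(mktemp -d)"
# Without -p, creating foo/bar must fail while parent foo is missing.
mkdir foo/bar 2>/dev/null && echo "created" || echo "refused: parent missing"
# With -p, missing parents are created as needed.
mkdir -p foo/bar && echo "ok with -p"
```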





[jira] [Commented] (HADOOP-8435) Propdel all svn:mergeinfo

2012-07-23 Thread Eli Collins (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8435?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13420849#comment-13420849
 ] 

Eli Collins commented on HADOOP-8435:
-

Won't this nuke the svn md for the 1623 and 3042 merges?

hadoop-trunk1 $ svn propget svn:mergeinfo .
/hadoop/common/branches/HDFS-1623:1152502-1296519
/hadoop/common/branches/HDFS-3042:1306184-1342109


 Propdel all svn:mergeinfo
 -

 Key: HADOOP-8435
 URL: https://issues.apache.org/jira/browse/HADOOP-8435
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.0.0-alpha, 3.0.0
Reporter: Harsh J
Assignee: Harsh J

 TortoiseSVN/some versions of svn have added several mergeinfo props to 
 Hadoop's svn files/dirs (list below).
 We should propdel that unneeded property, and fix it up. This otherwise 
 causes pain to those who backport with a simple root-dir-down command (svn 
 merge -c num url/path).
 We should also make sure to update the HowToCommit page on advising to avoid 
 mergeinfo additions to prevent this from reoccurring.
 Files affected are, from my propdel revert output earlier today:
 {code}
 Reverted '.'
 Reverted 'hadoop-hdfs-project'
 Reverted 'hadoop-hdfs-project/hadoop-hdfs'
 Reverted 'hadoop-hdfs-project/hadoop-hdfs/src/test/hdfs'
 Reverted 'hadoop-hdfs-project/hadoop-hdfs/src/main/java'
 Reverted 'hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/datanode'
 Reverted 'hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/hdfs'
 Reverted 'hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/secondary'
 Reverted 'hadoop-hdfs-project/hadoop-hdfs/src/main/native'
 Reverted 'hadoop-mapreduce-project'
 Reverted 'hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-site'
 Reverted 'hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-site/src/site/apt'
 Reverted 'hadoop-mapreduce-project/conf'
 Reverted 'hadoop-mapreduce-project/CHANGES.txt'
 Reverted 'hadoop-mapreduce-project/src/test/mapred'
 Reverted 'hadoop-mapreduce-project/src/test/mapred/org/apache/hadoop/hdfs'
 Reverted 'hadoop-mapreduce-project/src/test/mapred/org/apache/hadoop/fs'
 Reverted 'hadoop-mapreduce-project/src/test/mapred/org/apache/hadoop/ipc'
 Reverted 'hadoop-mapreduce-project/src/contrib'
 Reverted 'hadoop-mapreduce-project/src/contrib/eclipse-plugin'
 Reverted 'hadoop-mapreduce-project/src/contrib/block_forensics'
 Reverted 'hadoop-mapreduce-project/src/contrib/index'
 Reverted 'hadoop-mapreduce-project/src/contrib/data_join'
 Reverted 'hadoop-mapreduce-project/src/contrib/build-contrib.xml'
 Reverted 'hadoop-mapreduce-project/src/contrib/vaidya'
 Reverted 'hadoop-mapreduce-project/src/contrib/build.xml'
 Reverted 'hadoop-mapreduce-project/src/java'
 Reverted 'hadoop-mapreduce-project/src/webapps/job'
 Reverted 'hadoop-mapreduce-project/src/c++'
 Reverted 'hadoop-mapreduce-project/src/examples'
 Reverted 'hadoop-mapreduce-project/hadoop-mapreduce-examples'
 Reverted 
 'hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/resources/mapred-default.xml'
 Reverted 'hadoop-mapreduce-project/bin'
 Reverted 'hadoop-common-project'
 Reverted 'hadoop-common-project/hadoop-common'
 Reverted 'hadoop-common-project/hadoop-common/src/test/core'
 Reverted 'hadoop-common-project/hadoop-common/src/main/java'
 Reverted 'hadoop-common-project/hadoop-common/src/main/docs'
 Reverted 'hadoop-common-project/hadoop-auth'
 Reverted 'hadoop-project'
 Reverted 'hadoop-project/src/site'
 {code}
 Proposed fix (from 
 http://stackoverflow.com/questions/767418/remove-unnecessary-svnmergeinfo-properties):
 {code}
 svn propdel svn:mergeinfo -R
 svn revert .
 svn commit -m "appropriate message"
 {code}
 (To be done on branch-2 and trunk both)





[jira] [Commented] (HADOOP-7967) Need generalized multi-token filesystem support

2012-07-23 Thread Sanjay Radia (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-7967?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13420853#comment-13420853
 ] 

Sanjay Radia commented on HADOOP-7967:
--

@daryn:
bq. I'm not sure we should tie whether a fs has a token to whether it has children.
Only as a default impl. The methods can be overridden when needed. In your 
patch, ViewFileSystem returns null for getCanonicalName and getDelegationToken. 
This will be true for very many file systems that are built using other file 
systems (but one can override).

 Need generalized multi-token filesystem support
 ---

 Key: HADOOP-7967
 URL: https://issues.apache.org/jira/browse/HADOOP-7967
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs, security
Affects Versions: 0.23.1, 0.24.0
Reporter: Daryn Sharp
Assignee: Daryn Sharp
 Attachments: HADOOP-7967-2.patch, HADOOP-7967-3.patch, 
 HADOOP-7967-4.patch, HADOOP-7967-compat.patch, HADOOP-7967.newapi.patch, 
 HADOOP-7967.patch


 Multi-token filesystem support and its interactions with the MR 
 {{TokenCache}} is problematic.  The {{TokenCache}} tries to assume it has the 
 knowledge to know if the tokens for a filesystem are available, which it 
 can't possibly know for multi-token filesystems.  Filtered filesystems are 
 also problematic, such as har on viewfs.  When mergeFs is implemented, it too 
 will become a problem with the current implementation.  Currently 
 {{FileSystem}} will leak tokens even when some tokens are already present.
 The decision for token acquisition, and which tokens, should be pushed all 
 the way down into the {{FileSystem}} level.  The {{TokenCache}} should be 
 ignorant and simply request tokens from each {{FileSystem}}.





[jira] [Created] (HADOOP-8614) IOUtils#skipFully hangs forever on EOF

2012-07-23 Thread Colin Patrick McCabe (JIRA)
Colin Patrick McCabe created HADOOP-8614:


 Summary: IOUtils#skipFully hangs forever on EOF
 Key: HADOOP-8614
 URL: https://issues.apache.org/jira/browse/HADOOP-8614
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.0.1-alpha
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe


IOUtils#skipFully contains this code:

{code}
  public static void skipFully(InputStream in, long len) throws IOException {
    while (len > 0) {
      long ret = in.skip(len);
      if (ret < 0) {
        throw new IOException("Premature EOF from inputStream");
      }
      len -= ret;
    }
  }
{code}

The Java documentation is silent about what exactly skip is supposed to do in 
the event of EOF.  However, I looked at both InputStream#skip and 
ByteArrayInputStream#skip, and they both simply return 0 on EOF (no exception). 
 So it seems safe to assume that this is the standard Java way of doing things 
in an InputStream.

Currently IOUtils#skipFully will loop forever if you ask it to skip past EOF!
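One way to fix it (a sketch, not necessarily the attached HADOOP-8614 patch) is to fall back to read() when skip() returns 0, so a genuine EOF surfaces as read() == -1 instead of an infinite loop:

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;

public class SkipFullyFix {
  public static void skipFully(InputStream in, long len) throws IOException {
    while (len > 0) {
      long ret = in.skip(len);
      if (ret < 0) {
        throw new IOException("Premature EOF from inputStream");
      }
      if (ret == 0) {
        // skip() made no progress; probe with read() to tell EOF apart
        // from a stream that simply declined to skip.
        if (in.read() == -1) {
          throw new IOException("Premature EOF from inputStream");
        }
        ret = 1;                          // the probe consumed one byte
      }
      len -= ret;
    }
  }

  public static void main(String[] args) throws IOException {
    // Skipping exactly to EOF succeeds; skipping past it now fails fast.
    skipFully(new ByteArrayInputStream(new byte[]{1, 2, 3}), 3);
    try {
      skipFully(new ByteArrayInputStream(new byte[]{1}), 5);
    } catch (IOException e) {
      System.out.println("caught: " + e.getMessage());
    }
  }
}
```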





[jira] [Commented] (HADOOP-7967) Need generalized multi-token filesystem support

2012-07-23 Thread Sanjay Radia (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-7967?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13420860#comment-13420860
 ] 

Sanjay Radia commented on HADOOP-7967:
--

bq. the default impl of getDelegationToken will return null if getChildFileSystems() returns null
Sorry, I misspoke; the default impl of getDelegationToken should always return 
null (as it is today).

 Need generalized multi-token filesystem support
 ---

 Key: HADOOP-7967
 URL: https://issues.apache.org/jira/browse/HADOOP-7967
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs, security
Affects Versions: 0.23.1, 0.24.0
Reporter: Daryn Sharp
Assignee: Daryn Sharp
 Attachments: HADOOP-7967-2.patch, HADOOP-7967-3.patch, 
 HADOOP-7967-4.patch, HADOOP-7967-compat.patch, HADOOP-7967.newapi.patch, 
 HADOOP-7967.patch


 Multi-token filesystem support and its interactions with the MR 
 {{TokenCache}} is problematic.  The {{TokenCache}} tries to assume it has the 
 knowledge to know if the tokens for a filesystem are available, which it 
 can't possibly know for multi-token filesystems.  Filtered filesystems are 
 also problematic, such as har on viewfs.  When mergeFs is implemented, it too 
 will become a problem with the current implementation.  Currently 
 {{FileSystem}} will leak tokens even when some tokens are already present.
 The decision for token acquisition, and which tokens, should be pushed all 
 the way down into the {{FileSystem}} level.  The {{TokenCache}} should be 
 ignorant and simply request tokens from each {{FileSystem}}.





[jira] [Commented] (HADOOP-8435) Propdel all svn:mergeinfo

2012-07-23 Thread Harsh J (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8435?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13420863#comment-13420863
 ] 

Harsh J commented on HADOOP-8435:
-

Thanks Eli. I am excluding {{.}} from my propdel per my commands above (I 
revert {{.}} back to its state); will that not cover this?

 Propdel all svn:mergeinfo
 -

 Key: HADOOP-8435
 URL: https://issues.apache.org/jira/browse/HADOOP-8435
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.0.0-alpha, 3.0.0
Reporter: Harsh J
Assignee: Harsh J

 TortoiseSVN/some versions of svn have added several mergeinfo props to 
 Hadoop's svn files/dirs (list below).
 We should propdel that unneeded property, and fix it up. This otherwise 
 causes pain to those who backport with a simple root-dir-down command (svn 
 merge -c num url/path).
 We should also make sure to update the HowToCommit page on advising to avoid 
 mergeinfo additions to prevent this from reoccurring.
 Files affected are, from my propdel revert output earlier today:
 {code}
 Reverted '.'
 Reverted 'hadoop-hdfs-project'
 Reverted 'hadoop-hdfs-project/hadoop-hdfs'
 Reverted 'hadoop-hdfs-project/hadoop-hdfs/src/test/hdfs'
 Reverted 'hadoop-hdfs-project/hadoop-hdfs/src/main/java'
 Reverted 'hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/datanode'
 Reverted 'hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/hdfs'
 Reverted 'hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/secondary'
 Reverted 'hadoop-hdfs-project/hadoop-hdfs/src/main/native'
 Reverted 'hadoop-mapreduce-project'
 Reverted 'hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-site'
 Reverted 'hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-site/src/site/apt'
 Reverted 'hadoop-mapreduce-project/conf'
 Reverted 'hadoop-mapreduce-project/CHANGES.txt'
 Reverted 'hadoop-mapreduce-project/src/test/mapred'
 Reverted 'hadoop-mapreduce-project/src/test/mapred/org/apache/hadoop/hdfs'
 Reverted 'hadoop-mapreduce-project/src/test/mapred/org/apache/hadoop/fs'
 Reverted 'hadoop-mapreduce-project/src/test/mapred/org/apache/hadoop/ipc'
 Reverted 'hadoop-mapreduce-project/src/contrib'
 Reverted 'hadoop-mapreduce-project/src/contrib/eclipse-plugin'
 Reverted 'hadoop-mapreduce-project/src/contrib/block_forensics'
 Reverted 'hadoop-mapreduce-project/src/contrib/index'
 Reverted 'hadoop-mapreduce-project/src/contrib/data_join'
 Reverted 'hadoop-mapreduce-project/src/contrib/build-contrib.xml'
 Reverted 'hadoop-mapreduce-project/src/contrib/vaidya'
 Reverted 'hadoop-mapreduce-project/src/contrib/build.xml'
 Reverted 'hadoop-mapreduce-project/src/java'
 Reverted 'hadoop-mapreduce-project/src/webapps/job'
 Reverted 'hadoop-mapreduce-project/src/c++'
 Reverted 'hadoop-mapreduce-project/src/examples'
 Reverted 'hadoop-mapreduce-project/hadoop-mapreduce-examples'
 Reverted 
 'hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/resources/mapred-default.xml'
 Reverted 'hadoop-mapreduce-project/bin'
 Reverted 'hadoop-common-project'
 Reverted 'hadoop-common-project/hadoop-common'
 Reverted 'hadoop-common-project/hadoop-common/src/test/core'
 Reverted 'hadoop-common-project/hadoop-common/src/main/java'
 Reverted 'hadoop-common-project/hadoop-common/src/main/docs'
 Reverted 'hadoop-common-project/hadoop-auth'
 Reverted 'hadoop-project'
 Reverted 'hadoop-project/src/site'
 {code}
 Proposed fix (from 
 http://stackoverflow.com/questions/767418/remove-unnecessary-svnmergeinfo-properties):
 {code}
 svn propdel svn:mergeinfo -R
 svn revert .
 svn commit -m "appropriate message"
 {code}
 (To be done on branch-2 and trunk both)





[jira] [Updated] (HADOOP-8614) IOUtils#skipFully hangs forever on EOF

2012-07-23 Thread Colin Patrick McCabe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8614?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin Patrick McCabe updated HADOOP-8614:
-

Attachment: HADOOP-8614.001.patch

 IOUtils#skipFully hangs forever on EOF
 --

 Key: HADOOP-8614
 URL: https://issues.apache.org/jira/browse/HADOOP-8614
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.0.1-alpha
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe
 Attachments: HADOOP-8614.001.patch


 IOUtils#skipFully contains this code:
 {code}
   public static void skipFully(InputStream in, long len) throws IOException {
     while (len > 0) {
       long ret = in.skip(len);
       if (ret < 0) {
         throw new IOException("Premature EOF from inputStream");
       }
       len -= ret;
     }
   }
 {code}
 The Java documentation is silent about what exactly skip is supposed to do in 
 the event of EOF.  However, I looked at both InputStream#skip and 
 ByteArrayInputStream#skip, and they both simply return 0 on EOF (no 
 exception).  So it seems safe to assume that this is the standard Java way of 
 doing things in an InputStream.
 Currently IOUtils#skipFully will loop forever if you ask it to skip past EOF!





[jira] [Updated] (HADOOP-8614) IOUtils#skipFully hangs forever on EOF

2012-07-23 Thread Colin Patrick McCabe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8614?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin Patrick McCabe updated HADOOP-8614:
-

Status: Patch Available  (was: Open)

 IOUtils#skipFully hangs forever on EOF
 --

 Key: HADOOP-8614
 URL: https://issues.apache.org/jira/browse/HADOOP-8614
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.0.1-alpha
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe
 Attachments: HADOOP-8614.001.patch


 IOUtils#skipFully contains this code:
 {code}
   public static void skipFully(InputStream in, long len) throws IOException {
     while (len > 0) {
       long ret = in.skip(len);
       if (ret < 0) {
         throw new IOException("Premature EOF from inputStream");
       }
       len -= ret;
     }
   }
 {code}
 The Java documentation is silent about what exactly skip is supposed to do in 
 the event of EOF.  However, I looked at both InputStream#skip and 
 ByteArrayInputStream#skip, and they both simply return 0 on EOF (no 
 exception).  So it seems safe to assume that this is the standard Java way of 
 doing things in an InputStream.
 Currently IOUtils#skipFully will loop forever if you ask it to skip past EOF!





[jira] [Commented] (HADOOP-7967) Need generalized multi-token filesystem support

2012-07-23 Thread Sanjay Radia (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-7967?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13420882#comment-13420882
 ] 

Sanjay Radia commented on HADOOP-7967:
--

I think you are right: (getCanonicalName() != null) => FS has token

 Need generalized multi-token filesystem support
 ---

 Key: HADOOP-7967
 URL: https://issues.apache.org/jira/browse/HADOOP-7967
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs, security
Affects Versions: 0.23.1, 0.24.0
Reporter: Daryn Sharp
Assignee: Daryn Sharp
 Attachments: HADOOP-7967-2.patch, HADOOP-7967-3.patch, 
 HADOOP-7967-4.patch, HADOOP-7967-compat.patch, HADOOP-7967.newapi.patch, 
 HADOOP-7967.patch


 Multi-token filesystem support and its interactions with the MR 
 {{TokenCache}} is problematic.  The {{TokenCache}} tries to assume it has the 
 knowledge to know if the tokens for a filesystem are available, which it 
 can't possibly know for multi-token filesystems.  Filtered filesystems are 
 also problematic, such as har on viewfs.  When mergeFs is implemented, it too 
 will become a problem with the current implementation.  Currently 
 {{FileSystem}} will leak tokens even when some tokens are already present.
 The decision for token acquisition, and which tokens, should be pushed all 
 the way down into the {{FileSystem}} level.  The {{TokenCache}} should be 
 ignorant and simply request tokens from each {{FileSystem}}.





[jira] [Commented] (HADOOP-8435) Propdel all svn:mergeinfo

2012-07-23 Thread Eli Collins (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8435?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13420887#comment-13420887
 ] 

Eli Collins commented on HADOOP-8435:
-

Ah, sorry, I missed the {{svn revert .}} step.  I'm OK with this.

 Propdel all svn:mergeinfo
 -

 Key: HADOOP-8435
 URL: https://issues.apache.org/jira/browse/HADOOP-8435
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.0.0-alpha, 3.0.0
Reporter: Harsh J
Assignee: Harsh J

 TortoiseSVN/some versions of svn have added several mergeinfo props to 
 Hadoop's svn files/dirs (list below).
 We should propdel that unneeded property, and fix it up. This otherwise 
 causes pain to those who backport with a simple root-dir-down command (svn 
 merge -c num url/path).
 We should also make sure to update the HowToCommit page on advising to avoid 
 mergeinfo additions to prevent this from reoccurring.
 Files affected are, from my propdel revert output earlier today:
 {code}
 Reverted '.'
 Reverted 'hadoop-hdfs-project'
 Reverted 'hadoop-hdfs-project/hadoop-hdfs'
 Reverted 'hadoop-hdfs-project/hadoop-hdfs/src/test/hdfs'
 Reverted 'hadoop-hdfs-project/hadoop-hdfs/src/main/java'
 Reverted 'hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/datanode'
 Reverted 'hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/hdfs'
 Reverted 'hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/secondary'
 Reverted 'hadoop-hdfs-project/hadoop-hdfs/src/main/native'
 Reverted 'hadoop-mapreduce-project'
 Reverted 'hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-site'
 Reverted 'hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-site/src/site/apt'
 Reverted 'hadoop-mapreduce-project/conf'
 Reverted 'hadoop-mapreduce-project/CHANGES.txt'
 Reverted 'hadoop-mapreduce-project/src/test/mapred'
 Reverted 'hadoop-mapreduce-project/src/test/mapred/org/apache/hadoop/hdfs'
 Reverted 'hadoop-mapreduce-project/src/test/mapred/org/apache/hadoop/fs'
 Reverted 'hadoop-mapreduce-project/src/test/mapred/org/apache/hadoop/ipc'
 Reverted 'hadoop-mapreduce-project/src/contrib'
 Reverted 'hadoop-mapreduce-project/src/contrib/eclipse-plugin'
 Reverted 'hadoop-mapreduce-project/src/contrib/block_forensics'
 Reverted 'hadoop-mapreduce-project/src/contrib/index'
 Reverted 'hadoop-mapreduce-project/src/contrib/data_join'
 Reverted 'hadoop-mapreduce-project/src/contrib/build-contrib.xml'
 Reverted 'hadoop-mapreduce-project/src/contrib/vaidya'
 Reverted 'hadoop-mapreduce-project/src/contrib/build.xml'
 Reverted 'hadoop-mapreduce-project/src/java'
 Reverted 'hadoop-mapreduce-project/src/webapps/job'
 Reverted 'hadoop-mapreduce-project/src/c++'
 Reverted 'hadoop-mapreduce-project/src/examples'
 Reverted 'hadoop-mapreduce-project/hadoop-mapreduce-examples'
 Reverted 
 'hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/resources/mapred-default.xml'
 Reverted 'hadoop-mapreduce-project/bin'
 Reverted 'hadoop-common-project'
 Reverted 'hadoop-common-project/hadoop-common'
 Reverted 'hadoop-common-project/hadoop-common/src/test/core'
 Reverted 'hadoop-common-project/hadoop-common/src/main/java'
 Reverted 'hadoop-common-project/hadoop-common/src/main/docs'
 Reverted 'hadoop-common-project/hadoop-auth'
 Reverted 'hadoop-project'
 Reverted 'hadoop-project/src/site'
 {code}
 Proposed set of fix (from 
 http://stackoverflow.com/questions/767418/remove-unnecessary-svnmergeinfo-properties):
 {code}
 svn propdel svn:mergeinfo -R
 svn revert .
 svn commit -m "appropriate message"
 {code}
 (To be done on branch-2 and trunk both)





[jira] [Commented] (HADOOP-8614) IOUtils#skipFully hangs forever on EOF

2012-07-23 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8614?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13420896#comment-13420896
 ] 

Hadoop QA commented on HADOOP-8614:
---

-1 overall.  Here are the results of testing the latest attachment 
  http://issues.apache.org/jira/secure/attachment/12537591/HADOOP-8614.001.patch
  against trunk revision .

+1 @author.  The patch does not contain any @author tags.

+1 tests included.  The patch appears to include 1 new or modified test 
files.

+1 javac.  The applied patch does not increase the total number of javac 
compiler warnings.

+1 javadoc.  The javadoc tool did not generate any warning messages.

+1 eclipse:eclipse.  The patch built with eclipse:eclipse.

+1 findbugs.  The patch does not introduce any new Findbugs (version 1.3.9) 
warnings.

+1 release audit.  The applied patch does not increase the total number of 
release audit warnings.

-1 core tests.  The patch failed these unit tests in 
hadoop-common-project/hadoop-common:

  org.apache.hadoop.ha.TestZKFailoverController

+1 contrib tests.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/1210//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/1210//console

This message is automatically generated.

 IOUtils#skipFully hangs forever on EOF
 --

 Key: HADOOP-8614
 URL: https://issues.apache.org/jira/browse/HADOOP-8614
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.0.1-alpha
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe
 Attachments: HADOOP-8614.001.patch


 IOUtils#skipFully contains this code:
 {code}
   public static void skipFully(InputStream in, long len) throws IOException {
     while (len > 0) {
       long ret = in.skip(len);
       if (ret < 0) {
         throw new IOException("Premature EOF from inputStream");
       }
       len -= ret;
     }
   }
 {code}
 The Java documentation is silent about what exactly skip is supposed to do in 
 the event of EOF.  However, I looked at both InputStream#skip and 
 ByteArrayInputStream#skip, and they both simply return 0 on EOF (no 
 exception).  So it seems safe to assume that this is the standard Java way of 
 doing things in an InputStream.
 Currently IOUtils#skipFully will loop forever if you ask it to skip past EOF!
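 A sketch of one possible fix (illustrative only, not the attached HADOOP-8614.001.patch): when skip() returns 0, probe with read() to distinguish real EOF from a stream that merely declined to skip.

```java
import java.io.ByteArrayInputStream;
import java.io.EOFException;
import java.io.IOException;
import java.io.InputStream;

public class SkipFullyDemo {
  // Sketch only: a zero return from skip() is ambiguous, so fall back
  // to a single read(); -1 from read() is unambiguous EOF.
  public static void skipFully(InputStream in, long len) throws IOException {
    while (len > 0) {
      long ret = in.skip(len);
      if (ret < 0) {
        throw new IOException("Premature EOF from inputStream");
      }
      if (ret == 0) {
        if (in.read() == -1) {
          throw new EOFException("Premature EOF from inputStream");
        }
        ret = 1; // one byte consumed by the probing read()
      }
      len -= ret;
    }
  }

  public static void main(String[] args) throws IOException {
    InputStream in = new ByteArrayInputStream(new byte[] {1, 2, 3});
    skipFully(in, 3); // fine: skips the whole stream
    try {
      skipFully(in, 1); // past EOF: must throw rather than hang
    } catch (EOFException e) {
      System.out.println("EOF detected");
    }
  }
}
```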





[jira] [Assigned] (HADOOP-8612) Backport HADOOP-8599 to branch-1 (Non empty response when read beyond eof)

2012-07-23 Thread Eli Collins (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8612?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eli Collins reassigned HADOOP-8612:
---

Assignee: Eli Collins

 Backport HADOOP-8599 to branch-1 (Non empty response when read beyond eof)
 --

 Key: HADOOP-8612
 URL: https://issues.apache.org/jira/browse/HADOOP-8612
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs
Affects Versions: 1.0.0
Reporter: Matt Foley
Assignee: Eli Collins
 Attachments: hadoop-8599-b1.txt


 When FileSystem.getFileBlockLocations(file,start,len) is called with start 
 argument equal to the file size, the response is not empty. See HADOOP-8599 
 for details and tiny patch.
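 The expected boundary behavior can be modeled in plain Java (a toy model with invented names, not Hadoop's FileSystem code): a request starting at or beyond the file size should map to zero located bytes, i.e. an empty response.

```java
public class BlockLocationModel {
  // Toy model: how many bytes of the range (start, len) fall inside a
  // file of fileSize bytes. Zero means getFileBlockLocations should
  // return an empty array.
  public static long locatedLength(long fileSize, long start, long len) {
    if (start < 0 || len < 0) {
      throw new IllegalArgumentException("start and len must be non-negative");
    }
    if (start >= fileSize) {
      return 0; // at or past EOF: empty response
    }
    return Math.min(len, fileSize - start);
  }

  public static void main(String[] args) {
    System.out.println(locatedLength(100, 100, 10)); // 0 (start == file size)
    System.out.println(locatedLength(100, 90, 20));  // 10 (clamped at EOF)
  }
}
```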





[jira] [Updated] (HADOOP-8612) Backport HADOOP-8599 to branch-1 (Non empty response when read beyond eof)

2012-07-23 Thread Eli Collins (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8612?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eli Collins updated HADOOP-8612:


Attachment: hadoop-8599-b1.txt

Patch attached. Same as the trunk version.

 Backport HADOOP-8599 to branch-1 (Non empty response when read beyond eof)
 --

 Key: HADOOP-8612
 URL: https://issues.apache.org/jira/browse/HADOOP-8612
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs
Affects Versions: 1.0.0
Reporter: Matt Foley
Assignee: Eli Collins
 Attachments: hadoop-8599-b1.txt


 When FileSystem.getFileBlockLocations(file,start,len) is called with start 
 argument equal to the file size, the response is not empty. See HADOOP-8599 
 for details and tiny patch.





[jira] [Updated] (HADOOP-7967) Need generalized multi-token filesystem support

2012-07-23 Thread Daryn Sharp (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-7967?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daryn Sharp updated HADOOP-7967:


Attachment: HADOOP-7967.newapi.2.patch

 Need generalized multi-token filesystem support
 ---

 Key: HADOOP-7967
 URL: https://issues.apache.org/jira/browse/HADOOP-7967
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs, security
Affects Versions: 0.23.1, 0.24.0
Reporter: Daryn Sharp
Assignee: Daryn Sharp
 Attachments: HADOOP-7967-2.patch, HADOOP-7967-3.patch, 
 HADOOP-7967-4.patch, HADOOP-7967-compat.patch, HADOOP-7967.newapi.2.patch, 
 HADOOP-7967.newapi.patch, HADOOP-7967.patch


 Multi-token filesystem support and its interactions with the MR 
 {{TokenCache}} is problematic.  The {{TokenCache}} tries to assume it has the 
 knowledge to know if the tokens for a filesystem are available, which it 
 can't possibly know for multi-token filesystems.  Filtered filesystems are 
 also problematic, such as har on viewfs.  When mergeFs is implemented, it too 
 will become a problem with the current implementation.  Currently 
 {{FileSystem}} will leak tokens even when some tokens are already present.
 The decision for token acquisition, and which tokens, should be pushed all 
 the way down into the {{FileSystem}} level.  The {{TokenCache}} should be 
 ignorant and simply request tokens from each {{FileSystem}}.
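 The proposed direction can be sketched with toy interfaces (hypothetical names, not the attached patches): each filesystem reports its own tokens, and a filtered or merged filesystem simply delegates to its children, so the cache never has to guess.

```java
import java.util.ArrayList;
import java.util.List;

public class MultiTokenDemo {
  // Toy stand-in for the "push token acquisition down" idea.
  public interface Fs {
    List<String> collectTokens(); // tokens this fs (and any children) need
  }

  public static class SimpleFs implements Fs {
    private final String token;
    public SimpleFs(String token) { this.token = token; }
    public List<String> collectTokens() { return List.of(token); }
  }

  // A filtered/viewfs-style wrapper just asks its children; it has no
  // special knowledge of its own.
  public static class ViewFs implements Fs {
    private final List<Fs> children;
    public ViewFs(List<Fs> children) { this.children = children; }
    public List<String> collectTokens() {
      List<String> all = new ArrayList<>();
      for (Fs c : children) all.addAll(c.collectTokens());
      return all;
    }
  }

  public static void main(String[] args) {
    Fs view = new ViewFs(List.of(new SimpleFs("nn1-token"), new SimpleFs("nn2-token")));
    System.out.println(view.collectTokens()); // [nn1-token, nn2-token]
  }
}
```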





[jira] [Created] (HADOOP-8615) EOFException in DecompressorStream.java needs to be more verbose

2012-07-23 Thread Jeff Lord (JIRA)
Jeff Lord created HADOOP-8615:
-

 Summary: EOFException in DecompressorStream.java needs to be more 
verbose
 Key: HADOOP-8615
 URL: https://issues.apache.org/jira/browse/HADOOP-8615
 Project: Hadoop Common
  Issue Type: Bug
  Components: io
Affects Versions: 0.20.2
Reporter: Jeff Lord


In ./src/core/org/apache/hadoop/io/compress/DecompressorStream.java

The following exception should at least pass back the file that it encounters 
this error in relation to:

  protected void getCompressedData() throws IOException {
checkStream();

int n = in.read(buffer, 0, buffer.length);
if (n == -1) {
  throw new EOFException("Unexpected end of input stream");
}


This would help greatly to debug bad/corrupt files.
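A sketch of what a more verbose exception could look like (illustrative only; `sourceDescription` is an invented field, not part of DecompressorStream's actual API):

```java
import java.io.ByteArrayInputStream;
import java.io.EOFException;
import java.io.IOException;
import java.io.InputStream;

// Hedged sketch: a decompressor-style reader constructed with a
// description of its source (e.g. the file path), so a truncated input
// can be named in the EOFException message.
public class VerboseEofDemo {
  private final InputStream in;
  private final String sourceDescription;
  private final byte[] buffer = new byte[512];

  public VerboseEofDemo(InputStream in, String sourceDescription) {
    this.in = in;
    this.sourceDescription = sourceDescription;
  }

  public int getCompressedData() throws IOException {
    int n = in.read(buffer, 0, buffer.length);
    if (n == -1) {
      // Include the source so bad/corrupt files are easy to find.
      throw new EOFException("Unexpected end of input stream: " + sourceDescription);
    }
    return n;
  }

  public static void main(String[] args) throws IOException {
    VerboseEofDemo d =
        new VerboseEofDemo(new ByteArrayInputStream(new byte[0]), "/data/part-00000.gz");
    try {
      d.getCompressedData();
    } catch (EOFException e) {
      System.out.println(e.getMessage());
    }
  }
}
```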





[jira] [Updated] (HADOOP-8613) AbstractDelegationTokenIdentifier#getUser() should set token auth type

2012-07-23 Thread Daryn Sharp (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8613?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daryn Sharp updated HADOOP-8613:


Attachment: HADOOP-8613.patch
HADOOP-8613.branch-1.patch

 AbstractDelegationTokenIdentifier#getUser() should set token auth type
 --

 Key: HADOOP-8613
 URL: https://issues.apache.org/jira/browse/HADOOP-8613
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 1.0.0, 0.23.0, 2.0.0-alpha, 3.0.0
Reporter: Daryn Sharp
Assignee: Daryn Sharp
Priority: Critical
 Attachments: HADOOP-8613.branch-1.patch, HADOOP-8613.patch


 {{AbstractDelegationTokenIdentifier#getUser()}} returns the UGI associated 
 with a token.  The UGI's auth type will either be SIMPLE for non-proxy 
 tokens, or PROXY (effective user) and SIMPLE (real user).  Instead of SIMPLE, 
 it needs to be TOKEN.
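 The point can be illustrated with a toy model (invented types, not Hadoop's UserGroupInformation): a user materialized from a delegation token identifier should carry TOKEN authentication rather than the SIMPLE default.

```java
public class TokenAuthDemo {
  public enum AuthMethod { SIMPLE, PROXY, TOKEN }

  public static class User {
    public final String name;
    public final AuthMethod auth;
    public User(String name, AuthMethod auth) {
      this.name = name;
      this.auth = auth;
    }
  }

  // Toy stand-in for AbstractDelegationTokenIdentifier#getUser(): the
  // issue is that the returned user carried SIMPLE auth; it should be
  // TOKEN, since the identity was established via a delegation token.
  public static User getUserFromTokenIdentifier(String owner) {
    return new User(owner, AuthMethod.TOKEN);
  }

  public static void main(String[] args) {
    System.out.println(getUserFromTokenIdentifier("alice").auth); // TOKEN
  }
}
```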





[jira] [Commented] (HADOOP-8613) AbstractDelegationTokenIdentifier#getUser() should set token auth type

2012-07-23 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8613?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13420961#comment-13420961
 ] 

Hadoop QA commented on HADOOP-8613:
---

+1 overall.  Here are the results of testing the latest attachment 
  http://issues.apache.org/jira/secure/attachment/12537606/HADOOP-8613.patch
  against trunk revision .

+1 @author.  The patch does not contain any @author tags.

+1 tests included.  The patch appears to include 1 new or modified test 
files.

+1 javac.  The applied patch does not increase the total number of javac 
compiler warnings.

+1 javadoc.  The javadoc tool did not generate any warning messages.

+1 eclipse:eclipse.  The patch built with eclipse:eclipse.

+1 findbugs.  The patch does not introduce any new Findbugs (version 1.3.9) 
warnings.

+1 release audit.  The applied patch does not increase the total number of 
release audit warnings.

+1 core tests.  The patch passed unit tests in 
hadoop-common-project/hadoop-common.

+1 contrib tests.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/1212//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/1212//console

This message is automatically generated.

 AbstractDelegationTokenIdentifier#getUser() should set token auth type
 --

 Key: HADOOP-8613
 URL: https://issues.apache.org/jira/browse/HADOOP-8613
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 1.0.0, 0.23.0, 2.0.0-alpha, 3.0.0
Reporter: Daryn Sharp
Assignee: Daryn Sharp
Priority: Critical
 Attachments: HADOOP-8613.branch-1.patch, HADOOP-8613.patch


 {{AbstractDelegationTokenIdentifier#getUser()}} returns the UGI associated 
 with a token.  The UGI's auth type will either be SIMPLE for non-proxy 
 tokens, or PROXY (effective user) and SIMPLE (real user).  Instead of SIMPLE, 
 it needs to be TOKEN.





[jira] [Commented] (HADOOP-8612) Backport HADOOP-8599 to branch-1 (Non empty response when read beyond eof)

2012-07-23 Thread Todd Lipcon (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8612?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13420962#comment-13420962
 ] 

Todd Lipcon commented on HADOOP-8612:
-

+1

 Backport HADOOP-8599 to branch-1 (Non empty response when read beyond eof)
 --

 Key: HADOOP-8612
 URL: https://issues.apache.org/jira/browse/HADOOP-8612
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs
Affects Versions: 1.0.0
Reporter: Matt Foley
Assignee: Eli Collins
 Attachments: hadoop-8599-b1.txt


 When FileSystem.getFileBlockLocations(file,start,len) is called with start 
 argument equal to the file size, the response is not empty. See HADOOP-8599 
 for details and tiny patch.





[jira] [Resolved] (HADOOP-8612) Backport HADOOP-8599 to branch-1 (Non empty response when read beyond eof)

2012-07-23 Thread Eli Collins (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8612?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eli Collins resolved HADOOP-8612.
-

  Resolution: Fixed
   Fix Version/s: 1.2.0
Target Version/s:   (was: 1.2.0)

I've committed this, thanks for the review Todd.

 Backport HADOOP-8599 to branch-1 (Non empty response when read beyond eof)
 --

 Key: HADOOP-8612
 URL: https://issues.apache.org/jira/browse/HADOOP-8612
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs
Affects Versions: 1.0.0
Reporter: Matt Foley
Assignee: Eli Collins
 Fix For: 1.2.0

 Attachments: hadoop-8599-b1.txt


 When FileSystem.getFileBlockLocations(file,start,len) is called with start 
 argument equal to the file size, the response is not empty. See HADOOP-8599 
 for details and tiny patch.





[jira] [Commented] (HADOOP-8599) Non empty response from FileSystem.getFileBlockLocations when asking for data beyond the end of file

2012-07-23 Thread Kihwal Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8599?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13420983#comment-13420983
 ] 

Kihwal Lee commented on HADOOP-8599:


I think TestCombineFileInputFormat.testForEmptyFile started failing after this. 
The split size on an empty input file used to be 1, but it's now 0.

 Non empty response from FileSystem.getFileBlockLocations when asking for data 
 beyond the end of file 
 -

 Key: HADOOP-8599
 URL: https://issues.apache.org/jira/browse/HADOOP-8599
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs
Affects Versions: 1.0.3, 0.23.1, 2.0.0-alpha
Reporter: Andrey Klochkov
Assignee: Andrey Klochkov
 Fix For: 0.23.3, 3.0.0, 2.2.0-alpha

 Attachments: HADOOP-8859-branch-0.23.patch


 When FileSystem.getFileBlockLocations(file,start,len) is called with start 
 argument equal to the file size, the response is not empty. There is a test 
 TestGetFileBlockLocations.testGetFileBlockLocations2 which uses randomly 
 generated start and len arguments when calling 
 FileSystem.getFileBlockLocations and the test fails randomly (when the 
 generated start value equals to the file size).





[jira] [Commented] (HADOOP-8599) Non empty response from FileSystem.getFileBlockLocations when asking for data beyond the end of file

2012-07-23 Thread Kihwal Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8599?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13420985#comment-13420985
 ] 

Kihwal Lee commented on HADOOP-8599:


MAPREDUCE-4470 has been filed.

 Non empty response from FileSystem.getFileBlockLocations when asking for data 
 beyond the end of file 
 -

 Key: HADOOP-8599
 URL: https://issues.apache.org/jira/browse/HADOOP-8599
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs
Affects Versions: 1.0.3, 0.23.1, 2.0.0-alpha
Reporter: Andrey Klochkov
Assignee: Andrey Klochkov
 Fix For: 0.23.3, 3.0.0, 2.2.0-alpha

 Attachments: HADOOP-8859-branch-0.23.patch


 When FileSystem.getFileBlockLocations(file,start,len) is called with start 
 argument equal to the file size, the response is not empty. There is a test 
 TestGetFileBlockLocations.testGetFileBlockLocations2 which uses randomly 
 generated start and len arguments when calling 
 FileSystem.getFileBlockLocations and the test fails randomly (when the 
 generated start value equals to the file size).





[jira] [Reopened] (HADOOP-8431) Running distcp wo args throws IllegalArgumentException

2012-07-23 Thread Eli Collins (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8431?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eli Collins reopened HADOOP-8431:
-


I just tried this and confirmed it still fails with the latest build. In the 
future please try to reproduce the issue before you close it.

hadoop-3.0.0-SNAPSHOT $ ./bin/hadoop distcp
12/07/23 19:21:48 ERROR tools.DistCp: Invalid arguments: 
java.lang.IllegalArgumentException: Target path not specified
at org.apache.hadoop.tools.OptionsParser.parse(OptionsParser.java:86)
at org.apache.hadoop.tools.DistCp.run(DistCp.java:102)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
at org.apache.hadoop.tools.DistCp.main(DistCp.java:368)


 Running distcp wo args throws IllegalArgumentException
 --

 Key: HADOOP-8431
 URL: https://issues.apache.org/jira/browse/HADOOP-8431
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.0.0-alpha
Reporter: Eli Collins
  Labels: newbie

 Running distcp w/o args results in the following:
 {noformat}
 hadoop-3.0.0-SNAPSHOT $ ./bin/hadoop distcp
 12/05/23 18:49:04 ERROR tools.DistCp: Invalid arguments: 
 java.lang.IllegalArgumentException: Target path not specified
   at org.apache.hadoop.tools.OptionsParser.parse(OptionsParser.java:86)
   at org.apache.hadoop.tools.DistCp.run(DistCp.java:102)
   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
   at org.apache.hadoop.tools.DistCp.main(DistCp.java:368)
 Invalid arguments: Target path not specified
 {noformat}





[jira] [Updated] (HADOOP-8431) Running distcp wo args throws IllegalArgumentException

2012-07-23 Thread Eli Collins (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8431?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eli Collins updated HADOOP-8431:


Target Version/s: 2.2.0-alpha  (was: 2.1.0-alpha)

 Running distcp wo args throws IllegalArgumentException
 --

 Key: HADOOP-8431
 URL: https://issues.apache.org/jira/browse/HADOOP-8431
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.0.0-alpha
Reporter: Eli Collins
  Labels: newbie

 Running distcp w/o args results in the following:
 {noformat}
 hadoop-3.0.0-SNAPSHOT $ ./bin/hadoop distcp
 12/05/23 18:49:04 ERROR tools.DistCp: Invalid arguments: 
 java.lang.IllegalArgumentException: Target path not specified
   at org.apache.hadoop.tools.OptionsParser.parse(OptionsParser.java:86)
   at org.apache.hadoop.tools.DistCp.run(DistCp.java:102)
   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
   at org.apache.hadoop.tools.DistCp.main(DistCp.java:368)
 Invalid arguments: Target path not specified
 {noformat}





[jira] [Assigned] (HADOOP-8568) DNS#reverseDns fails on IPv6 addresses

2012-07-23 Thread Karthik Kambatla (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8568?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karthik Kambatla reassigned HADOOP-8568:


Assignee: Karthik Kambatla

 DNS#reverseDns fails on IPv6 addresses
 --

 Key: HADOOP-8568
 URL: https://issues.apache.org/jira/browse/HADOOP-8568
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.0.0-alpha
Reporter: Eli Collins
Assignee: Karthik Kambatla
  Labels: newbie

 DNS#reverseDns assumes hostIp is a v4 address (4 parts separated by dots), 
 blows up if given a v6 address:
 {noformat}
 Caused by: java.lang.ArrayIndexOutOfBoundsException: 3
 at org.apache.hadoop.net.DNS.reverseDns(DNS.java:79)
 at org.apache.hadoop.net.DNS.getHosts(DNS.java:237)
 at org.apache.hadoop.net.DNS.getDefaultHost(DNS.java:340)
 at org.apache.hadoop.net.DNS.getDefaultHost(DNS.java:358)
 at org.apache.hadoop.net.DNS.getDefaultHost(DNS.java:337)
 at org.apache.hadoop.hbase.master.HMaster.<init>(HMaster.java:235)
 at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native 
 Method)
 at 
 sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39)
 at 
 sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
 at java.lang.reflect.Constructor.newInstance(Constructor.java:513)
 at 
 org.apache.hadoop.hbase.master.HMaster.constructMaster(HMaster.java:1649)
 {noformat}
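 The failing assumption is that hostIp splits into exactly four dotted parts; indexing parts[3] on an IPv6 literal is what raises the ArrayIndexOutOfBoundsException above. A guarded sketch of the dotted-quad reversal (illustrative only, not the eventual patch):

```java
public class ReverseDnsDemo {
  // Builds the in-addr.arpa PTR name for an IPv4 dotted quad, but
  // rejects anything that is not four parts (e.g. an IPv6 literal)
  // instead of blowing up with an ArrayIndexOutOfBoundsException.
  public static String reversePtrName(String hostIp) {
    String[] parts = hostIp.split("\\.");
    if (parts.length != 4) {
      throw new IllegalArgumentException("not an IPv4 dotted quad: " + hostIp);
    }
    return parts[3] + "." + parts[2] + "." + parts[1] + "." + parts[0]
        + ".in-addr.arpa";
  }

  public static void main(String[] args) {
    System.out.println(reversePtrName("10.20.30.40")); // 40.30.20.10.in-addr.arpa
    try {
      reversePtrName("fe80::1"); // IPv6 literal: rejected cleanly
    } catch (IllegalArgumentException e) {
      System.out.println("rejected: " + e.getMessage());
    }
  }
}
```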





[jira] [Updated] (HADOOP-8337) Clarify the usage documentation for fetchdt

2012-07-23 Thread Eli Collins (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8337?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eli Collins updated HADOOP-8337:


 Target Version/s: 2.2.0-alpha
Affects Version/s: 2.0.0-alpha

 Clarify the usage documentation for fetchdt
 ---

 Key: HADOOP-8337
 URL: https://issues.apache.org/jira/browse/HADOOP-8337
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.0.0-alpha
Reporter: Owen O'Malley
Assignee: Owen O'Malley

 The -webservice URL option in fetchdt is very fragile. In particular, it must 
 be precisely in the form of "http://nn:port" and NOT include the 
 trailing slash. Furthermore, after HDFS-2617 the nn must be a lowercase 
 hostname and not an IP address.
 I propose at least documenting all of the restrictions...





[jira] [Created] (HADOOP-8616) ViewFS configuration requires a trailing slash

2012-07-23 Thread Eli Collins (JIRA)
Eli Collins created HADOOP-8616:
---

 Summary: ViewFS configuration requires a trailing slash
 Key: HADOOP-8616
 URL: https://issues.apache.org/jira/browse/HADOOP-8616
 Project: Hadoop Common
  Issue Type: Bug
  Components: viewfs
Affects Versions: 2.0.0-alpha, 0.23.0
Reporter: Eli Collins


If the viewfs config doesn't have a trailing slash commands like the following 
fail:

{noformat}
bash-3.2$ hadoop fs -ls
-ls: Can not create a Path from an empty string
Usage: hadoop fs [generic options] -ls [-d] [-h] [-R] [path ...]
{noformat}

We hit this problem with the following configuration because hdfs://ha-nn-uri 
does not have a trailing /.

{noformat}
  <property>
    <name>fs.viewfs.mounttable.foo.link./nameservices/ha-nn-uri</name>
    <value>hdfs://ha-nn-uri</value>
  </property>
{noformat}
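Until the parsing is fixed, the workaround implied by the description is simply to keep the trailing slash in the link value:

{noformat}
  <property>
    <name>fs.viewfs.mounttable.foo.link./nameservices/ha-nn-uri</name>
    <value>hdfs://ha-nn-uri/</value>
  </property>
{noformat}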





[jira] [Commented] (HADOOP-8616) ViewFS configuration requires a trailing slash

2012-07-23 Thread Eli Collins (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8616?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13421136#comment-13421136
 ] 

Eli Collins commented on HADOOP-8616:
-

Thanks to Stephen Chu for identifying this bug.

 ViewFS configuration requires a trailing slash
 --

 Key: HADOOP-8616
 URL: https://issues.apache.org/jira/browse/HADOOP-8616
 Project: Hadoop Common
  Issue Type: Bug
  Components: viewfs
Affects Versions: 0.23.0, 2.0.0-alpha
Reporter: Eli Collins

 If the viewfs config doesn't have a trailing slash commands like the 
 following fail:
 {noformat}
 bash-3.2$ hadoop fs -ls
 -ls: Can not create a Path from an empty string
 Usage: hadoop fs [generic options] -ls [-d] [-h] [-R] [path ...]
 {noformat}
 We hit this problem with the following configuration because 
 hdfs://ha-nn-uri does not have a trailing /.
 {noformat}
   <property>
     <name>fs.viewfs.mounttable.foo.link./nameservices/ha-nn-uri</name>
     <value>hdfs://ha-nn-uri</value>
   </property>
 {noformat}





[jira] [Assigned] (HADOOP-8369) Failing tests in branch-2

2012-07-23 Thread Eli Collins (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8369?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eli Collins reassigned HADOOP-8369:
---

Assignee: (was: Eli Collins)

 Failing tests in branch-2
 -

 Key: HADOOP-8369
 URL: https://issues.apache.org/jira/browse/HADOOP-8369
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.0.0-alpha
Reporter: Arun C Murthy
 Fix For: 2.1.0-alpha


 Running org.apache.hadoop.io.compress.TestCodec
 Tests run: 20, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 27.789 sec 
  FAILURE!
 --
 Running org.apache.hadoop.fs.viewfs.TestFSMainOperationsLocalFileSystem
 Tests run: 98, Failures: 0, Errors: 98, Skipped: 0, Time elapsed: 1.633 sec 
  FAILURE!
 --
 Running org.apache.hadoop.fs.viewfs.TestViewFsTrash
 Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 0.658 sec  
 FAILURE!
 
 TestCodec failed since I didn't pass -Pnative, the test could be improved to 
 ensure snappy tests are skipped if native hadoop isn't present.
