Jenkins build is back to normal : Hadoop-common-trunk-Java8 #201

2015-05-19 Thread Apache Jenkins Server
See https://builds.apache.org/job/Hadoop-common-trunk-Java8/201/changes



Re: Protocol Buffers version

2015-05-19 Thread Sangjin Lee
I pushed it out to a github fork:
https://github.com/sjlee/protobuf/tree/2.5.0-incompatibility

We haven't observed compatibility issues other than these.

On Tue, May 19, 2015 at 10:05 PM, Chris Nauroth cnaur...@hortonworks.com
wrote:

 Thanks, Sangjin.  I'd be interested in taking a peek at a personal GitHub
 repo or even just a patch file of those changes.  If there were
 incompatibilities, then that doesn't bode well for an upgrade to 2.6.

 --Chris Nauroth




 On 5/19/15, 8:40 PM, Sangjin Lee sj...@apache.org wrote:

When we moved to Hadoop 2.4, the associated protobuf upgrade (2.4.1 ->
 2.5.0) proved to be one of the bigger problems. In our case, most of our
 users were using protobuf 2.4.x or earlier.
 
 We identified a couple of places where the backward compatibility was
 broken, and patched for those issues. We've been running with that patched
 version of protobuf 2.5.0 since. I can push out those changes to github or
 something if others are interested FWIW.
 
 Regards,
 Sangjin
 
 On Tue, May 19, 2015 at 9:59 AM, Colin P. McCabe cmcc...@apache.org
 wrote:
 
  I agree that the protobuf 2.4.1 -> 2.5.0 transition could have been
  handled a lot better by Google.  Specifically, since it was an
  API-breaking upgrade, it should have been a major version bump for the
  Java library version.  I also feel that removing the download links
  for the old versions of the native libraries was careless, and
  certainly burned some of our Hadoop users.
 
  However, I don't see any reason to believe that protobuf 2.6 will not
  be wire-compatible with earlier versions.  Google has actually been
  pretty good about preserving wire-compatibility... just not about API
  compatibility.  If we want to get a formal statement from the project,
  we can, but I would be pretty shocked if they decided to change the
  protocol in a backwards-incompatible way in a minor version release.
 
  I do think there are some potential issues for our users of bumping
  the library version in a minor Hadoop release.  Until we implement
  full dependency isolation for Hadoop, there may be some disruptions to
  end-users from changing Java dependency versions.  Similarly, users
  will need to install a new native protobuf library version as well.
  So I think we should bump the protobuf versions in Hadoop 3.0, but not
  in 2.x.
 
  cheers,
  Colin
 
  On Fri, May 15, 2015 at 4:55 AM, Alan Burlison
 alan.burli...@oracle.com
  wrote:
   On 15/05/2015 09:44, Steve Loughran wrote:
  
   Now: why do you want to use a later version of protobuf.jar? Is it
   because it is there? Or is there a tangible need?
  
  
   No, it's because I'm looking at this from a platform perspective: We
 have
   other consumers of ProtoBuf beside Hadoop and we'd obviously like to
   minimise the versions of PB that we ship, and preferably just ship the
   latest version. The fact that PB seems to often be incompatible across
   releases is an issue as it makes upgrading and dropping older versions
   problematic.
  
   --
   Alan Burlison
   --
 




Re: Protocol Buffers version

2015-05-19 Thread Chris Nauroth
Thanks, Sangjin.  I'd be interested in taking a peek at a personal GitHub
repo or even just a patch file of those changes.  If there were
incompatibilities, then that doesn't bode well for an upgrade to 2.6.

--Chris Nauroth




On 5/19/15, 8:40 PM, Sangjin Lee sj...@apache.org wrote:

When we moved to Hadoop 2.4, the associated protobuf upgrade (2.4.1 ->
2.5.0) proved to be one of the bigger problems. In our case, most of our
users were using protobuf 2.4.x or earlier.

We identified a couple of places where the backward compatibility was
broken, and patched for those issues. We've been running with that patched
version of protobuf 2.5.0 since. I can push out those changes to github or
something if others are interested FWIW.

Regards,
Sangjin

On Tue, May 19, 2015 at 9:59 AM, Colin P. McCabe cmcc...@apache.org
wrote:

 I agree that the protobuf 2.4.1 -> 2.5.0 transition could have been
 handled a lot better by Google.  Specifically, since it was an
 API-breaking upgrade, it should have been a major version bump for the
 Java library version.  I also feel that removing the download links
 for the old versions of the native libraries was careless, and
 certainly burned some of our Hadoop users.

 However, I don't see any reason to believe that protobuf 2.6 will not
 be wire-compatible with earlier versions.  Google has actually been
 pretty good about preserving wire-compatibility... just not about API
 compatibility.  If we want to get a formal statement from the project,
 we can, but I would be pretty shocked if they decided to change the
 protocol in a backwards-incompatible way in a minor version release.

 I do think there are some potential issues for our users of bumping
 the library version in a minor Hadoop release.  Until we implement
 full dependency isolation for Hadoop, there may be some disruptions to
 end-users from changing Java dependency versions.  Similarly, users
 will need to install a new native protobuf library version as well.
 So I think we should bump the protobuf versions in Hadoop 3.0, but not
 in 2.x.

 cheers,
 Colin

 On Fri, May 15, 2015 at 4:55 AM, Alan Burlison
alan.burli...@oracle.com
 wrote:
  On 15/05/2015 09:44, Steve Loughran wrote:
 
  Now: why do you want to use a later version of protobuf.jar? Is it
  because it is there? Or is there a tangible need?
 
 
  No, it's because I'm looking at this from a platform perspective: We
have
  other consumers of ProtoBuf beside Hadoop and we'd obviously like to
  minimise the versions of PB that we ship, and preferably just ship the
  latest version. The fact that PB seems to often be incompatible across
  releases is an issue as it makes upgrading and dropping older versions
  problematic.
 
  --
  Alan Burlison
  --




Re: Protocol Buffers version

2015-05-19 Thread Sangjin Lee
When we moved to Hadoop 2.4, the associated protobuf upgrade (2.4.1 ->
2.5.0) proved to be one of the bigger problems. In our case, most of our
users were using protobuf 2.4.x or earlier.

We identified a couple of places where the backward compatibility was
broken, and patched for those issues. We've been running with that patched
version of protobuf 2.5.0 since. I can push out those changes to github or
something if others are interested FWIW.

Regards,
Sangjin

On Tue, May 19, 2015 at 9:59 AM, Colin P. McCabe cmcc...@apache.org wrote:

 I agree that the protobuf 2.4.1 -> 2.5.0 transition could have been
 handled a lot better by Google.  Specifically, since it was an
 API-breaking upgrade, it should have been a major version bump for the
 Java library version.  I also feel that removing the download links
 for the old versions of the native libraries was careless, and
 certainly burned some of our Hadoop users.

 However, I don't see any reason to believe that protobuf 2.6 will not
 be wire-compatible with earlier versions.  Google has actually been
 pretty good about preserving wire-compatibility... just not about API
 compatibility.  If we want to get a formal statement from the project,
 we can, but I would be pretty shocked if they decided to change the
 protocol in a backwards-incompatible way in a minor version release.

 I do think there are some potential issues for our users of bumping
 the library version in a minor Hadoop release.  Until we implement
 full dependency isolation for Hadoop, there may be some disruptions to
 end-users from changing Java dependency versions.  Similarly, users
 will need to install a new native protobuf library version as well.
 So I think we should bump the protobuf versions in Hadoop 3.0, but not
 in 2.x.

 cheers,
 Colin

 On Fri, May 15, 2015 at 4:55 AM, Alan Burlison alan.burli...@oracle.com
 wrote:
  On 15/05/2015 09:44, Steve Loughran wrote:
 
  Now: why do you want to use a later version of protobuf.jar? Is it
  because it is there? Or is there a tangible need?
 
 
  No, it's because I'm looking at this from a platform perspective: We have
  other consumers of ProtoBuf beside Hadoop and we'd obviously like to
  minimise the versions of PB that we ship, and preferably just ship the
  latest version. The fact that PB seems to often be incompatible across
  releases is an issue as it makes upgrading and dropping older versions
  problematic.
 
  --
  Alan Burlison
  --



[jira] [Created] (HADOOP-12003) createNonRecursive support needed in WebHdfsFileSystem to support HBase

2015-05-19 Thread Vinoth Sathappan (JIRA)
Vinoth Sathappan created HADOOP-12003:
-

 Summary: createNonRecursive support needed in WebHdfsFileSystem to 
support HBase
 Key: HADOOP-12003
 URL: https://issues.apache.org/jira/browse/HADOOP-12003
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs
Reporter: Vinoth Sathappan


The WebHdfsFileSystem implementation doesn't support createNonRecursive. HBase 
depends on it extensively for proper functioning. Currently, when the region 
servers are started over WebHDFS, they crash with:

createNonRecursive unsupported for this filesystem class org.apache.hadoop.hdfs.web.SWebHdfsFileSystem
at org.apache.hadoop.fs.FileSystem.createNonRecursive(FileSystem.java:1137)
at org.apache.hadoop.fs.FileSystem.createNonRecursive(FileSystem.java:1112)
at org.apache.hadoop.fs.FileSystem.createNonRecursive(FileSystem.java:1088)
at org.apache.hadoop.hbase.regionserver.wal.ProtobufLogWriter.init(ProtobufLogWriter.java:85)
at org.apache.hadoop.hbase.regionserver.wal.HLogFactory.createWriter(HLogFactory.java:198)
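
For reference, a minimal sketch (hypothetical URI, path, and parameters) of the 
createNonRecursive() call that HBase's WAL writer makes; the base FileSystem 
implementation throws the exception above unless the concrete FileSystem (here 
WebHdfsFileSystem) overrides it.

{code}
import java.net.URI;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class CreateNonRecursiveSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    // Hypothetical WebHDFS URI; any cluster address would do.
    FileSystem fs = FileSystem.get(URI.create("webhdfs://namenode:50070"), conf);
    Path wal = new Path("/hbase/WALs/region-server-1/wal.0001");
    // Throws "createNonRecursive unsupported for this filesystem class ..."
    // when the FileSystem implementation does not override createNonRecursive.
    FSDataOutputStream out = fs.createNonRecursive(
        wal,                // target file; the parent directory must already exist
        true,               // overwrite
        4096,               // buffer size
        (short) 3,          // replication
        64L * 1024 * 1024,  // block size
        null);              // no progress callback
    out.close();
  }
}
{code}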




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HADOOP-11996) Native erasure coder basic facilities with an illustration sample

2015-05-19 Thread Kai Zheng (JIRA)
Kai Zheng created HADOOP-11996:
--

 Summary: Native erasure coder basic facilities with an 
illustration sample
 Key: HADOOP-11996
 URL: https://issues.apache.org/jira/browse/HADOOP-11996
 Project: Hadoop Common
  Issue Type: Sub-task
Reporter: Kai Zheng
Assignee: Kai Zheng


While working on HADOOP-11540 and related tasks, it was found useful to write 
the basic facilities based on the Intel ISA-L library separately from the JNI 
code so they can be used to compose a useful sample coder. Such a sample coder 
can serve as a good illustration of how to use the ISA-L library, and it is 
easy to debug and troubleshoot, as no JNI or Java code is involved.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HADOOP-11997) CMake CMAKE_C_FLAGS are non-portable

2015-05-19 Thread Alan Burlison (JIRA)
Alan Burlison created HADOOP-11997:
--

 Summary: CMake CMAKE_C_FLAGS are non-portable
 Key: HADOOP-11997
 URL: https://issues.apache.org/jira/browse/HADOOP-11997
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: build
Affects Versions: 2.7.0
 Environment: All
Reporter: Alan Burlison
Assignee: Alan Burlison
Priority: Critical


hadoop-common-project/hadoop-common/src/CMakeLists.txt 
(https://github.com/apache/hadoop/blob/trunk/hadoop-common-project/hadoop-common/src/CMakeLists.txt#L110)
 contains the following unconditional assignments to CMAKE_C_FLAGS:

set(CMAKE_C_FLAGS "${CMAKE_C_FLAGS} -g -Wall -O2")
set(CMAKE_C_FLAGS "${CMAKE_C_FLAGS} -D_REENTRANT -D_GNU_SOURCE")
set(CMAKE_C_FLAGS "${CMAKE_C_FLAGS} -D_LARGEFILE_SOURCE -D_FILE_OFFSET_BITS=64")

There are several issues here:

1. -D_GNU_SOURCE globally enables the use of all Linux-only extensions in 
hadoop-common native source. This is probably a major contributor to the poor 
cross-platform portability of Hadoop native code to non-Linux platforms as it 
makes it easy for developers to use non-portable Linux features without 
realising. Use of Linux-specific features should be correctly bracketed with 
conditional macro blocks that provide an alternative for non-Linux platforms.

2. -g -Wall -O2 turns on debugging for all builds. I believe the correct 
mechanism is to set the CMAKE_BUILD_TYPE CMake variable. If it is still 
necessary to override CFLAGS, it should probably be done conditionally, 
dependent on the value of CMAKE_BUILD_TYPE.

3. -D_LARGEFILE_SOURCE -D_FILE_OFFSET_BITS=64: on Solaris these flags are only 
needed for largefile support in ILP32 applications; LP64 applications are 
largefile by default. I believe the same is true on Linux, so these flags are 
harmless but redundant for 64-bit compilation.




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: JNIFlags.cmake versus FindJNI.cmake

2015-05-19 Thread Alan Burlison

On 19/05/2015 17:34, Colin P. McCabe wrote:


Thanks for looking at this code, Alan.  I appreciate your desire to
clean things up.  However, as I commented on HADOOP-11987,
JNIFlags.cmake contains many fixes not in the standard FindJNI.cmake.
I very much doubt that we will be able to remove it without causing
significant regressions.  I guess we can continue the discussion on
the JIRA.


Sure, I'm typing up some comments right now :-)

Thanks,

--
Alan Burlison
--


Re: Protocol Buffers version

2015-05-19 Thread Colin P. McCabe
I agree that the protobuf 2.4.1 -> 2.5.0 transition could have been
handled a lot better by Google.  Specifically, since it was an
API-breaking upgrade, it should have been a major version bump for the
Java library version.  I also feel that removing the download links
for the old versions of the native libraries was careless, and
certainly burned some of our Hadoop users.

However, I don't see any reason to believe that protobuf 2.6 will not
be wire-compatible with earlier versions.  Google has actually been
pretty good about preserving wire-compatibility... just not about API
compatibility.  If we want to get a formal statement from the project,
we can, but I would be pretty shocked if they decided to change the
protocol in a backwards-incompatible way in a minor version release.
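
For concreteness, a minimal sketch of what that wire-compatibility means in
practice, using hypothetical generated classes OldRecord and NewRecord, where
NewRecord adds a new optional field under a fresh tag number:

// Assumed .proto, purely for illustration:
//   message OldRecord { optional int64 id = 1; }
//   message NewRecord { optional int64 id = 1; optional string note = 2; }
public class WireCompatSketch {
  public static void main(String[] args) throws Exception {
    // Bytes written by the old class still parse with the new one; the added
    // optional field is simply unset.
    byte[] oldBytes = OldRecord.newBuilder().setId(42).build().toByteArray();
    NewRecord upgraded = NewRecord.parseFrom(oldBytes);
    System.out.println(upgraded.getId() + " hasNote=" + upgraded.hasNote());

    // Going the other way, the older class keeps the unrecognized field in its
    // UnknownFieldSet, so re-serializing does not silently drop it.
    byte[] newBytes =
        NewRecord.newBuilder().setId(7).setNote("hi").build().toByteArray();
    OldRecord downgraded = OldRecord.parseFrom(newBytes);
    System.out.println(downgraded.toByteArray().length == newBytes.length);
  }
}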

I do think there are some potential issues for our users of bumping
the library version in a minor Hadoop release.  Until we implement
full dependency isolation for Hadoop, there may be some disruptions to
end-users from changing Java dependency versions.  Similarly, users
will need to install a new native protobuf library version as well.
So I think we should bump the protobuf versions in Hadoop 3.0, but not
in 2.x.

cheers,
Colin

On Fri, May 15, 2015 at 4:55 AM, Alan Burlison alan.burli...@oracle.com wrote:
 On 15/05/2015 09:44, Steve Loughran wrote:

 Now: why do you want to use a later version of protobuf.jar? Is it
 because it is there? Or is there a tangible need?


 No, it's because I'm looking at this from a platform perspective: We have
 other consumers of ProtoBuf beside Hadoop and we'd obviously like to
 minimise the versions of PB that we ship, and preferably just ship the
 latest version. The fact that PB seems to often be incompatible across
 releases is an issue as it makes upgrading and dropping older versions
 problematic.

 --
 Alan Burlison
 --


[jira] [Created] (HADOOP-12000) cannot use --java-home in test-patch

2015-05-19 Thread Allen Wittenauer (JIRA)
Allen Wittenauer created HADOOP-12000:
-

 Summary: cannot use --java-home in test-patch
 Key: HADOOP-12000
 URL: https://issues.apache.org/jira/browse/HADOOP-12000
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Allen Wittenauer


Trivial bug, but breaks the docker patch:  --java-home=blah doesn't work 
because the case statement is broken.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HADOOP-12001) Limiting LDAP search conflicts with posixGroup addition

2015-05-19 Thread Patrick White (JIRA)
Patrick White created HADOOP-12001:
--

 Summary: Limiting LDAP search conflicts with posixGroup addition
 Key: HADOOP-12001
 URL: https://issues.apache.org/jira/browse/HADOOP-12001
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Affects Versions: 2.8.0
Reporter: Patrick White


In HADOOP-9477, posixGroup support was added. In HADOOP-10626, a limit on the 
returned attributes was added to speed up queries.

Limiting the attributes can break the SEARCH_CONTROLS object in the context of 
the isPosix block, since it only asks LDAP for the groupNameAttr.
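
A minimal sketch (assumed attribute and config names) of the conflict: limiting 
the returned attributes to the group name attribute speeds up the query, but the 
isPosix branch then has no posix attributes (e.g. gidNumber, memberUid) to read 
from the entries that come back.

{code}
import javax.naming.directory.SearchControls;

public class LdapSearchControlsSketch {
  static SearchControls buildControls(boolean isPosix, String groupNameAttr) {
    SearchControls controls = new SearchControls();
    controls.setSearchScope(SearchControls.SUBTREE_SCOPE);
    if (isPosix) {
      // posixGroup lookups also need the posix attributes; without them the
      // isPosix block has nothing to work with in the returned entries.
      controls.setReturningAttributes(
          new String[] { groupNameAttr, "gidNumber", "memberUid" });
    } else {
      // Attribute limiting as added for query speed: only the group name
      // attribute comes back.
      controls.setReturningAttributes(new String[] { groupNameAttr });
    }
    return controls;
  }
}
{code}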



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HADOOP-12002) test-patch.sh incorrectly reports findbugs failure(s) as a 'success' on Mac OS X

2015-05-19 Thread Sidharta Seethana (JIRA)
Sidharta Seethana created HADOOP-12002:
--

 Summary: test-patch.sh incorrectly reports findbugs failure(s) as 
a 'success' on Mac OS X
 Key: HADOOP-12002
 URL: https://issues.apache.org/jira/browse/HADOOP-12002
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Sidharta Seethana


{{test-patch.sh}} was used with {{FINDBUGS_HOME}} set. See below for an example: 
there were 4 findbugs warnings generated; however, {{test-patch.sh}} doesn't 
seem to realize that the findbugs tools are missing and +1s the findbugs 
check.

{quote}
 Running findbugs in 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
mvn clean test findbugs:findbugs -DskipTests -DhadoopPatchProcess > 
/private/tmp/hadoop-test-patch/71089/patchFindBugsOutputhadoop-yarn-server-nodemanager.txt 2>&1
<snip>hadoop/dev-support/test-patch.sh: line 1907: 
/usr/local/Cellar/findbugs/3.0.0/bin/setBugDatabaseInfo: No such file or directory
<snip>hadoop/dev-support/test-patch.sh: line 1915: 
/usr/local/Cellar/findbugs/3.0.0/bin/filterBugs: No such file or directory
Found  Findbugs warnings 
(<snip>hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/target/findbugsXml.xml)
<snip>hadoop/dev-support/test-patch.sh: line 1921: 
/usr/local/Cellar/findbugs/3.0.0/bin/convertXmlToText: No such file or directory
[Mon May 18 18:08:52 PDT 2015 DEBUG]: Stop clock

Elapsed time:   0m 38s
{quote}

Findbugs check reported as successful : 

{quote}
|  +1  |   findbugs  |  0m 38s| The patch does not introduce any 
|  | || new Findbugs (version 3.0.0)
|  | || warnings.
|  | |  23m 51s   | 
{quote}




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HADOOP-11999) yarn.resourcemanager.webapp.address and friends are converted to fqdn in the href urls of the web app.

2015-05-19 Thread Ewan Higgs (JIRA)
Ewan Higgs created HADOOP-11999:
---

 Summary: yarn.resourcemanager.webapp.address and friends are 
converted to fqdn in the href urls of the web app.
 Key: HADOOP-11999
 URL: https://issues.apache.org/jira/browse/HADOOP-11999
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.3.0
 Environment: Linux.
Reporter: Ewan Higgs


I am setting up a Hadoop cluster where the nodes have FQDNs inside
the cluster, but the DNS where these names are registered is behind some
login nodes. So any user who tries to access the web interface needs to
use the IPs instead.

I set the 'yarn.nodemanager.webapp.address' and
'yarn.resourcemanager.webapp.address' to the appropriate IP:port. I
don't give it the FQDN in this config field.

Each web app works fine when accessed directly. However, when I cross from
the Resource Manager to the Node Manager web app, the href URL uses the FQDN
that I don't want. Obviously this is a dead link for the user and can only be
fixed if they copy and paste the appropriate IP address for the node (not a
pleasant user experience).

I suppose it makes sense to use the FQDN for the link text in the web app, but 
not for the actual URL if the IP was specified.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: JNIFlags.cmake versus FindJNI.cmake

2015-05-19 Thread Colin P. McCabe
Thanks for looking at this code, Alan.  I appreciate your desire to
clean things up.  However, as I commented on HADOOP-11987,
JNIFlags.cmake contains many fixes not in the standard FindJNI.cmake.
I very much doubt that we will be able to remove it without causing
significant regressions.  I guess we can continue the discussion on
the JIRA.

Colin

On Sat, May 16, 2015 at 12:36 AM, Alan Burlison
alan.burli...@oracle.com wrote:
 On 16/05/2015 03:24, Mai Haohui wrote:

 Just checked the repo of cmake and it turns out that FindJNI.cmake is
 available even before cmake 2.4. I think it makes sense to file a bug
 to replace it to the standard cmake module. Can you please file a jira
 for this?


 Will do, thanks.

 --
 Alan Burlison
 --


[jira] [Resolved] (HADOOP-11314) Fixing warnings by apt parser

2015-05-19 Thread Darrell Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11314?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Darrell Taylor resolved HADOOP-11314.
-
Resolution: Implemented

 Fixing warnings by apt parser
 -

 Key: HADOOP-11314
 URL: https://issues.apache.org/jira/browse/HADOOP-11314
 Project: Hadoop Common
  Issue Type: Improvement
  Components: documentation
Reporter: Tsuyoshi Ozawa
  Labels: newbie

 {{mvn site:stage}} shows lots of warnings, as follows:
 {code}
 $mvn clean site; mvn site:stage -DstagingDirectory=/tmp/hadoop-site;
 [WARNING] [APT Parser] Modified invalid link: 'Set Storage Policy' to 
 '#Set_Storage_Policy'
 [WARNING] [APT Parser] Modified invalid link: 'OS Limits' to '#OS_Limits'
 [WARNING] [APT Parser] Modified invalid link: 'Administration Commands' to 
 '#Administration_Commands'
 [WARNING] [APT Parser] Modified invalid link: 'User Commands' to 
 '#User_Commands'
 [WARNING] [APT Parser] Modified invalid link: 'Hadoop+Rack+Awareness' to 
 '../hadoop-common/ClusterSetup.html#HadoopRackAwareness'
 [WARNING] [APT Parser] Modified invalid link: 'oiv_legacy Command' to 
 '#oiv_legacy_Command'
 [WARNING] [APT Parser] Modified invalid link: '-px' to '#a-px'
 [WARNING] [APT Parser] Modified invalid link: '-skipcrccheck' to 
 '#a-skipcrccheck'
 [WARNING] [APT Parser] Modified invalid link: '-update' to '#a-update'
 [WARNING] [APT Parser] Modified invalid link: 'Single Namenode Clusters' to 
 '#Single_Namenode_Clusters'
 [WARNING] [APT Parser] Modified invalid link: 'ACL Status JSON Schema' to 
 '#ACL_Status_JSON_Schema'
 [WARNING] [APT Parser] Modified invalid link: 'Access Time' to '#Access_Time'
 [WARNING] [APT Parser] Modified invalid link: 'Append to a File' to 
 '#Append_to_a_File'
 [WARNING] [APT Parser] Modified invalid link: 'Block Size' to '#Block_Size'
 [WARNING] [APT Parser] Modified invalid link: 'Boolean JSON Schema' to 
 '#Boolean_JSON_Schema'
 [WARNING] [APT Parser] Modified invalid link: 'Buffer Size' to '#Buffer_Size'
 [WARNING] [APT Parser] Modified invalid link: 'Cancel Delegation Token' to 
 '#Cancel_Delegation_Token'
 [WARNING] [APT Parser] Modified invalid link: 'Check access' to 
 '#Check_access'
 [WARNING] [APT Parser] Modified invalid link: 'Concat File(s)' to 
 '#Concat_Files'
 [WARNING] [APT Parser] Modified invalid link: 'ContentSummary JSON Schema' to 
 '#ContentSummary_JSON_Schema'
 [WARNING] [APT Parser] Modified invalid link: 'Create Parent' to 
 '#Create_Parent'
 [WARNING] [APT Parser] Modified invalid link: 'Create Snapshot' to 
 '#Create_Snapshot'
 [WARNING] [APT Parser] Modified invalid link: 'Create a Symbolic Link' to 
 '#Create_a_Symbolic_Link'
 [WARNING] [APT Parser] Modified invalid link: 'Create and Write to a File' to 
 '#Create_and_Write_to_a_File'
 [WARNING] [APT Parser] Modified invalid link: 'Delete Snapshot' to 
 '#Delete_Snapshot'
 [WARNING] [APT Parser] Modified invalid link: 'Delete a File/Directory' to 
 '#Delete_a_FileDirectory'
 [WARNING] [APT Parser] Modified invalid link: 'Error Responses' to 
 '#Error_Responses'
 [WARNING] [APT Parser] Modified invalid link: 'FileChecksum JSON Schema' to 
 '#FileChecksum_JSON_Schema'
 [WARNING] [APT Parser] Modified invalid link: 'FileStatus JSON Schema' to 
 '#FileStatus_JSON_Schema'
 [WARNING] [APT Parser] Modified invalid link: 'FileStatus Properties' to 
 '#FileStatus_Properties'
 [WARNING] [APT Parser] Modified invalid link: 'FileStatuses JSON Schema' to 
 '#FileStatuses_JSON_Schema'
 [WARNING] [APT Parser] Modified invalid link: 'Get Content Summary of a 
 Directory' to '#Get_Content_Summary_of_a_Directory'
 [WARNING] [APT Parser] Modified invalid link: 'Get Delegation Token' to 
 '#Get_Delegation_Token'
 [WARNING] [APT Parser] Modified invalid link: 'Get Delegation Tokens' to 
 '#Get_Delegation_Tokens'
 [WARNING] [APT Parser] Modified invalid link: 'Get File Checksum' to 
 '#Get_File_Checksum'
 [WARNING] [APT Parser] Modified invalid link: 'Get Home Directory' to 
 '#Get_Home_Directory'
 [WARNING] [APT Parser] Modified invalid link: 'Get all XAttrs' to 
 '#Get_all_XAttrs'
 [WARNING] [APT Parser] Modified invalid link: 'Get an XAttr' to 
 '#Get_an_XAttr'
 [WARNING] [APT Parser] Modified invalid link: 'Get multiple XAttrs' to 
 '#Get_multiple_XAttrs'
 [WARNING] [APT Parser] Modified invalid link: 'HTTP Query Parameter 
 Dictionary' to '#HTTP_Query_Parameter_Dictionary'
 [WARNING] [APT Parser] Modified invalid link: 'List a Directory' to 
 '#List_a_Directory'
 [WARNING] [APT Parser] Modified invalid link: 'List all XAttrs' to 
 '#List_all_XAttrs'
 [WARNING] [APT Parser] Modified invalid link: 'Long JSON Schema' to 
 '#Long_JSON_Schema'
 [WARNING] [APT Parser] Modified invalid link: 'Make a Directory' to 
 '#Make_a_Directory'
 [WARNING] [APT Parser] Modified invalid link: 'Modification Time' to 
 '#Modification_Time'
 [WARNING] [APT 

[jira] [Resolved] (HADOOP-11972) hdfs dfs -copyFromLocal reports File Not Found instead of Permission Denied.

2015-05-19 Thread Ravi Prakash (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11972?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ravi Prakash resolved HADOOP-11972.
---
Resolution: Duplicate

This is a legitimate problem. Duping to HDFS-5033

 hdfs dfs -copyFromLocal reports File Not Found instead of Permission Denied.
 

 Key: HADOOP-11972
 URL: https://issues.apache.org/jira/browse/HADOOP-11972
 Project: Hadoop Common
  Issue Type: Bug
  Components: tools
Affects Versions: 2.6.0
 Environment: Linux hadoop-8309-2.west.isilon.com 
 2.6.32-504.16.2.el6.centos.plus.x86_64 #1 SMP Wed Apr 22 00:59:31 UTC 2015 
 x86_64 x86_64 x86_64 GNU/Linux
Reporter: David Tucker

 userA creates a file in /home/userA, a directory with 700 permissions.
 userB tries to copy it to HDFS, and receives a "No such file or directory" 
 error instead of "Permission denied".
 [hrt_qa@hadoop-8309-2 ~]$ touch ./foo
 [hrt_qa@hadoop-8309-2 ~]$ ls -l ./foo
 -rw-r--r--. 1 hrt_qa users 0 May 14 16:09 ./foo
 [hrt_qa@hadoop-8309-2 ~]$ sudo su hbase
 [hbase@hadoop-8309-2 hrt_qa]$ ls -l ./foo
 ls: cannot access ./foo: Permission denied
 [hbase@hadoop-8309-2 hrt_qa]$ hdfs dfs -copyFromLocal ./foo /tmp/foo
 copyFromLocal: `./foo': No such file or directory
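
 A guess at the mechanics, with a minimal sketch (hypothetical path; assuming 
 /home/userA is mode 700 and the JVM runs as userB): java.io.File.exists() 
 returns false rather than throwing when a parent directory is not traversable, 
 so the local-side lookup presumably resolves the source as nonexistent and 
 reports "No such file or directory" instead of "Permission denied".

{code}
import java.io.File;

public class LocalStatSketch {
  public static void main(String[] args) {
    File f = new File("/home/userA/foo");
    // exists() cannot distinguish "not there" from "not allowed to look":
    // it returns false when stat() fails with EACCES on a parent directory.
    System.out.println("exists  = " + f.exists());
    System.out.println("canRead = " + f.canRead());
  }
}
{code}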



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)