[jira] [Created] (HADOOP-11502) SPNEGO to Web UI fails with headers larger than 4KB

2015-01-21 Thread Stephen Chu (JIRA)
Stephen Chu created HADOOP-11502:


 Summary: SPNEGO to Web UI fails with headers larger than 4KB
 Key: HADOOP-11502
 URL: https://issues.apache.org/jira/browse/HADOOP-11502
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Affects Versions: 2.2.0
Reporter: Stephen Chu
Assignee: Stephen Chu
Priority: Minor


Thanks to [~bgooley] for reporting this:


When SSL and Kerberos authentication are enabled for the Hadoop web UIs, if the 
request header is over 4KB in size, the browser shows a blank page.

Browser dev tools show that a 413 FULL head error is returned as the response 
from Jetty.

It seems that in HADOOP-8816 we only addressed the non-SSL ports by setting 
{{ret.setHeaderBufferSize(1024*64);}}.

However, with SSL enabled we use {{SslSocketConnector()}} but don't set the 
header buffer size, so I think we are failing at Jetty's default max 
header buffer size of 4KB.
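
A minimal sketch of the direction this suggests (assuming Jetty 6's 
{{SslSocketConnector}} inherits {{setHeaderBufferSize}}, the same setter 
HADOOP-8816 used on the non-SSL connector):

{code}
// Sketch only (Jetty 6, org.mortbay.jetty.security.SslSocketConnector),
// mirroring what HADOOP-8816 did for the non-SSL connector:
SslSocketConnector sslConnector = new SslSocketConnector();
// Raise the header buffer above Jetty's 4KB default so large SPNEGO
// tokens (e.g. for users in many AD groups) fit in the request headers.
sslConnector.setHeaderBufferSize(1024 * 64);
{code}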




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HADOOP-11404) Clarify the expected client Kerberos principal is null authorization message

2014-12-13 Thread Stephen Chu (JIRA)
Stephen Chu created HADOOP-11404:


 Summary: Clarify the expected client Kerberos principal is null 
authorization message
 Key: HADOOP-11404
 URL: https://issues.apache.org/jira/browse/HADOOP-11404
 Project: Hadoop Common
  Issue Type: Improvement
  Components: security
Affects Versions: 2.2.0
Reporter: Stephen Chu
Assignee: Stephen Chu
Priority: Minor


In {{ServiceAuthorizationManager#authorize}}, we throw an 
{{AuthorizationException}} with the message "expected client Kerberos principal 
is null" when authorization fails.

However, this is a confusing log message, because it leads users to believe 
there was a Kerberos authentication problem, when in fact the user could have 
authenticated successfully.

{code}
if ((clientPrincipal != null && !clientPrincipal.equals(user.getUserName())) ||
    acls.length != 2 || !acls[0].isUserAllowed(user) || acls[1].isUserAllowed(user)) {
  AUDITLOG.warn(AUTHZ_FAILED_FOR + user + " for protocol=" + protocol
      + ", expected client Kerberos principal is " + clientPrincipal);
  throw new AuthorizationException("User " + user +
      " is not authorized for protocol " + protocol +
      ", expected client Kerberos principal is " + clientPrincipal);
}
AUDITLOG.info(AUTHZ_SUCCESSFUL_FOR + user + " for protocol=" + protocol);
{code}

In the above code, if {{clientPrincipal}} is null, the user authenticated 
successfully but was denied by a configured ACL; it is not a Kerberos issue. 
We should improve the log message to state this.
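
One hypothetical way the message could distinguish the ACL case (sketch only, 
not a committed patch):

{code}
// Hypothetical rewording, illustrative only:
if (clientPrincipal == null) {
  throw new AuthorizationException("User " + user
      + " is not authorized for protocol " + protocol
      + ": denied by configured ACL");
} else {
  throw new AuthorizationException("User " + user
      + " is not authorized for protocol " + protocol
      + ", expected client Kerberos principal is " + clientPrincipal);
}
{code}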



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HADOOP-11291) Log the cause of SASL connection failures

2014-11-10 Thread Stephen Chu (JIRA)
Stephen Chu created HADOOP-11291:


 Summary: Log the cause of SASL connection failures
 Key: HADOOP-11291
 URL: https://issues.apache.org/jira/browse/HADOOP-11291
 Project: Hadoop Common
  Issue Type: Improvement
  Components: security
Affects Versions: 2.5.0
Reporter: Stephen Chu
Assignee: Stephen Chu
Priority: Minor


{{UGI#doAs}} will no longer log a PriviledgedActionException unless 
LOG.isDebugEnabled() == true. HADOOP-10015 made this change because it was 
decided that users calling {{UGI#doAs}} should be responsible for logging the 
error when catching an exception. Also, the log was confusing in certain 
situations (see more details in HADOOP-10015).

However, as Daryn noted, this log message was very helpful in cases of 
debugging security issues.

As an example, we used to see this in the DN logs before HADOOP-10015:
{code}
2014-10-20 11:28:02,112 WARN org.apache.hadoop.security.UserGroupInformation: 
PriviledgedActionException as:hdfs/hosta@realm.com (auth:KERBEROS) 
cause:javax.security.sasl.SaslException: GSS initiate failed [Caused by 
GSSException: No valid credentials provided (Mechanism level: Generic error 
(description in e-text) (60) - NO PREAUTH)]
2014-10-20 11:28:02,112 WARN org.apache.hadoop.ipc.Client: Couldn't setup 
connection for hdfs/hosta@realm.com to hostB.com/101.01.010:8022
2014-10-20 11:28:02,112 WARN org.apache.hadoop.security.UserGroupInformation: 
PriviledgedActionException as:hdfs/hosta@realm.com (auth:KERBEROS) 
cause:java.io.IOException: Couldn't setup connection for 
hdfs/hosta@realm.com to hostB.com/101.01.010:8022
{code}

After the fix went in and the DN was upgraded, it only logs:
{code}
2014-10-20 14:11:40,712 WARN org.apache.hadoop.ipc.Client: Couldn't setup 
connection for hdfs/hosta@realm.com to hostB.com/101.01.010:8022
2014-10-20 14:11:40,713 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: 
Problem connecting to server: hostB.com/101.01.010:8022
{code}

It'd be good to add more logging information about the cause of a SASL 
connection failure.
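
A minimal sketch of what that could look like, assuming the warning in 
{{org.apache.hadoop.ipc.Client}} still has the caught exception in scope:

{code}
// Sketch only: pass the caught exception (ex) to the logger so the
// SaslException/GSSException cause chain appears in the log at WARN.
LOG.warn("Couldn't setup connection for " + user + " to " + server, ex);
{code}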

Thanks to [~qwertymaniac] for reporting this.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (HADOOP-11277) hdfs dfs test command "returning 0 if true." instead of what should be "returning 1 if true."

2014-11-06 Thread Stephen Chu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11277?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stephen Chu resolved HADOOP-11277.
--
Resolution: Invalid

Returning 0 (success) if true is the correct behavior; it matches the exit 
status convention of the Linux test command (see the illustrative sketch 
below). Resolving.
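
{code}
import org.apache.hadoop.fs.FsShell;
import org.apache.hadoop.util.ToolRunner;

// Illustrative check of the exit-status convention through the Java API
// (FsShell implements Tool; the path here is just an example):
public class TestExitCode {
  public static void main(String[] args) throws Exception {
    // Like POSIX test(1), exit status 0 means "true": here, the path exists.
    int rc = ToolRunner.run(new FsShell(),
        new String[] {"-test", "-e", "/user/schu"});
    System.out.println("exit status = " + rc); // 0 if /user/schu exists
  }
}
{code}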

 hdfs dfs test command "returning 0 if true." instead of what should be 
 "returning 1 if true."
 --

 Key: HADOOP-11277
 URL: https://issues.apache.org/jira/browse/HADOOP-11277
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs
Affects Versions: 2.5.0
Reporter: DeepakVohra

 The CDH5 File System (FS) shell commands documentation has an error. The test 
 command lists "returning 0 if true." for all options. Should be "returning 
 1 if true."
 http://archive.cloudera.com/cdh5/cdh/5/hadoop/hadoop-project-dist/hadoop-common/FileSystemShell.html#ls



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HADOOP-11267) TestSecurityUtil fails when run with JDK8 because of empty principal names

2014-11-04 Thread Stephen Chu (JIRA)
Stephen Chu created HADOOP-11267:


 Summary: TestSecurityUtil fails when run with JDK8 because of 
empty principal names
 Key: HADOOP-11267
 URL: https://issues.apache.org/jira/browse/HADOOP-11267
 Project: Hadoop Common
  Issue Type: Bug
  Components: security, test
Affects Versions: 2.3.0
Reporter: Stephen Chu
Assignee: Stephen Chu
Priority: Minor


Running {{TestSecurityUtil}} on JDK8 will fail:

{code}
java.lang.IllegalArgumentException: Empty nameString not allowed
    at sun.security.krb5.PrincipalName.validateNameStrings(PrincipalName.java:171)
    at sun.security.krb5.PrincipalName.<init>(PrincipalName.java:393)
    at sun.security.krb5.PrincipalName.<init>(PrincipalName.java:460)
    at javax.security.auth.kerberos.KerberosPrincipal.<init>(KerberosPrincipal.java:120)
    at org.apache.hadoop.security.TestSecurityUtil.isOriginalTGTReturnsCorrectValues(TestSecurityUtil.java:57)
{code}

In JDK8, PrincipalName checks that its name is not empty and throws an 
IllegalArgumentException if it is empty. This didn't happen in JDK6/7.
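
A minimal repro of the JDK8 behavior described above (sketch, grounded in the 
stack trace):

{code}
import javax.security.auth.kerberos.KerberosPrincipal;

public class EmptyPrincipalRepro {
  public static void main(String[] args) {
    // On JDK8 this throws IllegalArgumentException: "Empty nameString not
    // allowed" (from sun.security.krb5.PrincipalName); JDK6/7 accepted it.
    new KerberosPrincipal("");
  }
}
{code}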



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Reopened] (HADOOP-10932) compile error on project Apache Hadoop OpenStack support

2014-08-03 Thread Stephen Chu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10932?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stephen Chu reopened HADOOP-10932:
--


 compile error on project Apache Hadoop OpenStack support
 --

 Key: HADOOP-10932
 URL: https://issues.apache.org/jira/browse/HADOOP-10932
 Project: Hadoop Common
  Issue Type: New Feature
  Components: build
Affects Versions: 3.0.0
 Environment:  SUSE Linux Enterprise Server 11 SP1  (x86_64)
Reporter: xukun
Priority: Minor
 Fix For: 3.0.0, 2.6.0


 Compiling hadoop fails with the error below:
 [ERROR] 
 /home/git_repo/hadoop-common/hadoop-tools/hadoop-openstack/src/main/java/org/apache/hadoop/fs/swift/util/SwiftTestUtils.java:[41,7]
  cannot access org.hamcrest.Matcher
 class file for org.hamcrest.Matcher not found
 public class SwiftTestUtils extends org.junit.Assert {



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Resolved] (HADOOP-10932) compile error on project Apache Hadoop OpenStack support

2014-08-03 Thread Stephen Chu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10932?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stephen Chu resolved HADOOP-10932.
--

Resolution: Duplicate

Thanks for posting a patch in HADOOP-10931, [~xukun]. We should mark this as a 
duplicate of that JIRA instead of resolved/fixed, though. Doing so now.

 compile error on project Apache Hadoop OpenStack support
 --

 Key: HADOOP-10932
 URL: https://issues.apache.org/jira/browse/HADOOP-10932
 Project: Hadoop Common
  Issue Type: New Feature
  Components: build
Affects Versions: 3.0.0
 Environment:  SUSE Linux Enterprise Server 11 SP1  (x86_64)
Reporter: xukun
Priority: Minor
 Fix For: 3.0.0, 2.6.0


 Compiling hadoop fails with the error below:
 [ERROR] 
 /home/git_repo/hadoop-common/hadoop-tools/hadoop-openstack/src/main/java/org/apache/hadoop/fs/swift/util/SwiftTestUtils.java:[41,7]
  cannot access org.hamcrest.Matcher
 class file for org.hamcrest.Matcher not found
 public class SwiftTestUtils extends org.junit.Assert {



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (HADOOP-10913) Add a space between key name and extra messages in KMS audit log

2014-07-31 Thread Stephen Chu (JIRA)
Stephen Chu created HADOOP-10913:


 Summary: Add a space between key name and extra messages in KMS 
audit log
 Key: HADOOP-10913
 URL: https://issues.apache.org/jira/browse/HADOOP-10913
 Project: Hadoop Common
  Issue Type: Improvement
  Components: security
Affects Versions: 3.0.0
Reporter: Stephen Chu
Assignee: Stephen Chu
Priority: Trivial


In the KMS audit log there is no space between the key name and extra messages, 
so you'll see something like the following. Note the missing space between 
{{exampleKey}} and {{UserProvidedMaterial}}, which makes the audit log harder 
to parse.

{code}
2014-07-31 08:47:33,248 Status:OK User:hdfs Op:CREATE_KEY 
Name:exampleKeyUserProvidedMaterial:false Description:null
{code}
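
A hypothetical sketch of the fix (the real KMS audit formatting code may 
differ):

{code}
// Hypothetical helper, not the actual KMS source: note the " " separator
// before the extra message, giving "Name:exampleKey UserProvidedMaterial:false".
static String auditEntry(String status, String user, String op,
    String keyName, String extraMsg) {
  return "Status:" + status + " User:" + user + " Op:" + op
      + " Name:" + keyName + " " + extraMsg;
}
{code}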





--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (HADOOP-10897) Add hadoop key to main hadoop script usage

2014-07-28 Thread Stephen Chu (JIRA)
Stephen Chu created HADOOP-10897:


 Summary: Add hadoop key to main hadoop script usage
 Key: HADOOP-10897
 URL: https://issues.apache.org/jira/browse/HADOOP-10897
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Affects Versions: 3.0.0
Reporter: Stephen Chu
Assignee: Stephen Chu


HADOOP-10177 added the {{hadoop key}} CLI to manage keys using the KeyProvider 
API.

Currently, the main hadoop script does not show the {{hadoop key}} category in 
its usage.





--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Resolved] (HADOOP-10844) Add common tests for ACLs in combination with viewfs.

2014-07-23 Thread Stephen Chu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10844?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stephen Chu resolved HADOOP-10844.
--

Resolution: Duplicate

Resolving as duplicate of HADOOP-10845, which is the same issue. Most likely 
accidental double filing.

 Add common tests for ACLs in combination with viewfs.
 -

 Key: HADOOP-10844
 URL: https://issues.apache.org/jira/browse/HADOOP-10844
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs, test
Affects Versions: 3.0.0, 2.5.0
Reporter: Chris Nauroth
Assignee: Stephen Chu

 Add tests in Hadoop Common for the ACL APIs in combination with viewfs.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (HADOOP-10887) Add XAttrs to ViewFs and make XAttrs + ViewFileSystem internal dir behavior consistent

2014-07-23 Thread Stephen Chu (JIRA)
Stephen Chu created HADOOP-10887:


 Summary: Add XAttrs to ViewFs and make XAttrs + ViewFileSystem 
internal dir behavior consistent
 Key: HADOOP-10887
 URL: https://issues.apache.org/jira/browse/HADOOP-10887
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs, test
Affects Versions: 3.0.0, 2.5.0
Reporter: Stephen Chu
Assignee: Stephen Chu


This is very similar to the work done in HADOOP-10845 (Add common tests for 
ACLs in combination with viewfs).

Here we make the XAttrs + ViewFileSystem internal dir behavior consistent. 
Right now, when users attempt an XAttr operation on an internal dir, they get 
an UnsupportedOperationException. Instead, we should throw the 
ReadOnlyMountTable AccessControlException or the NotInMountPointException.

We also add the XAttrs APIs to ViewFs. This involves adding them to ChRootedFs 
as well. Also, {{listXAttrs}} is missing from FileContext, so we should add 
that in.
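
For the internal-dir part, a sketch of the intended behavior (assuming the same 
{{readOnlyMountTable}} helper that ViewFileSystem's other mutating operations 
use):

{code}
// Sketch only, inside ViewFileSystem's internal-dir implementation: reject
// XAttr mutations with the ReadOnlyMountTable AccessControlException
// instead of UnsupportedOperationException.
@Override
public void setXAttr(Path path, String name, byte[] value,
    EnumSet<XAttrSetFlag> flag) throws IOException {
  throw readOnlyMountTable("setXAttr", path);
}
{code}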



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (HADOOP-10870) Failed to load OpenSSL cipher error logs on systems with old openssl versions

2014-07-21 Thread Stephen Chu (JIRA)
Stephen Chu created HADOOP-10870:


 Summary: Failed to load OpenSSL cipher error logs on systems with 
old openssl versions
 Key: HADOOP-10870
 URL: https://issues.apache.org/jira/browse/HADOOP-10870
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: security
Affects Versions: fs-encryption (HADOOP-10150 and HDFS-6134)
Reporter: Stephen Chu


I built Hadoop from the fs-encryption branch and deployed it (without enabling 
any security confs) on a CentOS 6.4 VM with an old version of openssl.

{code}
[root@schu-enc hadoop-common]# rpm -qa | grep openssl
openssl-1.0.0-27.el6_4.2.x86_64
openssl-devel-1.0.0-27.el6_4.2.x86_64
{code}

When I try to do a simple hadoop fs -ls, I get
{code}
[hdfs@schu-enc hadoop-common]$ hadoop fs -ls
2014-07-21 19:35:14,486 ERROR [main] crypto.OpensslCipher 
(OpensslCipher.java:<clinit>(87)) - Failed to load OpenSSL Cipher.
java.lang.UnsatisfiedLinkError: Cannot find AES-CTR support, is your version of 
Openssl new enough?
    at org.apache.hadoop.crypto.OpensslCipher.initIDs(Native Method)
    at org.apache.hadoop.crypto.OpensslCipher.<clinit>(OpensslCipher.java:84)
    at org.apache.hadoop.crypto.OpensslAesCtrCryptoCodec.<init>(OpensslAesCtrCryptoCodec.java:50)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
    at org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:129)
    at org.apache.hadoop.crypto.CryptoCodec.getInstance(CryptoCodec.java:55)
    at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:591)
    at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:561)
at 
org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:139)
at 
org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2590)
at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:89)
at 
org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2624)
at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2606)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:368)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:167)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:352)
at org.apache.hadoop.fs.Path.getFileSystem(Path.java:296)
at org.apache.hadoop.fs.shell.PathData.expandAsGlob(PathData.java:325)
at org.apache.hadoop.fs.shell.Command.expandArgument(Command.java:228)
at org.apache.hadoop.fs.shell.Command.expandArguments(Command.java:211)
at 
org.apache.hadoop.fs.shell.Command.processRawArguments(Command.java:194)
at org.apache.hadoop.fs.shell.Command.run(Command.java:155)
at org.apache.hadoop.fs.FsShell.run(FsShell.java:287)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:84)
at org.apache.hadoop.fs.FsShell.main(FsShell.java:340)
2014-07-21 19:35:14,495 WARN  [main] crypto.CryptoCodec 
(CryptoCodec.java:getInstance(66)) - Crypto codec 
org.apache.hadoop.crypto.OpensslAesCtrCryptoCodec is not available.
{code}

{{hadoop checknative}} shows an error:

{code}
[hdfs@schu-enc ~]$ hadoop checknative
2014-07-21 19:38:38,376 INFO  [main] bzip2.Bzip2Factory 
(Bzip2Factory.java:isNativeBzip2Loaded(70)) - Successfully loaded & initialized 
native-bzip2 library system-native
2014-07-21 19:38:38,395 INFO  [main] zlib.ZlibFactory 
(ZlibFactory.java:<clinit>(49)) - Successfully loaded & initialized native-zlib 
library
2014-07-21 19:38:38,411 ERROR [main] crypto.OpensslCipher 
(OpensslCipher.java:<clinit>(87)) - Failed to load OpenSSL Cipher.
java.lang.UnsatisfiedLinkError: Cannot find AES-CTR support, is your version of 
Openssl new enough?
    at org.apache.hadoop.crypto.OpensslCipher.initIDs(Native Method)
    at org.apache.hadoop.crypto.OpensslCipher.<clinit>(OpensslCipher.java:84)
    at org.apache.hadoop.util.NativeLibraryChecker.main(NativeLibraryChecker.java:82)
Native library checking:
hadoop:  true /home/hdfs/hadoop-3.0.0-SNAPSHOT/lib/native/libhadoop.so.1.0.0
zlib:true /lib64/libz.so.1
snappy:  true /usr/lib64/libsnappy.so.1
lz4: true revision:99
bzip2:   true /lib64/libbz2.so.1
openssl: false 
{code}

Thanks to cmccabe, who identified this issue as a bug.
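
One possible direction, sketched under the assumption that the fix belongs in 
{{OpensslCipher}}'s static initializer (this is not the committed fix):

{code}
// Assumed sketch: demote the noisy ERROR to DEBUG when native AES-CTR
// support is missing, and fall back to the JCE codec.
static {
  try {
    initIDs();  // native lookup, per the stack trace above
  } catch (UnsatisfiedLinkError e) {
    LOG.debug("Failed to load OpenSSL Cipher, falling back to JCE", e);
  }
}
{code}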



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (HADOOP-9723) Improve error message when hadoop archive output path already exists

2013-07-11 Thread Stephen Chu (JIRA)
Stephen Chu created HADOOP-9723:
---

 Summary: Improve error message when hadoop archive output path 
already exists
 Key: HADOOP-9723
 URL: https://issues.apache.org/jira/browse/HADOOP-9723
 Project: Hadoop Common
  Issue Type: Improvement
Affects Versions: 2.0.4-alpha, 3.0.0
Reporter: Stephen Chu
Priority: Trivial


When creating a hadoop archive and specifying an output path of an already 
existing file, we get an "Invalid Output" error message.

{code}
[schu@hdfs-vanilla-1 ~]$ hadoop archive -archiveName foo.har -p /user/schu 
testDir1 /user/schu
Invalid Output: /user/schu/foo.har
{code}

This error message could be improved to tell users immediately that the output 
path already exists.
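
A hypothetical sketch of the improved check (the helper and variable names are 
assumed, not the actual HadoopArchives code):

{code}
// Hypothetical helper: say *why* the output is invalid instead of
// printing the bare "Invalid Output".
static void checkOutputPath(FileSystem fs, Path parentDir, String archiveName)
    throws IOException {
  Path outputPath = new Path(parentDir, archiveName);
  if (fs.exists(outputPath)) {
    throw new IOException("Invalid Output: " + outputPath + " already exists");
  }
}
{code}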

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (HADOOP-9724) Trying to access har files within a har file complains about no index

2013-07-11 Thread Stephen Chu (JIRA)
Stephen Chu created HADOOP-9724:
---

 Summary: Trying to access har files within a har file complains 
about no index
 Key: HADOOP-9724
 URL: https://issues.apache.org/jira/browse/HADOOP-9724
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.0.4-alpha, 3.0.0
Reporter: Stephen Chu
Priority: Minor


If a har file contains another har file, accessing the inner har file through 
FsShell will complain about no index file, even if the index file exists.

{code}
[schu@hdfs-vanilla-1 ~]$ hdfs dfs -ls 
har:///user/schu/foo4.har/testDir1/testDir2/foo3.har
ls: Invalid path for the Har Filesystem. No index file in 
har:/user/schu/foo4.har/testDir1/testDir2/foo3.har
[schu@hdfs-vanilla-1 ~]$ hdfs dfs -ls /user/schu/testDir1/testDir2/foo3.har
Found 4 items
-rw-r--r--   1 schu supergroup  0 2013-07-10 23:22 
/user/schu/testDir1/testDir2/foo3.har/_SUCCESS
-rw-r--r--   5 schu supergroup 91 2013-07-10 23:22 
/user/schu/testDir1/testDir2/foo3.har/_index
-rw-r--r--   5 schu supergroup 22 2013-07-10 23:22 
/user/schu/testDir1/testDir2/foo3.har/_masterindex
-rw-r--r--   1 schu supergroup  0 2013-07-10 23:22 
/user/schu/testDir1/testDir2/foo3.har/part-0
[schu@hdfs-vanilla-1 ~]$ hdfs dfs -ls 
har:///user/schu/testDir1/testDir2/foo3.har
Found 1 items
drwxr-xr-x   - schu supergroup  0 2013-07-10 23:22 
har:///user/schu/testDir1/testDir2/foo3.har/testDir1
[schu@hdfs-vanilla-1 ~]$ 
{code}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (HADOOP-9705) FsShell cp -p does not preserve directory attributes

2013-07-07 Thread Stephen Chu (JIRA)
Stephen Chu created HADOOP-9705:
---

 Summary: FsShell cp -p does not preserve directory attributes
 Key: HADOOP-9705
 URL: https://issues.apache.org/jira/browse/HADOOP-9705
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs
Affects Versions: 2.0.4-alpha, 3.0.0
Reporter: Stephen Chu


HADOOP-9338 added the -p flag to preserve file attributes when copying.

However, cp -p does not preserve directory attributes. It'd be useful to add 
this functionality.

For example, the following shows that the modified time is not preserved:
{code}
[schu@hdfs-snapshots-1 ~]$ $HADOOP_HOME/bin/hdfs dfs -mkdir /user/schu/testDir1
[schu@hdfs-snapshots-1 ~]$ $HADOOP_HOME/bin/hdfs dfs -ls /user/schu/
Found 1 items
drwxr-xr-x   - schu supergroup  0 2013-07-07 20:25 /user/schu/testDir1
[schu@hdfs-snapshots-1 ~]$ $HADOOP_HOME/bin/hdfs dfs -cp -p /user/schu/testDir1 
/user/schu/testDir2
[schu@hdfs-snapshots-1 ~]$ $HADOOP_HOME/bin/hdfs dfs -ls /user/schu
Found 2 items
drwxr-xr-x   - schu supergroup  0 2013-07-07 20:25 /user/schu/testDir1
drwxr-xr-x   - schu supergroup  0 2013-07-07 20:35 /user/schu/testDir2
[schu@hdfs-snapshots-1 ~]$ 
{code}

The preserve logic is in {{CommandWithDestination#copyFileToTarget}}, which is 
only called for files; directories never get the same treatment (see the 
sketch after the snippets below).

{code}
  protected void processPath(PathData src, PathData dst) throws IOException {
    if (src.stat.isSymlink()) {
      // TODO: remove when FileContext is supported, this needs to either
      // copy the symlink or deref the symlink
      throw new PathOperationException(src.toString());
    } else if (src.stat.isFile()) {
      copyFileToTarget(src, dst);
    } else if (src.stat.isDirectory() && !isRecursive()) {
      throw new PathIsDirectoryException(src.toString());
    }
  }
{code}

{code}
  /**
   * Copies the source file to the target.
   * @param src item to copy
   * @param target where to copy the item
   * @throws IOException if copy fails
   */
  protected void copyFileToTarget(PathData src, PathData target)
      throws IOException {
    src.fs.setVerifyChecksum(verifyChecksum);
    InputStream in = null;
    try {
      in = src.fs.open(src.path);
      copyStreamToTarget(in, target);
      if (preserve) {
        target.fs.setTimes(
          target.path,
          src.stat.getModificationTime(),
          src.stat.getAccessTime());
        target.fs.setOwner(
          target.path,
          src.stat.getOwner(),
          src.stat.getGroup());
        target.fs.setPermission(
          target.path,
          src.stat.getPermission());
      }
    } finally {
      IOUtils.closeStream(in);
    }
  }
{code}
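
As noted above, a hypothetical sketch of the missing directory case, reusing 
the same {{FileSystem}} setters the file path already uses (not a committed 
patch):

{code}
// Hypothetical sketch: run the same preserve calls for a directory once
// it has been created at the destination.
protected void preserveDirAttributes(PathData src, PathData target)
    throws IOException {
  if (preserve) {
    target.fs.setTimes(target.path,
        src.stat.getModificationTime(), src.stat.getAccessTime());
    target.fs.setOwner(target.path,
        src.stat.getOwner(), src.stat.getGroup());
    target.fs.setPermission(target.path, src.stat.getPermission());
  }
}
{code}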

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (HADOOP-9004) Allow security unit tests to use external KDC

2012-11-01 Thread Stephen Chu (JIRA)
Stephen Chu created HADOOP-9004:
---

 Summary: Allow security unit tests to use external KDC
 Key: HADOOP-9004
 URL: https://issues.apache.org/jira/browse/HADOOP-9004
 Project: Hadoop Common
  Issue Type: Improvement
  Components: security, test
Affects Versions: 2.0.0-alpha
Reporter: Stephen Chu
 Fix For: 3.0.0


I want to add the option of allowing security-related unit tests to use an 
external KDC.

In HADOOP-8078, we added the ability to start and use an ApacheDS KDC for 
security-related unit tests. It would be good to let users validate their own 
KDC, keytabs, and principals, and to test against different KDCs rather than 
relying only on the ApacheDS KDC.
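
A sketch of how tests might choose between the embedded and an external KDC 
(the property and helper names here are hypothetical):

{code}
// Hypothetical property and helper names, illustrative only:
String externalKdc = System.getProperty("hadoop.test.external.kdc");
if (externalKdc == null) {
  startApacheDsKdc();  // embedded KDC from HADOOP-8078 (hypothetical helper)
} else {
  // Point the JVM's Kerberos machinery at the user-supplied KDC; keytab and
  // principal names would come from similar test properties.
  System.setProperty("java.security.krb5.kdc", externalKdc);
}
{code}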

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (HADOOP-8829) hadoop-auth doc doesn't generate settings correctly

2012-09-19 Thread Stephen Chu (JIRA)
Stephen Chu created HADOOP-8829:
---

 Summary: hadoop-auth doc doesn't generate settings correctly
 Key: HADOOP-8829
 URL: https://issues.apache.org/jira/browse/HADOOP-8829
 Project: Hadoop Common
  Issue Type: Bug
  Components: documentation
Affects Versions: 2.0.1-alpha
Reporter: Stephen Chu


http://hadoop.apache.org/docs/r2.0.0-alpha/hadoop-auth/Configuration.html

This page doesn't seem to be generating the example configurations properly.

{code}
<web-app version="2.5" xmlns="http://java.sun.com/xml/ns/javaee">
...

    <filter>
        <filter-name>kerberosFilter</filter-name>
        <filter-class>org.apache.hadoop.security.auth.server.AuthenticationFilter</filter-class>
        <init-param>
            <param-name>type</param-name>
            <param-value>kerberos</param-value>
        </init-param>
        <init-param>
            <param-name>token.validity</param-name>
            <param-value>30</param-value>
        </init-param>
        <init-param>
            <param-name>cookie.domain</param-name>
            <param-value>.foo.com</param-value>
        </init-param>
        <init-param>
            <param-name>cookie.path</param-name>
            <param-value>/</param-value>
        </init-param>
        <init-param>
            <param-name>kerberos.principal</param-name>
            <param-value>HTTP/localhost@LOCALHOST</param-value>
        </init-param>
        <init-param>
            <param-name>kerberos.keytab</param-name>
            <param-value>/tmp/auth.keytab</param-value>
        </init-param>
    </filter>

    <filter-mapping>
        <filter-name>kerberosFilter</filter-name>
        <url-pattern>/kerberos/*</url-pattern>
    </filter-mapping>

...
</web-app>
{code}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (HADOOP-8479) update HDFS quotas guide: currently says setting quota fails if the directory would be in violation of the new quota

2012-06-04 Thread Stephen Chu (JIRA)
Stephen Chu created HADOOP-8479:
---

 Summary: update HDFS quotas guide: currently says setting quota 
fails if the directory would be in violation of the new quota
 Key: HADOOP-8479
 URL: https://issues.apache.org/jira/browse/HADOOP-8479
 Project: Hadoop Common
  Issue Type: Bug
  Components: documentation
Reporter: Stephen Chu


http://hadoop.apache.org/common/docs/r0.20.0/hdfs_quota_admin_guide.html

The guide says "The attempt to set a quota fails if the directory would be in 
violation of the new quota" for both Name Quotas and Space Quotas.

That doesn't seem to be the case, though. I can set the quota successfully even 
when the directory violates the new quota.


--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira