[jira] [Created] (HDFS-6870) Blocks and INodes could leak for Rename with overwrite flag

2014-08-19 Thread Yi Liu (JIRA)
Yi Liu created HDFS-6870:


 Summary: Blocks and INodes could leak for Rename with overwrite 
flag
 Key: HDFS-6870
 URL: https://issues.apache.org/jira/browse/HDFS-6870
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Yi Liu
Assignee: Yi Liu


The following code in FSDirectory#unprotectedRenameTo doesn't collect blocks and 
INodes for a non-snapshot path.
{code}
if (removedDst != null) {
  undoRemoveDst = false;
  if (removedNum > 0) {
    BlocksMapUpdateInfo collectedBlocks = new BlocksMapUpdateInfo();
    List<INode> removedINodes = new ChunkedArrayList<INode>();
    filesDeleted = removedDst.cleanSubtree(Snapshot.CURRENT_STATE_ID,
        dstIIP.getLatestSnapshotId(), collectedBlocks, removedINodes,
        true).get(Quota.NAMESPACE);
    getFSNamesystem().removePathAndBlocks(src, collectedBlocks,
        removedINodes, false);
  }
}
{code}
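A toy sketch of the fix direction (the class and method names below are illustrative, not the real FSDirectory API): when the overwritten destination is not in any snapshot, its blocks and INodes must still be collected for removal from the blocks map, otherwise they leak.

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative model only: the real code would call cleanSubtree for the
// snapshot case and a destroy-and-collect path for the non-snapshot case.
public class CollectOnOverwrite {
    static List<String> collectForRemoval(boolean inSnapshot,
                                          List<String> dstBlocks) {
        List<String> collected = new ArrayList<>();
        if (!inSnapshot) {
            // Non-snapshot path: collect everything for immediate removal.
            collected.addAll(dstBlocks);
        }
        // Snapshot path would instead clean only the current state
        // (cleanSubtree in the real code) -- omitted in this sketch.
        return collected;
    }

    public static void main(String[] args) {
        List<String> blocks = List.of("blk_1", "blk_2");
        // Both blocks of the overwritten destination get collected:
        System.out.println(collectForRemoval(false, blocks).size()); // 2
    }
}
```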



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (HDFS-6871) Improve NameNode performance when creating file

2014-08-19 Thread Yi Liu (JIRA)
Yi Liu created HDFS-6871:


 Summary: Improve NameNode performance when creating file  
 Key: HDFS-6871
 URL: https://issues.apache.org/jira/browse/HDFS-6871
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: namenode, performance
Reporter: Yi Liu
Assignee: Yi Liu
Priority: Critical
 Fix For: 2.6.0


Creating a file with the overwrite flag will cause the NN to flush edit logs and 
block other requests if the file already exists.

When we create a file with the overwrite flag (the default is true) in HDFS, the 
NN removes the original file if it exists. In FSNamesystem#startFileInternal the 
NN already holds the write lock, and it calls {{deleteInt}} if the file exists; 
{{deleteInt}} contains a logSync. So in this case logSync runs under the write 
lock, which heavily affects NN performance.

We should skip the forced logSync in {{deleteInt}} in this case.
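The pattern described above can be sketched as follows. This is a minimal, self-contained illustration (all names are hypothetical, not the actual FSNamesystem code): record the edit under the lock, mark the log dirty, and perform the durable flush only after the lock is released.

```java
import java.util.concurrent.locks.ReentrantLock;

// Hypothetical sketch: defer the expensive durable flush (logSync)
// until after the exclusive lock is released, so other requests are
// not blocked while the edit log is written out.
public class DeferredSync {
    private final ReentrantLock writeLock = new ReentrantLock();
    private final StringBuilder editLog = new StringBuilder();
    private boolean needsSync = false;

    // Delete under the lock, but only *record* the edit; no flush here.
    public void deleteInternal(String path) {
        writeLock.lock();
        try {
            editLog.append("DELETE ").append(path).append('\n');
            needsSync = true;   // mark dirty instead of syncing under the lock
        } finally {
            writeLock.unlock();
        }
    }

    // Called after the lock is released; stands in for the durable flush.
    public boolean logSync() {
        if (needsSync) {
            needsSync = false;  // the real code would flush edits to disk here
            return true;
        }
        return false;
    }

    public static void main(String[] args) {
        DeferredSync ns = new DeferredSync();
        ns.deleteInternal("/tmp/old");     // fast path: no flush under lock
        System.out.println(ns.logSync());  // flush happens afterwards
    }
}
```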



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (HDFS-6872) Fix TestOptionsParser

2014-08-19 Thread Charles Lamb (JIRA)
Charles Lamb created HDFS-6872:
--

 Summary: Fix TestOptionsParser
 Key: HDFS-6872
 URL: https://issues.apache.org/jira/browse/HDFS-6872
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: namenode, security
Affects Versions: fs-encryption (HADOOP-10150 and HDFS-6134)
Reporter: Charles Lamb
Assignee: Charles Lamb


Error Message

expected:<...argetPathExists=true[]}> but was:<...argetPathExists=true[, 
preserveRawXattrs=false]}>

Stacktrace

org.junit.ComparisonFailure: expected:<...argetPathExists=true[]}> but 
was:<...argetPathExists=true[, preserveRawXattrs=false]}>
at org.junit.Assert.assertEquals(Assert.java:115)
at org.junit.Assert.assertEquals(Assert.java:144)
at 
org.apache.hadoop.tools.TestOptionsParser.testToString(TestOptionsParser.java:361)




--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (HDFS-6873) Constants in CommandWithDestination should be static

2014-08-19 Thread Charles Lamb (JIRA)
Charles Lamb created HDFS-6873:
--

 Summary: Constants in CommandWithDestination should be static
 Key: HDFS-6873
 URL: https://issues.apache.org/jira/browse/HDFS-6873
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: namenode, security
Affects Versions: fs-encryption (HADOOP-10150 and HDFS-6134)
Reporter: Charles Lamb
Assignee: Charles Lamb


FindBugs turned these two up.

SS: Unread field: org.apache.hadoop.fs.shell.CommandWithDestination.RAW; 
should this field be static?

SS: Unread field: 
org.apache.hadoop.fs.shell.CommandWithDestination.RESERVED_RAW; should this 
field be static?
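For context, the SS (should-be-static) pattern can be shown with a small standalone example (the class and field names here are illustrative, not the actual CommandWithDestination code): a final field with a compile-time constant value wastes a copy per instance unless it is declared static.

```java
// Illustrative only: why FindBugs flags per-instance constants.
public class StaticConstants {
    // Flagged pattern: every instance carries its own identical copy.
    static class PerInstance {
        final String RAW = "/.reserved/raw";
    }

    // Suggested fix: one copy shared by all instances.
    static class Shared {
        static final String RAW = "/.reserved/raw";
    }

    public static void main(String[] args) {
        System.out.println(StaticConstants.Shared.RAW);
    }
}
```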



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (HDFS-6874) Add GET_BLOCK_LOCATIONS operation to HttpFS

2014-08-19 Thread Gao Zhong Liang (JIRA)
Gao Zhong Liang created HDFS-6874:
-

 Summary: Add GET_BLOCK_LOCATIONS operation to HttpFS
 Key: HDFS-6874
 URL: https://issues.apache.org/jira/browse/HDFS-6874
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Gao Zhong Liang


GET_BLOCK_LOCATIONS operation is missing in HttpFS, which is already supported 
in WebHDFS.  For the request of GETFILEBLOCKLOCATIONS in 
org.apache.hadoop.fs.http.server.HttpFSServer, BAD_REQUEST is returned so far:

...
case GETFILEBLOCKLOCATIONS: {
  response = Response.status(Response.Status.BAD_REQUEST).build();
  break;
}
...





--
This message was sent by Atlassian JIRA
(v6.2#6252)


Re: Apache Hadoop 2.5.0 published tarballs are missing some txt files

2014-08-19 Thread Arun Murthy
I suggest we do a 2.5.1 (with potentially other bug fixes) rather than fix
existing tarballs.

thanks,
Arun


On Mon, Aug 18, 2014 at 12:42 PM, Karthik Kambatla 
wrote:

> Hi devs
>
> Tsuyoshi just brought it to my notice that the published tarballs don't
> have LICENSE, NOTICE and README at the top-level. Instead, they are only
> under common, hdfs, etc.
>
> Now that we have already announced the release and the jars/functionality
> don't change, I propose we just update the tarballs with ones that
> include those files. I just untar-ed the published tarballs, copied
> LICENSE, NOTICE and README from under common to the top directory, and
> tar-ed them back again.
>
> The updated tarballs are at: http://people.apache.org/~kasha/hadoop-2.5.0/
> . Can someone please verify the signatures?
>
> If you would prefer an alternate action, please suggest.
>
> Thanks
> Karthik
>
> PS: HADOOP-10956 should include the fix for these files also.
>



-- 

--
Arun C. Murthy
Hortonworks Inc.
http://hortonworks.com/



Build failed in Jenkins: Hadoop-Hdfs-trunk #1842

2014-08-19 Thread Apache Jenkins Server
See 

Changes:

[aw] HADOOP-9902. Shell script rewrite (aw)

[brandonli] HDFS-6569. OOB message can't be sent to the client when DataNode 
shuts down for upgrade. Contributed by Brandon Li

[arp] HDFS-6188. An ip whitelist based implementation of 
TrustedChannelResolver. (Contributed by Benoy Antony)

[aw] HADOOP-10873. Fix dead link in Configuration javadoc (Akira AJISAKA via aw)

[aw] HADOOP-10972. Native Libraries Guide contains mis-spelt build line (Peter 
Klavins via aw)

[kasha] MAPREDUCE-6012. DBInputSplit creates invalid ranges on Oracle. (Wei Yan 
via kasha)

[jlowe] MAPREDUCE-6036. TestJobEndNotifier fails intermittently in branch-2. 
Contributed by chang li

[todd] Update CHANGES.txt for HDFS-6561 which was renamed to HADOOP-10975.

[wang] HDFS-6825. Edit log corruption due to delayed block removal. Contributed 
by Yongjun Zhang.

[arp] HADOOP-10973. Native Libraries Guide contains format error. (Contributed 
by Peter Klavins)

[cmccabe] HDFS-6561. org.apache.hadoop.util.DataChecksum should support native 
checksumming (James Thomas via Colin Patrick McCabe)

[zjshen] MAPREDUCE-6024. Shortened the time when Fetcher is stuck in retrying 
before concluding the failure by configuration. Contributed by Yunjiong Zhao.

[jlowe] HADOOP-10059. RPC authentication and authorization metrics overflow to 
negative values on busy clusters. Contributed by Tsuyoshi OZAWA and Akira 
AJISAKA

[eyang] MAPREDUCE-6033. Updated access check for displaying job information 
(Yu Gao via Eric Yang)

[arp] HDFS-6783. Addendum patch to fix test failures. (Contributed by Yi Liu)

--
[...truncated 18728 lines...]
[INFO] 
[INFO] 
[INFO] Building Apache Hadoop HDFS-NFS 3.0.0-SNAPSHOT
[INFO] 
[WARNING] The POM for org.eclipse.m2e:lifecycle-mapping:jar:1.0.0 is missing, 
no dependency information available
[WARNING] Failed to retrieve plugin descriptor for 
org.eclipse.m2e:lifecycle-mapping:1.0.0: Plugin 
org.eclipse.m2e:lifecycle-mapping:1.0.0 or one of its dependencies could not be 
resolved: Failure to find org.eclipse.m2e:lifecycle-mapping:jar:1.0.0 in 
http://repo.maven.apache.org/maven2 was cached in the local repository, 
resolution will not be reattempted until the update interval of central has 
elapsed or updates are forced
[INFO] 
[INFO] --- maven-clean-plugin:2.4.1:clean (default-clean) @ hadoop-hdfs-nfs ---
[INFO] Deleting 

[INFO] 
[INFO] --- maven-antrun-plugin:1.7:run (create-testdirs) @ hadoop-hdfs-nfs ---
[INFO] Executing tasks

main:
[mkdir] Created dir: 

[INFO] Executed tasks
[INFO] 
[INFO] --- maven-resources-plugin:2.2:resources (default-resources) @ 
hadoop-hdfs-nfs ---
[INFO] Using default encoding to copy filtered resources.
[INFO] 
[INFO] --- maven-compiler-plugin:2.5.1:compile (default-compile) @ 
hadoop-hdfs-nfs ---
[INFO] Compiling 15 source files to 

[INFO] 
[INFO] --- maven-resources-plugin:2.2:testResources (default-testResources) @ 
hadoop-hdfs-nfs ---
[INFO] Using default encoding to copy filtered resources.
[INFO] 
[INFO] --- maven-compiler-plugin:2.5.1:testCompile (default-testCompile) @ 
hadoop-hdfs-nfs ---
[INFO] Compiling 12 source files to 

[INFO] 
[INFO] --- maven-surefire-plugin:2.16:test (default-test) @ hadoop-hdfs-nfs ---
[INFO] Surefire report directory: 


---
 T E S T S
---

Running org.apache.hadoop.hdfs.nfs.nfs3.TestReaddir
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.175 sec - in 
org.apache.hadoop.hdfs.nfs.nfs3.TestReaddir
Running org.apache.hadoop.hdfs.nfs.nfs3.TestExportsTable
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.799 sec - in 
org.apache.hadoop.hdfs.nfs.nfs3.TestExportsTable
Running org.apache.hadoop.hdfs.nfs.nfs3.TestRpcProgramNfs3
Tests run: 21, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.827 sec - in 
org.apache.hadoop.hdfs.nfs.nfs3.TestRpcProgramNfs3
Running org.apache.hadoop.hdfs.nfs.nfs3.TestDFSClientCache
Tests run: 3, Failures: 

Hadoop-Hdfs-trunk - Build # 1842 - Still Failing

2014-08-19 Thread Apache Jenkins Server
See https://builds.apache.org/job/Hadoop-Hdfs-trunk/1842/

###
## LAST 60 LINES OF THE CONSOLE 
###
[...truncated 18921 lines...]
[INFO] 
[INFO] --- findbugs-maven-plugin:2.3.2:findbugs (default-cli) @ 
hadoop-hdfs-project ---
[INFO] ** FindBugsMojo execute ***
[INFO] canGenerate is false
[INFO] 
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS  SUCCESS [  02:23 h]
[INFO] Apache Hadoop HttpFS .. SUCCESS [03:37 min]
[INFO] Apache Hadoop HDFS BookKeeper Journal . FAILURE [ 51.705 s]
[INFO] Apache Hadoop HDFS-NFS  SUCCESS [01:31 min]
[INFO] Apache Hadoop HDFS Project  SUCCESS [  0.058 s]
[INFO] 
[INFO] BUILD FAILURE
[INFO] 
[INFO] Total time: 02:29 h
[INFO] Finished at: 2014-08-19T18:52:55+00:00
[INFO] Final Memory: 93M/1289M
[INFO] 
[ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-surefire-plugin:2.16:test (default-test) on 
project hadoop-hdfs-bkjournal: There are test failures.
[ERROR] 
[ERROR] Please refer to 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/trunk/hadoop-hdfs-project/hadoop-hdfs/src/contrib/bkjournal/target/surefire-reports
 for the individual test results.
[ERROR] -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e 
switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please 
read the following articles:
[ERROR] [Help 1] 
http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
[ERROR] 
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn  -rf :hadoop-hdfs-bkjournal
Build step 'Execute shell' marked build as failure
Archiving artifacts
Sending artifact delta relative to Hadoop-Hdfs-trunk #1833
Archived 2 artifacts
Archive block size is 32768
Received 90 blocks and 120417446 bytes
Compression is 2.4%
Took 1 min 15 sec
Recording test results
Updating HADOOP-10972
Updating HADOOP-10059
Updating HDFS-6569
Updating HADOOP-10973
Updating HADOOP-10975
Updating HADOOP-10975
Updating HADOOP-10873
Updating MAPREDUCE-6036
Updating HDFS-6783
Updating MAPREDUCE-6033
Updating MAPREDUCE-6024
Updating HDFS-6188
Updating HADOOP-9902
Updating MAPREDUCE-6012
Updating HDFS-6825
Sending e-mails to: hdfs-dev@hadoop.apache.org
Email was triggered for: Failure
Sending email for trigger: Failure



###
## FAILED TESTS (if any) 
##
All tests passed

[jira] [Created] (HDFS-6875) Archival Storage: support migration for a list of specified paths

2014-08-19 Thread Jing Zhao (JIRA)
Jing Zhao created HDFS-6875:
---

 Summary: Archival Storage: support migration for a list of 
specified paths
 Key: HDFS-6875
 URL: https://issues.apache.org/jira/browse/HDFS-6875
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Jing Zhao
Priority: Minor






--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Resolved] (HDFS-6872) Fix TestOptionsParser

2014-08-19 Thread Charles Lamb (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6872?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Charles Lamb resolved HDFS-6872.


   Resolution: Fixed
Fix Version/s: fs-encryption (HADOOP-10150 and HDFS-6134)

Thanks Andrew. Committed to fs-encryption.

> Fix TestOptionsParser
> -
>
> Key: HDFS-6872
> URL: https://issues.apache.org/jira/browse/HDFS-6872
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode, security
>Affects Versions: fs-encryption (HADOOP-10150 and HDFS-6134)
>Reporter: Charles Lamb
>Assignee: Charles Lamb
> Fix For: fs-encryption (HADOOP-10150 and HDFS-6134)
>
> Attachments: HDFS-6872.001.patch
>
>
> Error Message
> expected:<...argetPathExists=true[]}> but was:<...argetPathExists=true[, 
> preserveRawXattrs=false]}>
> Stacktrace
> org.junit.ComparisonFailure: expected:<...argetPathExists=true[]}> but 
> was:<...argetPathExists=true[, preserveRawXattrs=false]}>
>   at org.junit.Assert.assertEquals(Assert.java:115)
>   at org.junit.Assert.assertEquals(Assert.java:144)
>   at 
> org.apache.hadoop.tools.TestOptionsParser.testToString(TestOptionsParser.java:361)



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Resolved] (HDFS-6873) Constants in CommandWithDestination should be static

2014-08-19 Thread Charles Lamb (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6873?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Charles Lamb resolved HDFS-6873.


   Resolution: Fixed
Fix Version/s: fs-encryption (HADOOP-10150 and HDFS-6134)

Thanks Andrew. Committed to fs-encryption.

> Constants in CommandWithDestination should be static
> 
>
> Key: HDFS-6873
> URL: https://issues.apache.org/jira/browse/HDFS-6873
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode, security
>Affects Versions: fs-encryption (HADOOP-10150 and HDFS-6134)
>Reporter: Charles Lamb
>Assignee: Charles Lamb
> Fix For: fs-encryption (HADOOP-10150 and HDFS-6134)
>
> Attachments: HDFS-6873.001.patch
>
>
> FindBugs turned these two up.
> SS: Unread field: org.apache.hadoop.fs.shell.CommandWithDestination.RAW; 
> should this field be static?
> SS: Unread field: 
> org.apache.hadoop.fs.shell.CommandWithDestination.RESERVED_RAW; should this 
> field be static?



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (HDFS-6876) Archival Storage: support set/get storage policy in DFSAdmin

2014-08-19 Thread Jing Zhao (JIRA)
Jing Zhao created HDFS-6876:
---

 Summary: Archival Storage: support set/get storage policy in 
DFSAdmin
 Key: HDFS-6876
 URL: https://issues.apache.org/jira/browse/HDFS-6876
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Jing Zhao
Assignee: Jing Zhao


We need to have DFSAdmin commands to manually set/get the storage policy of a 
file/directory.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Resolved] (HDFS-6868) portmap and nfs3 are documented as hadoop commands instead of hdfs

2014-08-19 Thread Brandon Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6868?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brandon Li resolved HDFS-6868.
--

  Resolution: Fixed
Hadoop Flags: Reviewed

> portmap and nfs3 are documented as hadoop commands instead of hdfs
> --
>
> Key: HDFS-6868
> URL: https://issues.apache.org/jira/browse/HDFS-6868
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: documentation, nfs
>Affects Versions: 3.0.0, 2.3.0, 2.4.0, 2.5.0, 2.4.1, 2.6.0
>Reporter: Allen Wittenauer
>Assignee: Brandon Li
> Attachments: HDFS-6868.patch
>
>
> The NFS guide says to use 'hadoop portmap' and 'hadoop nfs3' even though 
> these are deprecated options. Instead, it should say 'hdfs portmap' and 
> 'hdfs nfs3'.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (HDFS-6877) Interrupt writes when the volume being written is removed.

2014-08-19 Thread Lei (Eddy) Xu (JIRA)
Lei (Eddy) Xu created HDFS-6877:
---

 Summary: Interrupt writes when the volume being written is removed.
 Key: HDFS-6877
 URL: https://issues.apache.org/jira/browse/HDFS-6877
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: datanode
Affects Versions: 2.5.0
Reporter: Lei (Eddy) Xu
Assignee: Lei (Eddy) Xu






--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (HDFS-6878) Change MiniDFSCluster to support StorageType configuration for individual directories

2014-08-19 Thread Tsz Wo Nicholas Sze (JIRA)
Tsz Wo Nicholas Sze created HDFS-6878:
-

 Summary: Change MiniDFSCluster to support StorageType 
configuration for individual directories
 Key: HDFS-6878
 URL: https://issues.apache.org/jira/browse/HDFS-6878
 Project: Hadoop HDFS
  Issue Type: Test
Reporter: Tsz Wo Nicholas Sze
Assignee: Arpit Agarwal


Currently, MiniDFSCluster only supports a single StorageType configuration for 
all datanodes, i.e. setting all directories in all datanodes to one 
StorageType. It should support setting an individual StorageType for each 
directory in each datanode.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (HDFS-6880) Adding tracing to DataNode data transfer protocol

2014-08-19 Thread Masatake Iwasaki (JIRA)
Masatake Iwasaki created HDFS-6880:
--

 Summary: Adding tracing to DataNode data transfer protocol
 Key: HDFS-6880
 URL: https://issues.apache.org/jira/browse/HDFS-6880
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Masatake Iwasaki






--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (HDFS-6879) Adding tracing to Hadoop RPC

2014-08-19 Thread Masatake Iwasaki (JIRA)
Masatake Iwasaki created HDFS-6879:
--

 Summary: Adding tracing to Hadoop RPC
 Key: HDFS-6879
 URL: https://issues.apache.org/jira/browse/HDFS-6879
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Masatake Iwasaki






--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (HDFS-6881) Adding tracing spans to HDFS

2014-08-19 Thread Masatake Iwasaki (JIRA)
Masatake Iwasaki created HDFS-6881:
--

 Summary: Adding tracing spans to HDFS
 Key: HDFS-6881
 URL: https://issues.apache.org/jira/browse/HDFS-6881
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Masatake Iwasaki






--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (HDFS-6882) Ability to fetch the KMS ACLs for a given key

2014-08-19 Thread Andrew Wang (JIRA)
Andrew Wang created HDFS-6882:
-

 Summary: Ability to fetch the KMS ACLs for a given key
 Key: HDFS-6882
 URL: https://issues.apache.org/jira/browse/HDFS-6882
 Project: Hadoop HDFS
  Issue Type: Improvement
Affects Versions: 3.0.0
Reporter: Andrew Wang
Assignee: Alejandro Abdelnur


On HDFS-6134, [~sureshms] asked for APIs to be able to compare filesystem 
permissions and KeyProvider permissions to diagnose where they might differ.

We already have APIs in HDFS-6134 to query the EZ of a path and the key for 
each EZ, so the only missing link is a KMS API that allows us to query the ACLs 
for the key.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (HDFS-6883) Multiple Kerberos principals for KMS

2014-08-19 Thread Andrew Wang (JIRA)
Andrew Wang created HDFS-6883:
-

 Summary: Multiple Kerberos principals for KMS
 Key: HDFS-6883
 URL: https://issues.apache.org/jira/browse/HDFS-6883
 Project: Hadoop HDFS
  Issue Type: Improvement
Affects Versions: 3.0.0
Reporter: Andrew Wang
Assignee: Alejandro Abdelnur


The Key Management Server should support multiple Kerberos principals.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (HDFS-6884) Include the hostname in HTTPFS log filenames

2014-08-19 Thread Andrew Wang (JIRA)
Andrew Wang created HDFS-6884:
-

 Summary: Include the hostname in HTTPFS log filenames
 Key: HDFS-6884
 URL: https://issues.apache.org/jira/browse/HDFS-6884
 Project: Hadoop HDFS
  Issue Type: Improvement
Affects Versions: 2.5.0
Reporter: Andrew Wang
Assignee: Alejandro Abdelnur


It'd be good to include the hostname in the httpfs log filenames. Right now we 
have httpfs.log and httpfs-audit.log; it'd be nice to have e.g. 
"httpfs-${hostname}.log".
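The suggested naming scheme is simple to sketch; the helper below is hypothetical, not actual HttpFS code, and just shows embedding the local hostname in the log file name.

```java
import java.net.InetAddress;

// Hypothetical sketch of "httpfs-${hostname}.log"-style file naming.
public class LogName {
    // Build a log file name that embeds the machine's hostname.
    static String logFileName(String base, String hostname) {
        return base + "-" + hostname + ".log";
    }

    public static void main(String[] args) throws Exception {
        // Resolve this machine's hostname and use it in the file name.
        String host = InetAddress.getLocalHost().getHostName();
        System.out.println(logFileName("httpfs", host));
    }
}
```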



--
This message was sent by Atlassian JIRA
(v6.2#6252)