[jira] [Updated] (HDFS-8287) DFSStripedOutputStream.writeChunk should not wait for writing parity

2015-10-20 Thread Kai Sasaki (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8287?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kai Sasaki updated HDFS-8287:
-
Attachment: HDFS-8287.15.patch

> DFSStripedOutputStream.writeChunk should not wait for writing parity 
> -
>
> Key: HDFS-8287
> URL: https://issues.apache.org/jira/browse/HDFS-8287
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: Tsz Wo Nicholas Sze
>Assignee: Kai Sasaki
> Attachments: HDFS-8287-HDFS-7285.00.patch, 
> HDFS-8287-HDFS-7285.01.patch, HDFS-8287-HDFS-7285.02.patch, 
> HDFS-8287-HDFS-7285.03.patch, HDFS-8287-HDFS-7285.04.patch, 
> HDFS-8287-HDFS-7285.05.patch, HDFS-8287-HDFS-7285.06.patch, 
> HDFS-8287-HDFS-7285.07.patch, HDFS-8287-HDFS-7285.08.patch, 
> HDFS-8287-HDFS-7285.09.patch, HDFS-8287-HDFS-7285.10.patch, 
> HDFS-8287-HDFS-7285.11.patch, HDFS-8287-HDFS-7285.WIP.patch, 
> HDFS-8287-performance-report.pdf, HDFS-8287.12.patch, HDFS-8287.13.patch, 
> HDFS-8287.14.patch, HDFS-8287.15.patch, h8287_20150911.patch, jstack-dump.txt
>
>
> When a striping cell is full, writeChunk computes and generates parity 
> packets.  It then calls waitAndQueuePacket synchronously, so the user client 
> cannot continue writing data until the parity write finishes.
> We should instead allow the user client to continue writing without blocking 
> on parity writes.
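
A minimal sketch of the non-blocking direction described above (the methods 
{{encode}} and {{enqueueParityPackets}} are illustrative placeholders, not the 
actual patch):
{code}
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Sketch only: hand parity work to a dedicated worker so writeChunk can
// return to the caller immediately. encode(..) and enqueueParityPackets(..)
// are illustrative placeholders, not DFSStripedOutputStream methods.
class NonBlockingParityWriter {
  private final ExecutorService parityWorker = Executors.newSingleThreadExecutor();

  void onStripingCellFull(final byte[][] stripe) {
    parityWorker.submit(new Runnable() {
      @Override
      public void run() {
        byte[][] parity = encode(stripe);  // compute parity cells
        enqueueParityPackets(parity);      // the blocking waitAndQueuePacket
      }                                    // now happens off the user's thread
    });
  }

  private byte[][] encode(byte[][] stripe) { return new byte[0][]; } // placeholder
  private void enqueueParityPackets(byte[][] parity) { }             // placeholder
}
{code}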



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9241) HDFS clients can't construct HdfsConfiguration instances

2015-10-20 Thread Mingliang Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9241?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu updated HDFS-9241:

Attachment: HDFS-9241.001.patch

The v1 patch moves the {{HdfsConfiguration}} class from the {{hadoop-hdfs}} 
module to {{hadoop-hdfs-client}}, so that current downstream code will not be 
broken if it uses {{HdfsConfiguration.init()}} to forcibly load the default 
HDFS resources.

To add the deprecated keys in {{HdfsConfiguration}}, some config keys are moved 
to the {{hadoop-hdfs-client}} module as well.

> HDFS clients can't construct HdfsConfiguration instances
> 
>
> Key: HDFS-9241
> URL: https://issues.apache.org/jira/browse/HDFS-9241
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs-client
>Reporter: Steve Loughran
>Assignee: Mingliang Liu
> Attachments: HDFS-9241.000.patch, HDFS-9241.001.patch
>
>
> The changes for the hdfs client classpath make instantiating 
> {{HdfsConfiguration}} from the client impossible; it now lives only on the 
> server side. This breaks any app which creates one.
> I know people will look at the {{@Private}} tag and say "don't do that then", 
> but it's worth considering precisely why I, at least, do this: it's the only 
> way to guarantee that the hdfs-default and hdfs-site resources get on the 
> classpath, including all the security settings. It's precisely the use case 
> which {{HdfsConfigurationLoader.init();}} offers internally to the hdfs code.
> What am I meant to do now? 
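
For context, a typical downstream snippet of the pattern being defended here 
(a usage illustration, not code from the patch):
{code}
import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.hdfs.HdfsConfiguration;

public class Client {
  public static void main(String[] args) throws IOException {
    // new HdfsConfiguration() forces hdfs-default.xml and hdfs-site.xml onto
    // the resource list, so security and NN settings reach the FileSystem.
    Configuration conf = new HdfsConfiguration();
    FileSystem fs = FileSystem.get(conf);
    System.out.println(fs.getUri());
  }
}
{code}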



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9273) ACLs on root directory may be lost after NN restart

2015-10-20 Thread Xiao Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9273?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Chen updated HDFS-9273:

Status: Patch Available  (was: Open)

> ACLs on root directory may be lost after NN restart
> ---
>
> Key: HDFS-9273
> URL: https://issues.apache.org/jira/browse/HDFS-9273
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: HDFS
>Affects Versions: 2.7.1
>Reporter: Xiao Chen
>Assignee: Xiao Chen
> Attachments: HDFS-9273.001.patch
>
>
> After restarting the namenode, the ACLs on the root directory ("/") may be 
> lost if they have been rolled over into the fsimage.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9273) ACLs on root directory may be lost after NN restart

2015-10-20 Thread Xiao Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9273?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14966301#comment-14966301
 ] 

Xiao Chen commented on HDFS-9273:
-

Thank you [~cnauroth] for looking at this. I look forward to the reviews.

> ACLs on root directory may be lost after NN restart
> ---
>
> Key: HDFS-9273
> URL: https://issues.apache.org/jira/browse/HDFS-9273
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: HDFS
>Affects Versions: 2.7.1
>Reporter: Xiao Chen
>Assignee: Xiao Chen
> Attachments: HDFS-9273.001.patch
>
>
> After restarting the namenode, the ACLs on the root directory ("/") may be 
> lost if they have been rolled over into the fsimage.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7964) Add support for async edit logging

2015-10-20 Thread Walter Su (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7964?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14966300#comment-14966300
 ] 

Walter Su commented on HDFS-7964:
-

{code}
// Server.java#doRespond(Call)
if (call.connection.responseQueue.size() == 1) {
  processResponse(call.connection.responseQueue, true);
}
{code}
The thread that calls {{doRespond}} will sometimes run {{processResponse}} 
itself, rather than leaving it to the {{Responder}}:
FSEditLogAsync.run() --> RpcEdit.logSyncNotify(..) --> call.sendResponse() --> 
connection.sendResponse() --> responder.doRespond(call) --> processResponse(..) 
( possibly --> closeConnection(..) )
This could slow down the {{FSEditLogAsync}} thread, so [~jingzhao]'s concern #2 
seems valid.

> Add support for async edit logging
> --
>
> Key: HDFS-7964
> URL: https://issues.apache.org/jira/browse/HDFS-7964
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode
>Affects Versions: 2.0.2-alpha
>Reporter: Daryn Sharp
>Assignee: Daryn Sharp
> Attachments: HDFS-7964.patch, HDFS-7964.patch, HDFS-7964.patch
>
>
> Edit logging is a major source of contention within the NN.  logEdit is 
> called within the namespace write lock, while logSync is called outside of the 
> lock to allow greater concurrency.  The handler thread remains busy until 
> logSync returns, to provide the client with a durability guarantee for the 
> response.
> Write-heavy RPC load and/or slow IO causes handlers to stall in logSync.  
> Although the write lock is not held, readers are limited/starved and the call 
> queue fills.  Combining an edit log thread with the postponed RPC responses of 
> HADOOP-10300 will provide the same durability guarantee but immediately free 
> up the handlers.
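
A hedged sketch of that combination (illustrative, not the attached patch): 
handler threads enqueue the edit together with its deferred response and return 
immediately; one background thread batches logSync and only then releases the 
responses.
{code}
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

class AsyncEditLog implements Runnable {
  interface Edit { void logSyncNotify(); }  // e.g. sends the deferred response

  private final BlockingQueue<Edit> queue = new LinkedBlockingQueue<Edit>();

  void enqueue(Edit e) { queue.add(e); }    // handler no longer waits on logSync

  @Override
  public void run() {
    List<Edit> batch = new ArrayList<Edit>();
    while (true) {
      try {
        batch.add(queue.take());            // wait for the first pending edit
        queue.drainTo(batch);               // then batch whatever else arrived
        logSync();                          // one durable sync covers the batch
        for (Edit e : batch) {
          e.logSyncNotify();                // durability holds; respond now
        }
        batch.clear();
      } catch (InterruptedException ie) {
        return;
      }
    }
  }

  private void logSync() { /* flush the batched edits durably to the journal */ }
}
{code}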



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9274) Default value of dfs.datanode.directoryscan.throttle.limit.ms.per.sec should be consistent

2015-10-20 Thread Daniel Templeton (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9274?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14966262#comment-14966262
 ] 

Daniel Templeton commented on HDFS-9274:


Thanks, [~hitliuyi].  Not sure how that slipped past me.  +1 (non-binding)

> Default value of dfs.datanode.directoryscan.throttle.limit.ms.per.sec should 
> be consistent
> --
>
> Key: HDFS-9274
> URL: https://issues.apache.org/jira/browse/HDFS-9274
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Reporter: Yi Liu
>Assignee: Yi Liu
>Priority: Trivial
> Attachments: HDFS-9274.001.patch
>
>
> We always see the following error log while running:
> {noformat}
> ERROR datanode.DirectoryScanner (DirectoryScanner.java:<init>(430)) - 
> dfs.datanode.directoryscan.throttle.limit.ms.per.sec set to value below 1 
> ms/sec. Assuming default value of 1000
> {noformat}
> {code}
> <property>
>   <name>dfs.datanode.directoryscan.throttle.limit.ms.per.sec</name>
>   <value>0</value>
> ...
> {code}
> The default value should be 1000, consistent with 
> {{DFS_DATANODE_DIRECTORYSCAN_THROTTLE_LIMIT_MS_PER_SEC_DEFAULT}}.
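
For reference, the intended consistency in miniature (a sketch; the constant 
lives in DFSConfigKeys and the value comes from the log message above):
{code}
// The code-side default is 1000 ms/sec, so hdfs-default.xml should ship
// <value>1000</value> rather than <value>0</value>.
public static final String DFS_DATANODE_DIRECTORYSCAN_THROTTLE_LIMIT_MS_PER_SEC_KEY =
    "dfs.datanode.directoryscan.throttle.limit.ms.per.sec";
public static final int DFS_DATANODE_DIRECTORYSCAN_THROTTLE_LIMIT_MS_PER_SEC_DEFAULT = 1000;
{code}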



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8647) Abstract BlockManager's rack policy into BlockPlacementPolicy

2015-10-20 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8647?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14966260#comment-14966260
 ] 

Hadoop QA commented on HDFS-8647:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:red}-1{color} | pre-patch |  26m 32s | Pre-patch trunk has 1 extant 
Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:green}+1{color} | tests included |   0m  0s | The patch appears to 
include 6 new or modified test files. |
| {color:green}+1{color} | javac |  10m 30s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |  10m 53s | There were no new javadoc 
warning messages. |
| {color:green}+1{color} | release audit |   0m 25s | The applied patch does 
not increase the total number of release audit warnings. |
| {color:green}+1{color} | checkstyle |   2m 34s | There were no new checkstyle 
issues. |
| {color:red}-1{color} | whitespace |   0m  5s | The patch has 1  line(s) that 
end in whitespace. Use git apply --whitespace=fix. |
| {color:green}+1{color} | install |   1m 44s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 35s | The patch built with 
eclipse:eclipse. |
| {color:green}+1{color} | findbugs |   4m 29s | The patch does not introduce 
any new Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | common tests |   7m 32s | Tests passed in 
hadoop-common. |
| {color:red}-1{color} | hdfs tests |  68m 38s | Tests failed in hadoop-hdfs. |
| | | 134m  2s | |
\\
\\
|| Reason || Tests ||
| Failed unit tests | hadoop.hdfs.TestRecoverStripedFile |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12767622/HDFS-8647-009.patch |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / 0c4af0f |
| Pre-patch Findbugs warnings | 
https://builds.apache.org/job/PreCommit-HDFS-Build/13099/artifact/patchprocess/trunkFindbugsWarningshadoop-hdfs.html
 |
| whitespace | 
https://builds.apache.org/job/PreCommit-HDFS-Build/13099/artifact/patchprocess/whitespace.txt
 |
| hadoop-common test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/13099/artifact/patchprocess/testrun_hadoop-common.txt
 |
| hadoop-hdfs test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/13099/artifact/patchprocess/testrun_hadoop-hdfs.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/13099/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf905.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/13099/console |


This message was automatically generated.

> Abstract BlockManager's rack policy into BlockPlacementPolicy
> -
>
> Key: HDFS-8647
> URL: https://issues.apache.org/jira/browse/HDFS-8647
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Ming Ma
>Assignee: Brahma Reddy Battula
> Attachments: HDFS-8647-001.patch, HDFS-8647-002.patch, 
> HDFS-8647-003.patch, HDFS-8647-004.patch, HDFS-8647-004.patch, 
> HDFS-8647-005.patch, HDFS-8647-006.patch, HDFS-8647-007.patch, 
> HDFS-8647-008.patch, HDFS-8647-009.patch
>
>
> Sometimes we want the namenode to use an alternative block placement policy, 
> such as the upgrade domains of HDFS-7541.
> BlockManager has built-in assumptions about the rack policy in functions such 
> as useDelHint and blockHasEnoughRacks. That means when we have a new block 
> placement policy, we need to modify BlockManager to account for it. Ideally 
> BlockManager should ask the BlockPlacementPolicy object instead. That will 
> allow us to provide a new BlockPlacementPolicy without changing BlockManager.
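
To make the direction concrete, a sketch of what asking the policy could look 
like (method names illustrative, not the patch's actual API):
{code}
import org.apache.hadoop.hdfs.protocol.DatanodeInfo;

// BlockManager delegates rack-related decisions to the policy object instead
// of hard-coding them; new policies then need no BlockManager changes.
abstract class BlockPlacementPolicySketch {
  /** Does this set of replica locations satisfy the placement policy? */
  abstract boolean isPlacementSatisfied(DatanodeInfo[] locations);

  /** May this delete hint be honored without violating the policy? */
  abstract boolean useDelHint(DatanodeInfo hint, DatanodeInfo[] locations);
}
{code}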



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9266) hadoop-hdfs - Avoid unsafe split and append on fields that might be IPv6 literals

2015-10-20 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9266?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14966250#comment-14966250
 ] 

Hadoop QA commented on HDFS-9266:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:red}-1{color} | pre-patch |  22m 25s | Findbugs (version ) appears to 
be broken on HADOOP-11890. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:green}+1{color} | tests included |   0m  0s | The patch appears to 
include 9 new or modified test files. |
| {color:green}+1{color} | javac |  10m 51s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |  13m 38s | There were no new javadoc 
warning messages. |
| {color:red}-1{color} | release audit |   0m 20s | The applied patch generated 
1 release audit warnings. |
| {color:green}+1{color} | checkstyle |   1m 40s | There were no new checkstyle 
issues. |
| {color:green}+1{color} | whitespace |   0m 10s | The patch has no lines that 
end in whitespace. |
| {color:green}+1{color} | install |   2m  9s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 46s | The patch built with 
eclipse:eclipse. |
| {color:red}-1{color} | findbugs |   6m 31s | The patch appears to introduce 8 
new Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | native |   4m 14s | Pre-build of native portion |
| {color:red}-1{color} | hdfs tests |  63m  9s | Tests failed in hadoop-hdfs. |
| {color:green}+1{color} | hdfs tests |   0m 38s | Tests passed in 
hadoop-hdfs-client. |
| | | 126m 40s | |
\\
\\
|| Reason || Tests ||
| FindBugs | module:hadoop-hdfs |
| FindBugs | module:hadoop-hdfs-client |
| Failed unit tests | hadoop.hdfs.shortcircuit.TestShortCircuitCache |
|   | hadoop.hdfs.server.datanode.TestDirectoryScanner |
|   | hadoop.hdfs.TestRecoverStripedFile |
|   | hadoop.hdfs.server.namenode.TestFSNamesystem |
|   | hadoop.hdfs.qjournal.client.TestQuorumJournalManager |
| Timed out tests | 
org.apache.hadoop.hdfs.server.namenode.ha.TestStandbyCheckpoints |
|   | org.apache.hadoop.hdfs.server.namenode.ha.TestInitializeSharedEdits |
|   | org.apache.hadoop.hdfs.server.namenode.ha.TestRetryCacheWithHA |
|   | org.apache.hadoop.hdfs.server.namenode.ha.TestPipelinesFailover |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12767718/HDFS-9266-HADOOP-11890.2.patch
 |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | HADOOP-11890 / c84858b |
| Release Audit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/13098/artifact/patchprocess/patchReleaseAuditProblems.txt
 |
| Findbugs warnings | 
https://builds.apache.org/job/PreCommit-HDFS-Build/13098/artifact/patchprocess/newPatchFindbugsWarningshadoop-hdfs.html
 |
| Findbugs warnings | 
https://builds.apache.org/job/PreCommit-HDFS-Build/13098/artifact/patchprocess/newPatchFindbugsWarningshadoop-hdfs-client.html
 |
| hadoop-hdfs test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/13098/artifact/patchprocess/testrun_hadoop-hdfs.txt
 |
| hadoop-hdfs-client test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/13098/artifact/patchprocess/testrun_hadoop-hdfs-client.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/13098/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf907.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/13098/console |


This message was automatically generated.

> hadoop-hdfs - Avoid unsafe split and append on fields that might be IPv6 
> literals
> -
>
> Key: HDFS-9266
> URL: https://issues.apache.org/jira/browse/HDFS-9266
> Project: Hadoop HDFS
>  Issue Type: Task
>Reporter: Nemanja Matkovic
>Assignee: Nemanja Matkovic
>  Labels: ipv6
> Attachments: HDFS-9266-HADOOP-11890.1.patch, 
> HDFS-9266-HADOOP-11890.2.patch
>
>   Original Estimate: 48h
>  Remaining Estimate: 48h
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9273) ACLs on root directory may be lost after NN restart

2015-10-20 Thread Zhe Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9273?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhe Zhang updated HDFS-9273:

Affects Version/s: 2.7.1

> ACLs on root directory may be lost after NN restart
> ---
>
> Key: HDFS-9273
> URL: https://issues.apache.org/jira/browse/HDFS-9273
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: HDFS
>Affects Versions: 2.7.1
>Reporter: Xiao Chen
>Assignee: Xiao Chen
> Attachments: HDFS-9273.001.patch
>
>
> After restarting the namenode, the ACLs on the root directory ("/") may be 
> lost if they have been rolled over into the fsimage.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9070) Allow fsck display pending replica location information for being-written blocks

2015-10-20 Thread J.Andreina (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9070?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14966196#comment-14966196
 ] 

J.Andreina commented on HDFS-9070:
--

Thanks [~demongaorui].
The updated patch looks good to me.

> Allow fsck display pending replica location information for being-written 
> blocks
> 
>
> Key: HDFS-9070
> URL: https://issues.apache.org/jira/browse/HDFS-9070
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: GAO Rui
>Assignee: GAO Rui
> Attachments: HDFS-9070--HDFS-7285.00.patch, 
> HDFS-9070-HDFS-7285.00.patch, HDFS-9070-HDFS-7285.01.patch, 
> HDFS-9070-HDFS-7285.02.patch, HDFS-9070-trunk.03.patch, 
> HDFS-9070-trunk.04.patch, HDFS-9070-trunk.05.patch, HDFS-9070-trunk.06.patch, 
> HDFS-9070-trunk.07.patch
>
>
> When an EC file is being written, it can be helpful to allow fsck to display 
> datanode information for the block group of the file being written.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9274) Default value of dfs.datanode.directoryscan.throttle.limit.ms.per.sec should be consistent

2015-10-20 Thread Yi Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9274?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yi Liu updated HDFS-9274:
-
Status: Patch Available  (was: Open)

> Default value of dfs.datanode.directoryscan.throttle.limit.ms.per.sec should 
> be consistent
> --
>
> Key: HDFS-9274
> URL: https://issues.apache.org/jira/browse/HDFS-9274
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Reporter: Yi Liu
>Assignee: Yi Liu
>Priority: Trivial
> Attachments: HDFS-9274.001.patch
>
>
> We always see the following error log while running:
> {noformat}
> ERROR datanode.DirectoryScanner (DirectoryScanner.java:<init>(430)) - 
> dfs.datanode.directoryscan.throttle.limit.ms.per.sec set to value below 1 
> ms/sec. Assuming default value of 1000
> {noformat}
> {code}
> <property>
>   <name>dfs.datanode.directoryscan.throttle.limit.ms.per.sec</name>
>   <value>0</value>
> ...
> {code}
> The default value should be 1000, consistent with 
> {{DFS_DATANODE_DIRECTORYSCAN_THROTTLE_LIMIT_MS_PER_SEC_DEFAULT}}.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9274) Default value of dfs.datanode.directoryscan.throttle.limit.ms.per.sec should be consistent

2015-10-20 Thread Yi Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9274?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yi Liu updated HDFS-9274:
-
Attachment: HDFS-9274.001.patch

> Default value of dfs.datanode.directoryscan.throttle.limit.ms.per.sec should 
> be consistent
> --
>
> Key: HDFS-9274
> URL: https://issues.apache.org/jira/browse/HDFS-9274
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Reporter: Yi Liu
>Assignee: Yi Liu
>Priority: Trivial
> Attachments: HDFS-9274.001.patch
>
>
> We always see the following error log while running:
> {noformat}
> ERROR datanode.DirectoryScanner (DirectoryScanner.java:<init>(430)) - 
> dfs.datanode.directoryscan.throttle.limit.ms.per.sec set to value below 1 
> ms/sec. Assuming default value of 1000
> {noformat}
> {code}
> <property>
>   <name>dfs.datanode.directoryscan.throttle.limit.ms.per.sec</name>
>   <value>0</value>
> ...
> {code}
> The default value should be 1000, consistent with 
> {{DFS_DATANODE_DIRECTORYSCAN_THROTTLE_LIMIT_MS_PER_SEC_DEFAULT}}.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9266) hadoop-hdfs - Avoid unsafe split and append on fields that might be IPv6 literals

2015-10-20 Thread Nemanja Matkovic (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9266?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14966165#comment-14966165
 ] 

Nemanja Matkovic commented on HDFS-9266:


For the test case failures:
   - TestBlockManager.testBlocksAreNotUnderreplicatedInSingleRack --> the same 
test case failed in the same way in Hdfs-trunk build # 2448 ==> this is a flaky 
test.
   - TestNodeCount.testNodeCount --> I see the same test case failing in the 
same way in Hdfs-trunk build # 2448 ==> this is a flaky test.
   - TestRecoverStripedFile --> our branch is based on trunk after Erasure 
Coding was merged; this test was already failing then, so the failures here are 
not a regression from these changes.
   - TestReplaceDatanodeOnFailure --> I see the same test case failing in the 
same way in Hdfs-trunk build # 2452 ==> this is a flaky test.
   - TestWriteReadStripedFile --> our branch is based on trunk after Erasure 
Coding was merged; this test was already failing then, so the failures here are 
not a regression from these changes.
   - TestFileTruncate --> flaky test, tracked by HDFS-9224.
   - TestRollingUpgrade --> I see the same test suite (not test case) failing 
in the same way in Hdfs-trunk build # 2454 ==> this might be a flaky test; we 
will see with the next patch whether it passes.
   - TestNameNodeRespectsBindHostKeys --> this one is my fault; I forgot to 
test on an IPv4-only machine after adding test cases. I will upload a new patch 
soon.


> hadoop-hdfs - Avoid unsafe split and append on fields that might be IPv6 
> literals
> -
>
> Key: HDFS-9266
> URL: https://issues.apache.org/jira/browse/HDFS-9266
> Project: Hadoop HDFS
>  Issue Type: Task
>Reporter: Nemanja Matkovic
>Assignee: Nemanja Matkovic
>  Labels: ipv6
> Attachments: HDFS-9266-HADOOP-11890.1.patch, 
> HDFS-9266-HADOOP-11890.2.patch
>
>   Original Estimate: 48h
>  Remaining Estimate: 48h
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9266) hadoop-hdfs - Avoid unsafe split and append on fields that might be IPv6 literals

2015-10-20 Thread Nemanja Matkovic (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9266?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nemanja Matkovic updated HDFS-9266:
---
Attachment: HDFS-9266-HADOOP-11890.2.patch

Don't validate IPv6 binding when running on an IPv4-only box.
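
As background, the unsafe pattern the title refers to, in miniature 
(illustrative, not code from the patch):
{code}
// "host:port" parsing that works for IPv4 addresses and hostnames breaks on
// IPv6 literals, which contain ':' themselves.
public class HostPort {
  static String hostOf(String addr) {
    // Unsafe: "2001:db8::1:8020".split(":")[0] would yield "2001".
    // Safer: handle a bracketed literal "[2001:db8::1]:8020" explicitly,
    // and otherwise split on the last ':'.
    if (addr.startsWith("[")) {
      return addr.substring(1, addr.indexOf(']'));
    }
    int colon = addr.lastIndexOf(':');
    return colon < 0 ? addr : addr.substring(0, colon);
  }

  public static void main(String[] args) {
    System.out.println(hostOf("nn1.example.com:8020")); // nn1.example.com
    System.out.println(hostOf("[2001:db8::1]:8020"));   // 2001:db8::1
  }
}
{code}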

> hadoop-hdfs - Avoid unsafe split and append on fields that might be IPv6 
> literals
> -
>
> Key: HDFS-9266
> URL: https://issues.apache.org/jira/browse/HDFS-9266
> Project: Hadoop HDFS
>  Issue Type: Task
>Reporter: Nemanja Matkovic
>Assignee: Nemanja Matkovic
>  Labels: ipv6
> Attachments: HDFS-9266-HADOOP-11890.1.patch, 
> HDFS-9266-HADOOP-11890.2.patch
>
>   Original Estimate: 48h
>  Remaining Estimate: 48h
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HDFS-9274) Default value of dfs.datanode.directoryscan.throttle.limit.ms.per.sec should be consistent

2015-10-20 Thread Yi Liu (JIRA)
Yi Liu created HDFS-9274:


 Summary: Default value of 
dfs.datanode.directoryscan.throttle.limit.ms.per.sec should be consistent
 Key: HDFS-9274
 URL: https://issues.apache.org/jira/browse/HDFS-9274
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: datanode
Reporter: Yi Liu
Assignee: Yi Liu
Priority: Trivial


We always see the following error log while running:
{noformat}
ERROR datanode.DirectoryScanner (DirectoryScanner.java:<init>(430)) - 
dfs.datanode.directoryscan.throttle.limit.ms.per.sec set to value below 1 
ms/sec. Assuming default value of 1000
{noformat}

{code}
<property>
  <name>dfs.datanode.directoryscan.throttle.limit.ms.per.sec</name>
  <value>0</value>
...
{code}
The default value should be 1000, consistent with 
{{DFS_DATANODE_DIRECTORYSCAN_THROTTLE_LIMIT_MS_PER_SEC_DEFAULT}}.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9117) Config file reader / options classes for libhdfs++

2015-10-20 Thread Bob Hansen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9117?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bob Hansen updated HDFS-9117:
-
Attachment: HDFS-9117.HDFS-8707.empty.patch

> Config file reader / options classes for libhdfs++
> --
>
> Key: HDFS-9117
> URL: https://issues.apache.org/jira/browse/HDFS-9117
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Affects Versions: HDFS-8707
>Reporter: Bob Hansen
>Assignee: Bob Hansen
> Attachments: HDFS-9117.HDFS-8707.001.patch, 
> HDFS-9117.HDFS-8707.002.patch, HDFS-9117.HDFS-8707.003.patch, 
> HDFS-9117.HDFS-8707.empty.patch
>
>
> For environmental compatibility with HDFS installations, libhdfs++ should be 
> able to read the configuration from Hadoop XML files and behave in line with 
> the Java implementation.
> Most notably, machine names and ports should be readable from Hadoop XML 
> configuration files.
> Similarly, an internal Options architecture for libhdfs++ should be developed 
> to efficiently transport the configuration information within the system.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7087) Ability to list /.reserved

2015-10-20 Thread Xiao Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7087?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Chen updated HDFS-7087:

Status: Patch Available  (was: Open)

> Ability to list /.reserved
> --
>
> Key: HDFS-7087
> URL: https://issues.apache.org/jira/browse/HDFS-7087
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 2.6.0
>Reporter: Andrew Wang
>Assignee: Xiao Chen
> Attachments: HDFS-7087.001.patch, HDFS-7087.002.patch, 
> HDFS-7087.003.patch, HDFS-7087.draft.patch
>
>
> We have two special paths within /.reserved now, /.reserved/.inodes and 
> /.reserved/raw. It seems like we should be able to list /.reserved to see 
> them.
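
A usage sketch of the desired behavior (assuming the patch makes /.reserved 
listable; not test code from the patch):
{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class ListReserved {
  public static void main(String[] args) throws Exception {
    FileSystem fs = FileSystem.get(new Configuration());
    // With the improvement, this should list the two special paths instead
    // of failing: /.reserved/.inodes and /.reserved/raw.
    for (FileStatus st : fs.listStatus(new Path("/.reserved"))) {
      System.out.println(st.getPath());
    }
  }
}
{code}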



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-3059) ssl-server.xml causes NullPointer

2015-10-20 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3059?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14966097#comment-14966097
 ] 

Hudson commented on HDFS-3059:
--

FAILURE: Integrated in Hadoop-Hdfs-trunk #2456 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/2456/])
HDFS-3059. ssl-server.xml causes NullPointer. Contributed by Xiao Chen. (wang: 
rev 6c8b6f3646b31a3e028704bc7fd78bf319f89f0a)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSUtil.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestHDFSServerPorts.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/SecondaryNameNode.java


> ssl-server.xml causes NullPointer
> -
>
> Key: HDFS-3059
> URL: https://issues.apache.org/jira/browse/HDFS-3059
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode, security
>Affects Versions: 2.7.1
> Environment: in core-site.xml:
> {code:xml}
>   <property>
>     <name>hadoop.security.authentication</name>
>     <value>kerberos</value>
>   </property>
>   <property>
>     <name>hadoop.security.authorization</name>
>     <value>true</value>
>   </property>
> {code}
> in hdfs-site.xml:
> {code:xml}
>   <property>
>     <name>dfs.https.server.keystore.resource</name>
>     <value>/etc/hadoop/conf/ssl-server.xml</value>
>   </property>
>   <property>
>     <name>dfs.https.enable</name>
>     <value>true</value>
>   </property>
>   <property>
>     ...other security props
>   </property>
> {code}
>Reporter: Evert Lammerts
>Assignee: Xiao Chen
>Priority: Minor
>  Labels: BB2015-05-TBR
> Fix For: 2.8.0
>
> Attachments: HDFS-3059-branch2.patch, HDFS-3059.02.patch, 
> HDFS-3059.03.patch, HDFS-3059.04.patch, HDFS-3059.05.patch, 
> HDFS-3059.06.patch, HDFS-3059.07.patch, HDFS-3059.08.patch, HDFS-3059.patch, 
> HDFS-3059.patch.2
>
>
> If ssl is enabled (dfs.https.enable) but ssl-server.xml is not available, a 
> DN will crash during startup while setting up an SSL socket with a 
> NullPointerException:
> {noformat}12/03/07 17:08:36 DEBUG security.Krb5AndCertsSslSocketConnector: 
> useKerb = false, useCerts = true
> jetty.ssl.password : jetty.ssl.keypassword : 12/03/07 17:08:36 INFO 
> mortbay.log: jetty-6.1.26.cloudera.1
> 12/03/07 17:08:36 INFO mortbay.log: Started 
> selectchannelconnec...@p-worker35.alley.sara.nl:1006
> 12/03/07 17:08:36 DEBUG security.Krb5AndCertsSslSocketConnector: Creating new 
> KrbServerSocket for: 0.0.0.0
> 12/03/07 17:08:36 WARN mortbay.log: java.lang.NullPointerException
> 12/03/07 17:08:36 WARN mortbay.log: failed 
> Krb5AndCertsSslSocketConnector@0.0.0.0:50475: java.io.IOException: 
> !JsseListener: java.lang.NullPointerException
> 12/03/07 17:08:36 WARN mortbay.log: failed Server@604788d5: 
> java.io.IOException: !JsseListener: java.lang.NullPointerException
> 12/03/07 17:08:36 INFO mortbay.log: Stopped 
> Krb5AndCertsSslSocketConnector@0.0.0.0:50475
> 12/03/07 17:08:36 INFO mortbay.log: Stopped 
> selectchannelconnec...@p-worker35.alley.sara.nl:1006
> 12/03/07 17:08:37 INFO datanode.DataNode: Waiting for threadgroup to exit, 
> active threads is 0{noformat}
> The same happens if I set dfs.https.server.keystore.resource to an absolute 
> path to an existing file: in that case the file cannot be found, but not even 
> a WARN is given.
> Since we know the resource named by dfs.https.server.keystore.resource must 
> specify 4 properties (ssl.server.truststore.location, 
> ssl.server.keystore.location, ssl.server.keystore.password, and 
> ssl.server.keystore.keypassword), we should check that they are set and throw 
> an IOException if they are not.
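
A minimal sketch of the proposed check (class and method names illustrative, 
not the committed patch):
{code}
import java.io.IOException;
import org.apache.hadoop.conf.Configuration;

public class SslConfCheck {
  // Load the resource named by dfs.https.server.keystore.resource and fail
  // fast with a clear IOException if a required SSL property is missing.
  static void validate(Configuration conf) throws IOException {
    Configuration sslConf = new Configuration(false);
    sslConf.addResource(conf.get(
        "dfs.https.server.keystore.resource", "ssl-server.xml"));
    String[] required = {
        "ssl.server.truststore.location",
        "ssl.server.keystore.location",
        "ssl.server.keystore.password",
        "ssl.server.keystore.keypassword" };
    for (String key : required) {
      if (sslConf.get(key) == null) {
        throw new IOException(key + " is not set in ssl-server.xml");
      }
    }
  }
}
{code}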



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7087) Ability to list /.reserved

2015-10-20 Thread Xiao Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7087?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14966095#comment-14966095
 ] 

Xiao Chen commented on HDFS-7087:
-

Thank you for the additional comments, Andrew!
The attached patch 003 is rebased onto the latest trunk and addresses all of 
your comments. 

> Ability to list /.reserved
> --
>
> Key: HDFS-7087
> URL: https://issues.apache.org/jira/browse/HDFS-7087
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 2.6.0
>Reporter: Andrew Wang
>Assignee: Xiao Chen
> Attachments: HDFS-7087.001.patch, HDFS-7087.002.patch, 
> HDFS-7087.003.patch, HDFS-7087.draft.patch
>
>
> We have two special paths within /.reserved now, /.reserved/.inodes and 
> /.reserved/raw. It seems like we should be able to list /.reserved to see 
> them.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9270) TestShortCircuitLocalRead should not leave socket after unit test

2015-10-20 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9270?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14966096#comment-14966096
 ] 

Hudson commented on HDFS-9270:
--

FAILURE: Integrated in Hadoop-Hdfs-trunk #2456 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/2456/])
HDFS-9270. TestShortCircuitLocalRead should not leave socket after unit 
(cmccabe: rev 6381ddc096699d680233db3b9efff9321528eedc)
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/tracing/TestTracingShortCircuitLocalRead.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/shortcircuit/TestShortCircuitLocalRead.java


> TestShortCircuitLocalRead should not leave socket after unit test
> -
>
> Key: HDFS-9270
> URL: https://issues.apache.org/jira/browse/HDFS-9270
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Affects Versions: 2.7.1
>Reporter: Masatake Iwasaki
>Assignee: Masatake Iwasaki
>Priority: Minor
> Fix For: 2.8.0
>
> Attachments: HDFS-9270.001.patch
>
>
> Unix domain sockets created by TestShortCircuitLocalRead and 
> TestTracingShortCircuitLocalRead are not removed before finishing the tests.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7087) Ability to list /.reserved

2015-10-20 Thread Xiao Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7087?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Chen updated HDFS-7087:

Status: Open  (was: Patch Available)

> Ability to list /.reserved
> --
>
> Key: HDFS-7087
> URL: https://issues.apache.org/jira/browse/HDFS-7087
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 2.6.0
>Reporter: Andrew Wang
>Assignee: Xiao Chen
> Attachments: HDFS-7087.001.patch, HDFS-7087.002.patch, 
> HDFS-7087.003.patch, HDFS-7087.draft.patch
>
>
> We have two special paths within /.reserved now, /.reserved/.inodes and 
> /.reserved/raw. It seems like we should be able to list /.reserved to see 
> them.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7087) Ability to list /.reserved

2015-10-20 Thread Xiao Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7087?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Chen updated HDFS-7087:

Attachment: HDFS-7087.003.patch

> Ability to list /.reserved
> --
>
> Key: HDFS-7087
> URL: https://issues.apache.org/jira/browse/HDFS-7087
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 2.6.0
>Reporter: Andrew Wang
>Assignee: Xiao Chen
> Attachments: HDFS-7087.001.patch, HDFS-7087.002.patch, 
> HDFS-7087.003.patch, HDFS-7087.draft.patch
>
>
> We have two special paths within /.reserved now, /.reserved/.inodes and 
> /.reserved/raw. It seems like we should be able to list /.reserved to see 
> them.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9273) ACLs on root directory may be lost after NN restart

2015-10-20 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9273?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14966078#comment-14966078
 ] 

Chris Nauroth commented on HDFS-9273:
-

[~xiaochen], thank you very much.  Nice catch!  The patch makes sense to me.  
I'll give it a closer review tomorrow.

> ACLs on root directory may be lost after NN restart
> ---
>
> Key: HDFS-9273
> URL: https://issues.apache.org/jira/browse/HDFS-9273
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: HDFS
>Reporter: Xiao Chen
>Assignee: Xiao Chen
> Attachments: HDFS-9273.001.patch
>
>
> After restarting the namenode, the ACLs on the root directory ("/") may be 
> lost if they have been rolled over into the fsimage.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-3059) ssl-server.xml causes NullPointer

2015-10-20 Thread Xiao Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3059?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14966064#comment-14966064
 ] 

Xiao Chen commented on HDFS-3059:
-

Thank you Andrew!

> ssl-server.xml causes NullPointer
> -
>
> Key: HDFS-3059
> URL: https://issues.apache.org/jira/browse/HDFS-3059
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode, security
>Affects Versions: 2.7.1
> Environment: in core-site.xml:
> {code:xml}
>   <property>
>     <name>hadoop.security.authentication</name>
>     <value>kerberos</value>
>   </property>
>   <property>
>     <name>hadoop.security.authorization</name>
>     <value>true</value>
>   </property>
> {code}
> in hdfs-site.xml:
> {code:xml}
>   <property>
>     <name>dfs.https.server.keystore.resource</name>
>     <value>/etc/hadoop/conf/ssl-server.xml</value>
>   </property>
>   <property>
>     <name>dfs.https.enable</name>
>     <value>true</value>
>   </property>
>   <property>
>     ...other security props
>   </property>
> {code}
>Reporter: Evert Lammerts
>Assignee: Xiao Chen
>Priority: Minor
>  Labels: BB2015-05-TBR
> Fix For: 2.8.0
>
> Attachments: HDFS-3059-branch2.patch, HDFS-3059.02.patch, 
> HDFS-3059.03.patch, HDFS-3059.04.patch, HDFS-3059.05.patch, 
> HDFS-3059.06.patch, HDFS-3059.07.patch, HDFS-3059.08.patch, HDFS-3059.patch, 
> HDFS-3059.patch.2
>
>
> If ssl is enabled (dfs.https.enable) but ssl-server.xml is not available, a 
> DN will crash during startup while setting up an SSL socket with a 
> NullPointerException:
> {noformat}12/03/07 17:08:36 DEBUG security.Krb5AndCertsSslSocketConnector: 
> useKerb = false, useCerts = true
> jetty.ssl.password : jetty.ssl.keypassword : 12/03/07 17:08:36 INFO 
> mortbay.log: jetty-6.1.26.cloudera.1
> 12/03/07 17:08:36 INFO mortbay.log: Started 
> selectchannelconnec...@p-worker35.alley.sara.nl:1006
> 12/03/07 17:08:36 DEBUG security.Krb5AndCertsSslSocketConnector: Creating new 
> KrbServerSocket for: 0.0.0.0
> 12/03/07 17:08:36 WARN mortbay.log: java.lang.NullPointerException
> 12/03/07 17:08:36 WARN mortbay.log: failed 
> Krb5AndCertsSslSocketConnector@0.0.0.0:50475: java.io.IOException: 
> !JsseListener: java.lang.NullPointerException
> 12/03/07 17:08:36 WARN mortbay.log: failed Server@604788d5: 
> java.io.IOException: !JsseListener: java.lang.NullPointerException
> 12/03/07 17:08:36 INFO mortbay.log: Stopped 
> Krb5AndCertsSslSocketConnector@0.0.0.0:50475
> 12/03/07 17:08:36 INFO mortbay.log: Stopped 
> selectchannelconnec...@p-worker35.alley.sara.nl:1006
> 12/03/07 17:08:37 INFO datanode.DataNode: Waiting for threadgroup to exit, 
> active threads is 0{noformat}
> The same happens if I set dfs.https.server.keystore.resource to an absolute 
> path to an existing file: in that case the file cannot be found, but not even 
> a WARN is given.
> Since we know the resource named by dfs.https.server.keystore.resource must 
> specify 4 properties (ssl.server.truststore.location, 
> ssl.server.keystore.location, ssl.server.keystore.password, and 
> ssl.server.keystore.keypassword), we should check that they are set and throw 
> an IOException if they are not.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9273) ACLs on root directory may be lost after NN restart

2015-10-20 Thread Xiao Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9273?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Chen updated HDFS-9273:

Attachment: HDFS-9273.001.patch

Attached patch 001. Please review and advise, thanks.
The fix is to copy the AclFeature from the temporary root variable to the 
{{rootDir}} of FSDirectory.
Added a test case to simulate the scenario of loading the root directory's ACL 
from the FsImage.
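
In outline, the fix direction reads like this (accessor names illustrative, 
not the exact patch):
{code}
// After loading the fsimage, copy the temporary root's AclFeature onto the
// permanent rootDir of FSDirectory; otherwise the root's ACLs vanish.
void copyRootAcl(INodeDirectory loadedRoot, INodeDirectory rootDir) {
  AclFeature acl = loadedRoot.getAclFeature();
  if (acl != null && rootDir.getAclFeature() == null) {
    rootDir.addAclFeature(acl);
  }
}
{code}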

> ACLs on root directory may be lost after NN restart
> ---
>
> Key: HDFS-9273
> URL: https://issues.apache.org/jira/browse/HDFS-9273
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: HDFS
>Reporter: Xiao Chen
>Assignee: Xiao Chen
> Attachments: HDFS-9273.001.patch
>
>
> After restarting the namenode, the ACLs on the root directory ("/") may be 
> lost if they have been rolled over into the fsimage.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-3059) ssl-server.xml causes NullPointer

2015-10-20 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-3059?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HDFS-3059:
--
   Resolution: Fixed
Fix Version/s: (was: 3.0.0)
   2.8.0
   Status: Resolved  (was: Patch Available)

LGTM, thanks Xiao, committed to branch-2

> ssl-server.xml causes NullPointer
> -
>
> Key: HDFS-3059
> URL: https://issues.apache.org/jira/browse/HDFS-3059
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode, security
>Affects Versions: 2.7.1
> Environment: in core-site.xml:
> {code:xml}
>   <property>
>     <name>hadoop.security.authentication</name>
>     <value>kerberos</value>
>   </property>
>   <property>
>     <name>hadoop.security.authorization</name>
>     <value>true</value>
>   </property>
> {code}
> in hdfs-site.xml:
> {code:xml}
>   <property>
>     <name>dfs.https.server.keystore.resource</name>
>     <value>/etc/hadoop/conf/ssl-server.xml</value>
>   </property>
>   <property>
>     <name>dfs.https.enable</name>
>     <value>true</value>
>   </property>
>   <property>
>     ...other security props
>   </property>
> {code}
>Reporter: Evert Lammerts
>Assignee: Xiao Chen
>Priority: Minor
>  Labels: BB2015-05-TBR
> Fix For: 2.8.0
>
> Attachments: HDFS-3059-branch2.patch, HDFS-3059.02.patch, 
> HDFS-3059.03.patch, HDFS-3059.04.patch, HDFS-3059.05.patch, 
> HDFS-3059.06.patch, HDFS-3059.07.patch, HDFS-3059.08.patch, HDFS-3059.patch, 
> HDFS-3059.patch.2
>
>
> If ssl is enabled (dfs.https.enable) but ssl-server.xml is not available, a 
> DN will crash during startup while setting up an SSL socket with a 
> NullPointerException:
> {noformat}12/03/07 17:08:36 DEBUG security.Krb5AndCertsSslSocketConnector: 
> useKerb = false, useCerts = true
> jetty.ssl.password : jetty.ssl.keypassword : 12/03/07 17:08:36 INFO 
> mortbay.log: jetty-6.1.26.cloudera.1
> 12/03/07 17:08:36 INFO mortbay.log: Started 
> selectchannelconnec...@p-worker35.alley.sara.nl:1006
> 12/03/07 17:08:36 DEBUG security.Krb5AndCertsSslSocketConnector: Creating new 
> KrbServerSocket for: 0.0.0.0
> 12/03/07 17:08:36 WARN mortbay.log: java.lang.NullPointerException
> 12/03/07 17:08:36 WARN mortbay.log: failed 
> Krb5AndCertsSslSocketConnector@0.0.0.0:50475: java.io.IOException: 
> !JsseListener: java.lang.NullPointerException
> 12/03/07 17:08:36 WARN mortbay.log: failed Server@604788d5: 
> java.io.IOException: !JsseListener: java.lang.NullPointerException
> 12/03/07 17:08:36 INFO mortbay.log: Stopped 
> Krb5AndCertsSslSocketConnector@0.0.0.0:50475
> 12/03/07 17:08:36 INFO mortbay.log: Stopped 
> selectchannelconnec...@p-worker35.alley.sara.nl:1006
> 12/03/07 17:08:37 INFO datanode.DataNode: Waiting for threadgroup to exit, 
> active threads is 0{noformat}
> The same happens if I set dfs.https.server.keystore.resource to an absolute 
> path to an existing file: in that case the file cannot be found, but not even 
> a WARN is given.
> Since we know the resource named by dfs.https.server.keystore.resource must 
> specify 4 properties (ssl.server.truststore.location, 
> ssl.server.keystore.location, ssl.server.keystore.password, and 
> ssl.server.keystore.keypassword), we should check that they are set and throw 
> an IOException if they are not.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HDFS-9273) ACLs on root directory may be lost after NN restart

2015-10-20 Thread Xiao Chen (JIRA)
Xiao Chen created HDFS-9273:
---

 Summary: ACLs on root directory may be lost after NN restart
 Key: HDFS-9273
 URL: https://issues.apache.org/jira/browse/HDFS-9273
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: HDFS
Reporter: Xiao Chen
Assignee: Xiao Chen


After restarting the namenode, the ACLs on the root directory ("/") may be lost 
if they have been rolled over into the fsimage.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-3059) ssl-server.xml causes NullPointer

2015-10-20 Thread Xiao Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-3059?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Chen updated HDFS-3059:

Attachment: HDFS-3059-branch2.patch

Hi [~andrew.wang], I've attached the patch based on branch-2. Thanks again!

> ssl-server.xml causes NullPointer
> -
>
> Key: HDFS-3059
> URL: https://issues.apache.org/jira/browse/HDFS-3059
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode, security
>Affects Versions: 2.7.1
> Environment: in core-site.xml:
> {code:xml}
>   <property>
>     <name>hadoop.security.authentication</name>
>     <value>kerberos</value>
>   </property>
>   <property>
>     <name>hadoop.security.authorization</name>
>     <value>true</value>
>   </property>
> {code}
> in hdfs-site.xml:
> {code:xml}
>   <property>
>     <name>dfs.https.server.keystore.resource</name>
>     <value>/etc/hadoop/conf/ssl-server.xml</value>
>   </property>
>   <property>
>     <name>dfs.https.enable</name>
>     <value>true</value>
>   </property>
>   <property>
>     ...other security props
>   </property>
> {code}
>Reporter: Evert Lammerts
>Assignee: Xiao Chen
>Priority: Minor
>  Labels: BB2015-05-TBR
> Fix For: 3.0.0
>
> Attachments: HDFS-3059-branch2.patch, HDFS-3059.02.patch, 
> HDFS-3059.03.patch, HDFS-3059.04.patch, HDFS-3059.05.patch, 
> HDFS-3059.06.patch, HDFS-3059.07.patch, HDFS-3059.08.patch, HDFS-3059.patch, 
> HDFS-3059.patch.2
>
>
> If ssl is enabled (dfs.https.enable) but ssl-server.xml is not available, a 
> DN will crash during startup while setting up an SSL socket with a 
> NullPointerException:
> {noformat}12/03/07 17:08:36 DEBUG security.Krb5AndCertsSslSocketConnector: 
> useKerb = false, useCerts = true
> jetty.ssl.password : jetty.ssl.keypassword : 12/03/07 17:08:36 INFO 
> mortbay.log: jetty-6.1.26.cloudera.1
> 12/03/07 17:08:36 INFO mortbay.log: Started 
> selectchannelconnec...@p-worker35.alley.sara.nl:1006
> 12/03/07 17:08:36 DEBUG security.Krb5AndCertsSslSocketConnector: Creating new 
> KrbServerSocket for: 0.0.0.0
> 12/03/07 17:08:36 WARN mortbay.log: java.lang.NullPointerException
> 12/03/07 17:08:36 WARN mortbay.log: failed 
> Krb5AndCertsSslSocketConnector@0.0.0.0:50475: java.io.IOException: 
> !JsseListener: java.lang.NullPointerException
> 12/03/07 17:08:36 WARN mortbay.log: failed Server@604788d5: 
> java.io.IOException: !JsseListener: java.lang.NullPointerException
> 12/03/07 17:08:36 INFO mortbay.log: Stopped 
> Krb5AndCertsSslSocketConnector@0.0.0.0:50475
> 12/03/07 17:08:36 INFO mortbay.log: Stopped 
> selectchannelconnec...@p-worker35.alley.sara.nl:1006
> 12/03/07 17:08:37 INFO datanode.DataNode: Waiting for threadgroup to exit, 
> active threads is 0{noformat}
> The same happens if I set dfs.https.server.keystore.resource to an absolute 
> path to an existing file: in that case the file cannot be found, but not even 
> a WARN is given.
> Since we know the resource named by dfs.https.server.keystore.resource must 
> specify 4 properties (ssl.server.truststore.location, 
> ssl.server.keystore.location, ssl.server.keystore.password, and 
> ssl.server.keystore.keypassword), we should check that they are set and throw 
> an IOException if they are not.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9267) TestDiskError should get stored replicas through FsDatasetTestUtils.

2015-10-20 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9267?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14966048#comment-14966048
 ] 

Hadoop QA commented on HDFS-9267:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:red}-1{color} | pre-patch |   8m  2s | Pre-patch trunk has 1 extant 
Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:green}+1{color} | tests included |   0m  0s | The patch appears to 
include 3 new or modified test files. |
| {color:green}+1{color} | javac |   7m 58s | There were no new javac warning 
messages. |
| {color:green}+1{color} | release audit |   0m 21s | The applied patch does 
not increase the total number of release audit warnings. |
| {color:green}+1{color} | checkstyle |   1m 25s | There were no new checkstyle 
issues. |
| {color:red}-1{color} | whitespace |   0m  0s | The patch has 1  line(s) that 
end in whitespace. Use git apply --whitespace=fix. |
| {color:green}+1{color} | install |   1m 26s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 34s | The patch built with 
eclipse:eclipse. |
| {color:green}+1{color} | findbugs |   2m 32s | The patch does not introduce 
any new Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | native |   1m  3s | Pre-build of native portion |
| {color:red}-1{color} | hdfs tests |  49m 27s | Tests failed in hadoop-hdfs. |
| | |  72m 52s | |
\\
\\
|| Reason || Tests ||
| Failed unit tests | hadoop.hdfs.server.datanode.TestDataNodeHotSwapVolumes |
|   | hadoop.hdfs.server.datanode.TestCachingStrategy |
|   | hadoop.hdfs.server.datanode.TestDataNodeInitStorage |
|   | hadoop.hdfs.server.datanode.TestBlockReplacement |
|   | hadoop.hdfs.qjournal.TestSecureNNWithQJM |
|   | hadoop.hdfs.server.datanode.TestNNHandlesBlockReportPerStorage |
|   | hadoop.hdfs.server.balancer.TestBalancerWithSaslDataTransfer |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12767693/HDFS-9267.01.patch |
| Optional Tests | javac unit findbugs checkstyle |
| git revision | trunk / 6c8b6f3 |
| Pre-patch Findbugs warnings | 
https://builds.apache.org/job/PreCommit-HDFS-Build/13093/artifact/patchprocess/trunkFindbugsWarningshadoop-hdfs.html
 |
| whitespace | 
https://builds.apache.org/job/PreCommit-HDFS-Build/13093/artifact/patchprocess/whitespace.txt
 |
| hadoop-hdfs test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/13093/artifact/patchprocess/testrun_hadoop-hdfs.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/13093/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf904.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/13093/console |


This message was automatically generated.

> TestDiskError should get stored replicas through FsDatasetTestUtils.
> 
>
> Key: HDFS-9267
> URL: https://issues.apache.org/jira/browse/HDFS-9267
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: test
>Affects Versions: 2.7.1
>Reporter: Lei (Eddy) Xu
>Assignee: Lei (Eddy) Xu
>Priority: Minor
> Attachments: HDFS-9267.00.patch, HDFS-9267.01.patch
>
>
> {{TestDiskError#testReplicationError}} scans local directories to verify 
> blocks and metadata files, which leaks the details of the {{FsDataset}} 
> implementation. 
> This JIRA will abstract the "scanning" operation into {{FsDatasetTestUtils}}.
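
In outline, the abstraction could look like this (a sketch; the real 
{{FsDatasetTestUtils}} API may differ):
{code}
import java.io.IOException;
import java.util.Iterator;
import org.apache.hadoop.hdfs.server.datanode.Replica;

// Tests ask the dataset for its stored replicas instead of scanning local
// directories, so the on-disk layout stays an implementation detail.
interface FsDatasetTestUtilsSketch {
  /** Replicas stored for the given block pool, independent of disk layout. */
  Iterator<Replica> getStoredReplicas(String bpid) throws IOException;
}
{code}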



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-3059) ssl-server.xml causes NullPointer

2015-10-20 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3059?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14966009#comment-14966009
 ] 

Hudson commented on HDFS-3059:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk #2507 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/2507/])
HDFS-3059. ssl-server.xml causes NullPointer. Contributed by Xiao Chen. (wang: 
rev 6c8b6f3646b31a3e028704bc7fd78bf319f89f0a)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/SecondaryNameNode.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestHDFSServerPorts.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSUtil.java


> ssl-server.xml causes NullPointer
> -
>
> Key: HDFS-3059
> URL: https://issues.apache.org/jira/browse/HDFS-3059
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode, security
>Affects Versions: 2.7.1
> Environment: in core-site.xml:
> {code:xml}
>   <property>
>     <name>hadoop.security.authentication</name>
>     <value>kerberos</value>
>   </property>
>   <property>
>     <name>hadoop.security.authorization</name>
>     <value>true</value>
>   </property>
> {code}
> in hdfs-site.xml:
> {code:xml}
>   <property>
>     <name>dfs.https.server.keystore.resource</name>
>     <value>/etc/hadoop/conf/ssl-server.xml</value>
>   </property>
>   <property>
>     <name>dfs.https.enable</name>
>     <value>true</value>
>   </property>
>   <property>
>     ...other security props
>   </property>
> {code}
>Reporter: Evert Lammerts
>Assignee: Xiao Chen
>Priority: Minor
>  Labels: BB2015-05-TBR
> Fix For: 3.0.0
>
> Attachments: HDFS-3059.02.patch, HDFS-3059.03.patch, 
> HDFS-3059.04.patch, HDFS-3059.05.patch, HDFS-3059.06.patch, 
> HDFS-3059.07.patch, HDFS-3059.08.patch, HDFS-3059.patch, HDFS-3059.patch.2
>
>
> If ssl is enabled (dfs.https.enable) but ssl-server.xml is not available, a 
> DN will crash during startup while setting up an SSL socket with a 
> NullPointerException:
> {noformat}12/03/07 17:08:36 DEBUG security.Krb5AndCertsSslSocketConnector: 
> useKerb = false, useCerts = true
> jetty.ssl.password : jetty.ssl.keypassword : 12/03/07 17:08:36 INFO 
> mortbay.log: jetty-6.1.26.cloudera.1
> 12/03/07 17:08:36 INFO mortbay.log: Started 
> selectchannelconnec...@p-worker35.alley.sara.nl:1006
> 12/03/07 17:08:36 DEBUG security.Krb5AndCertsSslSocketConnector: Creating new 
> KrbServerSocket for: 0.0.0.0
> 12/03/07 17:08:36 WARN mortbay.log: java.lang.NullPointerException
> 12/03/07 17:08:36 WARN mortbay.log: failed 
> Krb5AndCertsSslSocketConnector@0.0.0.0:50475: java.io.IOException: 
> !JsseListener: java.lang.NullPointerException
> 12/03/07 17:08:36 WARN mortbay.log: failed Server@604788d5: 
> java.io.IOException: !JsseListener: java.lang.NullPointerException
> 12/03/07 17:08:36 INFO mortbay.log: Stopped 
> Krb5AndCertsSslSocketConnector@0.0.0.0:50475
> 12/03/07 17:08:36 INFO mortbay.log: Stopped 
> selectchannelconnec...@p-worker35.alley.sara.nl:1006
> 12/03/07 17:08:37 INFO datanode.DataNode: Waiting for threadgroup to exit, 
> active threads is 0{noformat}
> The same happens if I set dfs.https.server.keystore.resource to an absolute 
> path to an existing file: in that case the file cannot be found, but not even 
> a WARN is given.
> Since we know the resource named by dfs.https.server.keystore.resource must 
> specify 4 properties (ssl.server.truststore.location, 
> ssl.server.keystore.location, ssl.server.keystore.password, and 
> ssl.server.keystore.keypassword), we should check that they are set and throw 
> an IOException if they are not.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9266) hadoop-hdfs - Avoid unsafe split and append on fields that might be IPv6 literals

2015-10-20 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9266?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14966002#comment-14966002
 ] 

Hadoop QA commented on HDFS-9266:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:red}-1{color} | pre-patch |  16m 56s | Findbugs (version ) appears to 
be broken on HADOOP-11890. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:green}+1{color} | tests included |   0m  0s | The patch appears to 
include 9 new or modified test files. |
| {color:green}+1{color} | javac |   8m 15s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |  10m 19s | There were no new javadoc 
warning messages. |
| {color:red}-1{color} | release audit |   0m 17s | The applied patch generated 
1 release audit warnings. |
| {color:green}+1{color} | checkstyle |   1m 12s | There were no new checkstyle 
issues. |
| {color:green}+1{color} | whitespace |   0m  8s | The patch has no lines that 
end in whitespace. |
| {color:green}+1{color} | install |   1m 40s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 34s | The patch built with 
eclipse:eclipse. |
| {color:red}-1{color} | findbugs |   4m 47s | The patch appears to introduce 8 
new Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | native |   3m 28s | Pre-build of native portion |
| {color:red}-1{color} | hdfs tests |  68m 33s | Tests failed in hadoop-hdfs. |
| {color:green}+1{color} | hdfs tests |   0m 30s | Tests passed in 
hadoop-hdfs-client. |
| | | 116m 45s | |
\\
\\
|| Reason || Tests ||
| FindBugs | module:hadoop-hdfs |
| FindBugs | module:hadoop-hdfs-client |
| Failed unit tests | hadoop.hdfs.server.blockmanagement.TestBlockManager |
|   | hadoop.hdfs.server.blockmanagement.TestNodeCount |
|   | hadoop.hdfs.TestRecoverStripedFile |
|   | hadoop.hdfs.TestReplaceDatanodeOnFailure |
|   | hadoop.hdfs.TestWriteReadStripedFile |
|   | hadoop.hdfs.server.namenode.TestFileTruncate |
|   | hadoop.hdfs.TestRollingUpgrade |
|   | hadoop.hdfs.server.namenode.TestNameNodeRespectsBindHostKeys |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12767673/HDFS-9266-HADOOP-11890.1.patch
 |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | HADOOP-11890 / c84858b |
| Release Audit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/13091/artifact/patchprocess/patchReleaseAuditProblems.txt
 |
| Findbugs warnings | 
https://builds.apache.org/job/PreCommit-HDFS-Build/13091/artifact/patchprocess/newPatchFindbugsWarningshadoop-hdfs.html
 |
| Findbugs warnings | 
https://builds.apache.org/job/PreCommit-HDFS-Build/13091/artifact/patchprocess/newPatchFindbugsWarningshadoop-hdfs-client.html
 |
| hadoop-hdfs test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/13091/artifact/patchprocess/testrun_hadoop-hdfs.txt
 |
| hadoop-hdfs-client test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/13091/artifact/patchprocess/testrun_hadoop-hdfs-client.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/13091/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf905.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/13091/console |


This message was automatically generated.

> hadoop-hdfs - Avoid unsafe split and append on fields that might be IPv6 
> literals
> -
>
> Key: HDFS-9266
> URL: https://issues.apache.org/jira/browse/HDFS-9266
> Project: Hadoop HDFS
>  Issue Type: Task
>Reporter: Nemanja Matkovic
>Assignee: Nemanja Matkovic
>  Labels: ipv6
> Attachments: HDFS-9266-HADOOP-11890.1.patch
>
>   Original Estimate: 48h
>  Remaining Estimate: 48h
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9264) Minor cleanup of operations on FsVolumeList#volumes

2015-10-20 Thread Lei (Eddy) Xu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9264?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14965991#comment-14965991
 ] 

Lei (Eddy) Xu commented on HDFS-9264:
-

Thanks a lot for cleaning up the code. LGTM. +1

> Minor cleanup of operations on FsVolumeList#volumes
> ---
>
> Key: HDFS-9264
> URL: https://issues.apache.org/jira/browse/HDFS-9264
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Walter Su
>Assignee: Walter Su
>Priority: Minor
> Attachments: HDFS-9264.01.patch, HDFS-9264.02.patch
>
>
> We can use {{CopyOnWriteArrayList}} to simplify the operations on 
> {{FsVolumeList#volumes}}
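
For illustration, a minimal sketch of the idea; the field and method names are 
hypothetical, not the actual FsVolumeList members:
{code}
import java.util.concurrent.CopyOnWriteArrayList;

public class VolumeListSketch {
  // Mutations copy the backing array atomically, and iteration sees an
  // immutable snapshot, so no explicit synchronization is needed.
  private final CopyOnWriteArrayList<String> volumes =
      new CopyOnWriteArrayList<>();

  public void addVolume(String volume) {
    volumes.addIfAbsent(volume); // atomic copy-on-write insert
  }

  public void removeVolume(String volume) {
    volumes.remove(volume);
  }

  public int numberOfVolumes() {
    return volumes.size(); // reads never block writers
  }
}
{code}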



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-3059) ssl-server.xml causes NullPointer

2015-10-20 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3059?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14965984#comment-14965984
 ] 

Hudson commented on HDFS-3059:
--

FAILURE: Integrated in Hadoop-Hdfs-trunk-Java8 #519 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/519/])
HDFS-3059. ssl-server.xml causes NullPointer. Contributed by Xiao Chen. (wang: 
rev 6c8b6f3646b31a3e028704bc7fd78bf319f89f0a)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/SecondaryNameNode.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestHDFSServerPorts.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSUtil.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java


> ssl-server.xml causes NullPointer
> -
>
> Key: HDFS-3059
> URL: https://issues.apache.org/jira/browse/HDFS-3059
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode, security
>Affects Versions: 2.7.1
> Environment: in core-site.xml:
> {code:xml}
>   <property>
>     <name>hadoop.security.authentication</name>
>     <value>kerberos</value>
>   </property>
>   <property>
>     <name>hadoop.security.authorization</name>
>     <value>true</value>
>   </property>
> {code}
> in hdfs-site.xml:
> {code:xml}
>   <property>
>     <name>dfs.https.server.keystore.resource</name>
>     <value>/etc/hadoop/conf/ssl-server.xml</value>
>   </property>
>   <property>
>     <name>dfs.https.enable</name>
>     <value>true</value>
>   </property>
>   ...other security props
> {code}
>Reporter: Evert Lammerts
>Assignee: Xiao Chen
>Priority: Minor
>  Labels: BB2015-05-TBR
> Fix For: 3.0.0
>
> Attachments: HDFS-3059.02.patch, HDFS-3059.03.patch, 
> HDFS-3059.04.patch, HDFS-3059.05.patch, HDFS-3059.06.patch, 
> HDFS-3059.07.patch, HDFS-3059.08.patch, HDFS-3059.patch, HDFS-3059.patch.2
>
>
> If ssl is enabled (dfs.https.enable) but ssl-server.xml is not available, a 
> DN will crash during startup while setting up an SSL socket with a 
> NullPointerException:
> {noformat}12/03/07 17:08:36 DEBUG security.Krb5AndCertsSslSocketConnector: 
> useKerb = false, useCerts = true
> jetty.ssl.password : jetty.ssl.keypassword : 12/03/07 17:08:36 INFO 
> mortbay.log: jetty-6.1.26.cloudera.1
> 12/03/07 17:08:36 INFO mortbay.log: Started 
> selectchannelconnec...@p-worker35.alley.sara.nl:1006
> 12/03/07 17:08:36 DEBUG security.Krb5AndCertsSslSocketConnector: Creating new 
> KrbServerSocket for: 0.0.0.0
> 12/03/07 17:08:36 WARN mortbay.log: java.lang.NullPointerException
> 12/03/07 17:08:36 WARN mortbay.log: failed 
> Krb5AndCertsSslSocketConnector@0.0.0.0:50475: java.io.IOException: 
> !JsseListener: java.lang.NullPointerException
> 12/03/07 17:08:36 WARN mortbay.log: failed Server@604788d5: 
> java.io.IOException: !JsseListener: java.lang.NullPointerException
> 12/03/07 17:08:36 INFO mortbay.log: Stopped 
> Krb5AndCertsSslSocketConnector@0.0.0.0:50475
> 12/03/07 17:08:36 INFO mortbay.log: Stopped 
> selectchannelconnec...@p-worker35.alley.sara.nl:1006
> 12/03/07 17:08:37 INFO datanode.DataNode: Waiting for threadgroup to exit, 
> active threads is 0{noformat}
> The same happens if I set an absolute path to an existing 
> dfs.https.server.keystore.resource - in this case the file cannot be found 
> but not even a WARN is given.
> Since in dfs.https.server.keystore.resource we know we need to have 4 
> properties specified (ssl.server.truststore.location, 
> ssl.server.keystore.location, ssl.server.keystore.password, and 
> ssl.server.keystore.keypassword) we should check if they are set and throw an 
> IOException if they are not.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9270) TestShortCircuitLocalRead should not leave socket after unit test

2015-10-20 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9270?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14965983#comment-14965983
 ] 

Hudson commented on HDFS-9270:
--

FAILURE: Integrated in Hadoop-Hdfs-trunk-Java8 #519 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/519/])
HDFS-9270. TestShortCircuitLocalRead should not leave socket after unit 
(cmccabe: rev 6381ddc096699d680233db3b9efff9321528eedc)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/tracing/TestTracingShortCircuitLocalRead.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/shortcircuit/TestShortCircuitLocalRead.java


> TestShortCircuitLocalRead should not leave socket after unit test
> -
>
> Key: HDFS-9270
> URL: https://issues.apache.org/jira/browse/HDFS-9270
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Affects Versions: 2.7.1
>Reporter: Masatake Iwasaki
>Assignee: Masatake Iwasaki
>Priority: Minor
> Fix For: 2.8.0
>
> Attachments: HDFS-9270.001.patch
>
>
> Unix domain sockets created by TestShortCircuitLocalRead and 
> TestTracingShortCircuitLocalRead are not removed before finishing the tests.
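
A minimal sketch of the kind of cleanup the patch is after, assuming the 
socket path is tracked in a test field (names here are hypothetical):
{code}
import java.io.File;
import org.junit.After;

public class SocketCleanupSketch {
  private File sockPath; // set to the unix domain socket file by the test

  @After
  public void cleanupSocket() {
    // Remove the domain socket file so the test leaves nothing behind.
    if (sockPath != null && sockPath.exists()) {
      sockPath.delete();
    }
  }
}
{code}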



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9267) TestDiskError should get stored replicas through FsDatasetTestUtils.

2015-10-20 Thread Lei (Eddy) Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9267?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei (Eddy) Xu updated HDFS-9267:

Attachment: HDFS-9267.01.patch

Rebase to trunk.

> TestDiskError should get stored replicas through FsDatasetTestUtils.
> 
>
> Key: HDFS-9267
> URL: https://issues.apache.org/jira/browse/HDFS-9267
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: test
>Affects Versions: 2.7.1
>Reporter: Lei (Eddy) Xu
>Assignee: Lei (Eddy) Xu
>Priority: Minor
> Attachments: HDFS-9267.00.patch, HDFS-9267.01.patch
>
>
> {{TestDiskError#testReplicationError}} scans local directories to verify 
> blocks and metadata files, which leaks the details of the {{FsDataset}} 
> implementation. 
> This JIRA will abstract the "scanning" operation to {{FsDatasetTestUtils}}.
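
A rough sketch of what such an abstraction could look like; the method name 
getStoredReplicas is hypothetical, not the actual FsDatasetTestUtils API:
{code}
// Tests enumerate replicas through the utils instead of scanning the
// data directories themselves, so the on-disk layout stays encapsulated.
public interface FsDatasetTestUtilsSketch {
  /** List identifiers of all replicas stored for the given block pool. */
  Iterable<String> getStoredReplicas(String blockPoolId);
}
{code}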



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9229) Expose size of NameNode directory as a metric

2015-10-20 Thread Zhe Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9229?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14965963#comment-14965963
 ] 

Zhe Zhang commented on HDFS-9229:
-

Thanks for the work Surendra! A few quick comments:

{code}
+  public String getNNDirectorySize() {
+Map storageTypeMap = new HashMap();
{code}
With Java 7 we don't need to specify the type arguments on the right-hand side 
here (the diamond operator).

{code}
+public long getDirecorySize() {
+  if (!isShared && root != null) {
+return org.apache.commons.io.FileUtils.sizeOfDirectory(root);
{code}
Maybe we should import {{org.apache.commons.io.FileUtils}}?

Another suggestion is to enhance the unit test to actually verify the size 
matches, instead of just verifying it's positive.
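
Put together, the two code comments above might look roughly like the sketch 
below; the map's type arguments and the surrounding field are assumptions, not 
the patch's actual code:
{code}
import java.io.File;
import java.util.HashMap;
import java.util.Map;
import org.apache.commons.io.FileUtils; // imported once, used unqualified

public class NNDirSizeSketch {
  private final File root = new File("/tmp/name"); // hypothetical NN dir

  public String getNNDirectorySize() {
    // Java 7 diamond operator: type arguments only on the left-hand side.
    Map<String, Long> directorySizeMap = new HashMap<>();
    directorySizeMap.put(root.getPath(), FileUtils.sizeOfDirectory(root));
    return directorySizeMap.toString();
  }
}
{code}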

> Expose size of NameNode directory as a metric
> -
>
> Key: HDFS-9229
> URL: https://issues.apache.org/jira/browse/HDFS-9229
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Affects Versions: 2.7.1
>Reporter: Zhe Zhang
>Assignee: Surendra Singh Lilhore
>Priority: Minor
> Attachments: HDFS-9229.001.patch, HDFS-9229.002.patch
>
>
> Useful for admins in reserving / managing NN local file system space. Also 
> useful when transferring NN backups.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-3059) ssl-server.xml causes NullPointer

2015-10-20 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3059?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14965948#comment-14965948
 ] 

Hudson commented on HDFS-3059:
--

SUCCESS: Integrated in Hadoop-Yarn-trunk #1294 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/1294/])
HDFS-3059. ssl-server.xml causes NullPointer. Contributed by Xiao Chen. (wang: 
rev 6c8b6f3646b31a3e028704bc7fd78bf319f89f0a)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSUtil.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/SecondaryNameNode.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestHDFSServerPorts.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java


> ssl-server.xml causes NullPointer
> -
>
> Key: HDFS-3059
> URL: https://issues.apache.org/jira/browse/HDFS-3059
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode, security
>Affects Versions: 2.7.1
> Environment: in core-site.xml:
> {code:xml}
>   <property>
>     <name>hadoop.security.authentication</name>
>     <value>kerberos</value>
>   </property>
>   <property>
>     <name>hadoop.security.authorization</name>
>     <value>true</value>
>   </property>
> {code}
> in hdfs-site.xml:
> {code:xml}
>   <property>
>     <name>dfs.https.server.keystore.resource</name>
>     <value>/etc/hadoop/conf/ssl-server.xml</value>
>   </property>
>   <property>
>     <name>dfs.https.enable</name>
>     <value>true</value>
>   </property>
>   ...other security props
> {code}
>Reporter: Evert Lammerts
>Assignee: Xiao Chen
>Priority: Minor
>  Labels: BB2015-05-TBR
> Fix For: 3.0.0
>
> Attachments: HDFS-3059.02.patch, HDFS-3059.03.patch, 
> HDFS-3059.04.patch, HDFS-3059.05.patch, HDFS-3059.06.patch, 
> HDFS-3059.07.patch, HDFS-3059.08.patch, HDFS-3059.patch, HDFS-3059.patch.2
>
>
> If ssl is enabled (dfs.https.enable) but ssl-server.xml is not available, a 
> DN will crash during startup while setting up an SSL socket with a 
> NullPointerException:
> {noformat}12/03/07 17:08:36 DEBUG security.Krb5AndCertsSslSocketConnector: 
> useKerb = false, useCerts = true
> jetty.ssl.password : jetty.ssl.keypassword : 12/03/07 17:08:36 INFO 
> mortbay.log: jetty-6.1.26.cloudera.1
> 12/03/07 17:08:36 INFO mortbay.log: Started 
> selectchannelconnec...@p-worker35.alley.sara.nl:1006
> 12/03/07 17:08:36 DEBUG security.Krb5AndCertsSslSocketConnector: Creating new 
> KrbServerSocket for: 0.0.0.0
> 12/03/07 17:08:36 WARN mortbay.log: java.lang.NullPointerException
> 12/03/07 17:08:36 WARN mortbay.log: failed 
> Krb5AndCertsSslSocketConnector@0.0.0.0:50475: java.io.IOException: 
> !JsseListener: java.lang.NullPointerException
> 12/03/07 17:08:36 WARN mortbay.log: failed Server@604788d5: 
> java.io.IOException: !JsseListener: java.lang.NullPointerException
> 12/03/07 17:08:36 INFO mortbay.log: Stopped 
> Krb5AndCertsSslSocketConnector@0.0.0.0:50475
> 12/03/07 17:08:36 INFO mortbay.log: Stopped 
> selectchannelconnec...@p-worker35.alley.sara.nl:1006
> 12/03/07 17:08:37 INFO datanode.DataNode: Waiting for threadgroup to exit, 
> active threads is 0{noformat}
> The same happens if I set an absolute path to an existing 
> dfs.https.server.keystore.resource - in this case the file cannot be found 
> but not even a WARN is given.
> Since in dfs.https.server.keystore.resource we know we need to have 4 
> properties specified (ssl.server.truststore.location, 
> ssl.server.keystore.location, ssl.server.keystore.password, and 
> ssl.server.keystore.keypassword) we should check if they are set and throw an 
> IOException if they are not.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9265) Use of undefined behavior in remote_block_reader causing deterministic crashes.

2015-10-20 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9265?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14965947#comment-14965947
 ] 

Hadoop QA commented on HDFS-9265:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | pre-patch |   5m 48s | Pre-patch HDFS-8707 compilation 
is healthy. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:red}-1{color} | tests included |   0m  0s | The patch doesn't appear 
to include any new or modified tests.  Please justify why no new tests are 
needed for this patch. Also please list what manual steps were performed to 
verify this patch. |
| {color:red}-1{color} | javac |   1m 25s | The patch appears to cause the 
build to fail. |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12767678/HDFS-9265.HDFS-8707.000.patch
 |
| Optional Tests | javac unit |
| git revision | HDFS-8707 / ea310d7 |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/13092/console |


This message was automatically generated.

> Use of undefined behavior in remote_block_reader causing deterministic 
> crashes.
> ---
>
> Key: HDFS-9265
> URL: https://issues.apache.org/jira/browse/HDFS-9265
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: James Clampffer
>Assignee: James Clampffer
>Priority: Blocker
> Attachments: HDFS-9265.HDFS-8707.000.patch
>
>
> The remote block reader relies on undefined behavior in how it uses 
> enable_shared_from_this.
> http://en.cppreference.com/w/cpp/memory/enable_shared_from_this
> The spec states that a shared_ptr to an object inheriting from 
> enable_shared_from_this must already be live before calling shared_from_this.  
> Calling shared_from_this without an existing shared_ptr is undefined 
> behavior and causes deterministic crashes when the code is built with GCC.
> example:
> class foo : public enable_shared_from_this<foo> {/*bar*/};
> safe:
> auto ptr1 = std::make_shared<foo>();
> auto ptr2 = ptr1->shared_from_this();
> broken:
> foo *ptr = new foo();
> auto ptr2 = ptr->shared_from_this(); //no existing live shared_ptr
> To fix this, the input stream should call std::make_shared and hang onto a 
> shared_ptr to the block reader.  The block reader will then be free to call 
> shared_from_this as much as it wants without issue.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9255) Consolidate block recovery related implementation into a single class

2015-10-20 Thread Zhe Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9255?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14965937#comment-14965937
 ] 

Zhe Zhang commented on HDFS-9255:
-

Thanks Walter for the work. The 04 patch looks good overall. Some minor 
comments:
# {{Datanode#recoverBlocks}} doesn't need a return value. We can also consider 
replacing the method with a getter for {{blockRecoveryWorker}} and calling 
{{getBlockRecoveryWorker().recoverBlocks}}.
# The structure of {{BlockRecoveryWorker}} can also be simplified. Following 
the example of {{ErasureCodingWorker}}, we can make {{RecoveryTaskContiguous}} 
itself a daemon. But since this is just a refactoring JIRA, we can address this 
issue and the previous one in a separate JIRA.
# Since we are explicitly naming {{RecoveryTaskContiguous}}, should we also 
throw an unsupported-operation exception for striped blocks (see the sketch 
after these comments)?
{code}
+for(RecoveringBlock b : blocks) {
+  try {
+logRecoverBlock(who, b);
+RecoveryTaskContiguous task = new RecoveryTaskContiguous(b);
+task.recover();
{code}
# I guess the changes in {{PBHelper}}, {{DatanodeManager}}, and 
{{FSNamesystem}} are all misc optimizations not directly related to this 
refactor? The {{PBHelper}} and {{DatanodeManager}} changes LGTM, but I haven't 
fully reviewed {{FSNamesystem}}; it looks like a decent-sized optimization by 
itself. How about doing it separately?
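
As a sketch of comments 1 and 3 (all names below are hypothetical stand-ins, 
not the patch's actual signatures):
{code}
public class BlockRecoverySketch {
  // Comment 1: the DataNode exposes the worker through a getter, so callers
  // invoke getBlockRecoveryWorker().recoverBlocks(...) instead of relying on
  // a return value from DataNode#recoverBlocks.
  private final Worker blockRecoveryWorker = new Worker();

  public Worker getBlockRecoveryWorker() {
    return blockRecoveryWorker;
  }

  static class Worker {
    // Comment 3: reject striped blocks explicitly until they are supported.
    void recoverBlocks(Iterable<Object> blocks) {
      for (Object block : blocks) {
        if (isStriped(block)) {
          throw new UnsupportedOperationException(
              "Recovery of striped blocks is not supported yet");
        }
        // ... build a RecoveryTaskContiguous for the block and run it ...
      }
    }

    private static boolean isStriped(Object block) {
      return false; // placeholder; the real check inspects the block type
    }
  }
}
{code}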

> Consolidate block recovery related implementation into a single class
> -
>
> Key: HDFS-9255
> URL: https://issues.apache.org/jira/browse/HDFS-9255
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Reporter: Walter Su
>Assignee: Walter Su
>Priority: Minor
> Attachments: HDFS-9255.01.patch, HDFS-9255.02.patch, 
> HDFS-9255.03.patch, HDFS-9255.04.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9265) Use of undefined behavior in remote_block_reader causing deterministic crashes.

2015-10-20 Thread James Clampffer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9265?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Clampffer updated HDFS-9265:
--
Attachment: HDFS-9265.HDFS-8707.000.patch

Simple fix.  As far as I can tell it won't cause any cycles.

Now the block reader will only have one shared_ptr referencing it instead of a 
shared_ptr and unique_ptr fighting over who'd call the destructor first.

> Use of undefined behavior in remote_block_reader causing deterministic 
> crashes.
> ---
>
> Key: HDFS-9265
> URL: https://issues.apache.org/jira/browse/HDFS-9265
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: James Clampffer
>Assignee: James Clampffer
>Priority: Blocker
> Attachments: HDFS-9265.HDFS-8707.000.patch
>
>
> The remote block reader relies on undefined behavior in how it uses 
> enable_shared_from_this.
> http://en.cppreference.com/w/cpp/memory/enable_shared_from_this
> The spec states that a shared_ptr to an object inheriting from 
> enable_shared_from_this must already be live before calling shared_from_this.  
> Calling shared_from_this without an existing shared_ptr is undefined 
> behavior and causes deterministic crashes when the code is built with GCC.
> example:
> class foo : public enable_shared_from_this<foo> {/*bar*/};
> safe:
> auto ptr1 = std::make_shared<foo>();
> auto ptr2 = ptr1->shared_from_this();
> broken:
> foo *ptr = new foo();
> auto ptr2 = ptr->shared_from_this(); //no existing live shared_ptr
> To fix this, the input stream should call std::make_shared and hang onto a 
> shared_ptr to the block reader.  The block reader will then be free to call 
> shared_from_this as much as it wants without issue.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9265) Use of undefined behavior in remote_block_reader causing deterministic crashes.

2015-10-20 Thread James Clampffer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9265?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Clampffer updated HDFS-9265:
--
Status: Patch Available  (was: Open)

> Use of undefined behavior in remote_block_reader causing deterministic 
> crashes.
> ---
>
> Key: HDFS-9265
> URL: https://issues.apache.org/jira/browse/HDFS-9265
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: James Clampffer
>Assignee: James Clampffer
>Priority: Blocker
> Attachments: HDFS-9265.HDFS-8707.000.patch
>
>
> The remote block reader relies on undefined behavior in how it uses 
> enable_shared_from_this.
> http://en.cppreference.com/w/cpp/memory/enable_shared_from_this
> The spec states that a shared_ptr to an object inheriting from 
> enable_shared_from_this must already be live before calling shared_from_this.  
> Calling shared_from_this without an existing shared_ptr is undefined 
> behavior and causes deterministic crashes when the code is built with GCC.
> example:
> class foo : public enable_shared_from_this<foo> {/*bar*/};
> safe:
> auto ptr1 = std::make_shared<foo>();
> auto ptr2 = ptr1->shared_from_this();
> broken:
> foo *ptr = new foo();
> auto ptr2 = ptr->shared_from_this(); //no existing live shared_ptr
> To fix this, the input stream should call std::make_shared and hang onto a 
> shared_ptr to the block reader.  The block reader will then be free to call 
> shared_from_this as much as it wants without issue.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (HDFS-9265) Use of undefined behavior in remote_block_reader causing deterministic crashes.

2015-10-20 Thread James Clampffer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9265?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Clampffer reassigned HDFS-9265:
-

Assignee: James Clampffer

> Use of undefined behavior in remote_block_reader causing deterministic 
> crashes.
> ---
>
> Key: HDFS-9265
> URL: https://issues.apache.org/jira/browse/HDFS-9265
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: James Clampffer
>Assignee: James Clampffer
>Priority: Blocker
>
> The remote block reader relies on undefined behavior in how it uses 
> enable_shared_from_this.
> http://en.cppreference.com/w/cpp/memory/enable_shared_from_this
> The spec states that a shared_ptr to an object inheriting from 
> enable_shared_from_this must already be live before calling shared_from_this.  
> Calling shared_from_this without an existing shared_ptr is undefined 
> behavior and causes deterministic crashes when the code is built with GCC.
> example:
> class foo : public enable_shared_from_this<foo> {/*bar*/};
> safe:
> auto ptr1 = std::make_shared<foo>();
> auto ptr2 = ptr1->shared_from_this();
> broken:
> foo *ptr = new foo();
> auto ptr2 = ptr->shared_from_this(); //no existing live shared_ptr
> To fix this, the input stream should call std::make_shared and hang onto a 
> shared_ptr to the block reader.  The block reader will then be free to call 
> shared_from_this as much as it wants without issue.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9265) Use of undefined behavior in remote_block_reader causing deterministic crashes.

2015-10-20 Thread James Clampffer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9265?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Clampffer updated HDFS-9265:
--
Description: 
The remote block reader relies on undefined behavior in how it uses 
enable_shared_from_this.

http://en.cppreference.com/w/cpp/memory/enable_shared_from_this

The spec states that a shared_ptr to an object inheriting from 
enable_shared_from_this must already be live before calling shared_from_this.  
Calling shared_from_this without an existing shared_ptr is undefined 
behavior and causes deterministic crashes when the code is built with GCC.

example:
class foo : public enable_shared_from_this<foo> {/*bar*/};

safe:
auto ptr1 = std::make_shared<foo>();
auto ptr2 = ptr1->shared_from_this();

broken:
foo *ptr = new foo();
auto ptr2 = ptr->shared_from_this(); //no existing live shared_ptr

To fix this, the input stream should call std::make_shared and hang onto a 
shared_ptr to the block reader.  The block reader will then be free to call 
shared_from_this as much as it wants without issue.


  was:
The remote block reader relies on undefined behavior in how it uses 
enable_shared_from_this.

http://en.cppreference.com/w/cpp/memory/enable_shared_from_this

The spec states that a shared_ptr to an object inheriting from 
enable_shared_from_this must already be live before calling shared_from_this.  
Calling shared_from_this without an existing shared_ptr is undefined 
behavior and causes deterministic crashes when the code is built with GCC.

example:
class foo : public enable_shared_from_this<foo> {/*bar*/};

safe:
auto ptr1 = std::make_shared<foo>();
auto ptr2 = ptr1->shared_from_this();

broken:
foo *ptr = new foo();
auto ptr2 = ptr->shared_from_this(); //no existing live shared_ptr

To fix this, the input stream should call std::make_shared and hang onto a 
shared_ptr rather than a unique_ptr to the block reader.  The block reader will 
then be free to call shared_from_this as much as it wants without issue.

I think this will fix some double deletes and pure virtual method call 
exceptions as a bonus.  Both the shared_ptr and the unique_ptr think they own 
the reader; whichever calls delete on the reader second will either segfault if 
the memory has been reused, or end up calling a pure virtual dtor which throws 
(because after the first dtor executes the vptr points to the base class vtable 
and gcc stubs those in to throw).



> Use of undefined behavior in remote_block_reader causing deterministic 
> crashes.
> ---
>
> Key: HDFS-9265
> URL: https://issues.apache.org/jira/browse/HDFS-9265
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: James Clampffer
>Priority: Blocker
>
> The remote block reader relies on undefined behavior in how it uses 
> enable_shared_from_this.
> http://en.cppreference.com/w/cpp/memory/enable_shared_from_this
> The spec states that a shared_ptr to an object inheriting from 
> enable_shared_from_this must already be live before calling shared_from_this.  
> Calling shared_from_this without an existing shared_ptr is undefined 
> behavior and causes deterministic crashes when the code is built with GCC.
> example:
> class foo : public enable_shared_from_this<foo> {/*bar*/};
> safe:
> auto ptr1 = std::make_shared<foo>();
> auto ptr2 = ptr1->shared_from_this();
> broken:
> foo *ptr = new foo();
> auto ptr2 = ptr->shared_from_this(); //no existing live shared_ptr
> To fix this, the input stream should call std::make_shared and hang onto a 
> shared_ptr to the block reader.  The block reader will then be free to call 
> shared_from_this as much as it wants without issue.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9251) Refactor TestWriteToReplica and TestFsDatasetImpl to avoid explicitly creating Files in tests code.

2015-10-20 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9251?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14965831#comment-14965831
 ] 

Hudson commented on HDFS-9251:
--

FAILURE: Integrated in Hadoop-Hdfs-trunk #2455 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/2455/])
HDFS-9251. Refactor TestWriteToReplica and TestFsDatasetImpl to avoid (lei: rev 
71e533a153cbe547c99d2bc18c4cd8b7da9b00b7)
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/TestWriteToReplica.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/FsDatasetTestUtils.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsDatasetImplTestUtils.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/TestFsDatasetImpl.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


> Refactor TestWriteToReplica and TestFsDatasetImpl to avoid explicitly 
> creating Files in tests code.
> ---
>
> Key: HDFS-9251
> URL: https://issues.apache.org/jira/browse/HDFS-9251
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: HDFS
>Affects Versions: 2.7.1
>Reporter: Lei (Eddy) Xu
>Assignee: Lei (Eddy) Xu
> Fix For: 3.0.0, 2.8.0
>
> Attachments: HDFS-9251.00.patch, HDFS-9251.01.patch, 
> HDFS-9251.02.patch
>
>
> In {{TestWriteToReplica}} and {{TestFsDatasetImpl}}, tests directly create 
> block and metadata files:
> {code}
> replicaInfo.getBlockFile().createNewFile();
> replicaInfo.getMetaFile().createNewFile();
> {code}
> It leaks the implementation details of {{FsDatasetImpl}}. This JIRA proposes 
> to use {{FsDatasetImplTestUtils}} (HDFS-9188) to create replicas. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9252) Change TestFileTruncate to use FsDatasetTestUtils to get block file size and genstamp.

2015-10-20 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9252?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14965828#comment-14965828
 ] 

Hadoop QA commented on HDFS-9252:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:red}-1{color} | pre-patch |  18m 19s | Pre-patch trunk has 1 extant 
Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:green}+1{color} | tests included |   0m  0s | The patch appears to 
include 3 new or modified test files. |
| {color:green}+1{color} | javac |   8m  7s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |  10m 36s | There were no new javadoc 
warning messages. |
| {color:green}+1{color} | release audit |   0m 24s | The applied patch does 
not increase the total number of release audit warnings. |
| {color:green}+1{color} | checkstyle |   1m  5s | There were no new checkstyle 
issues. |
| {color:green}+1{color} | whitespace |   0m  0s | The patch has no lines that 
end in whitespace. |
| {color:green}+1{color} | install |   1m 41s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 32s | The patch built with 
eclipse:eclipse. |
| {color:green}+1{color} | findbugs |   2m 33s | The patch does not introduce 
any new Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | native |   3m 17s | Pre-build of native portion |
| {color:red}-1{color} | hdfs tests |  62m 23s | Tests failed in hadoop-hdfs. |
| | | 109m  1s | |
\\
\\
|| Reason || Tests ||
| Failed unit tests | hadoop.hdfs.server.blockmanagement.TestNodeCount |
|   | hadoop.hdfs.TestDFSShell |
|   | hadoop.hdfs.server.mover.TestStorageMover |
|   | hadoop.hdfs.server.namenode.TestNameNodeRetryCacheMetrics |
|   | hadoop.hdfs.server.blockmanagement.TestUnderReplicatedBlocks |
|   | hadoop.hdfs.server.namenode.ha.TestRequestHedgingProxyProvider |
|   | hadoop.hdfs.TestParallelRead |
|   | hadoop.hdfs.server.namenode.TestListCorruptFileBlocks |
|   | hadoop.hdfs.server.namenode.TestFileTruncate |
|   | hadoop.hdfs.TestLeaseRecovery2 |
|   | hadoop.hdfs.server.namenode.TestINodeFile |
| Timed out tests | 
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistPolicy |
|   | org.apache.hadoop.hdfs.server.blockmanagement.TestDatanodeManager |
|   | org.apache.hadoop.hdfs.server.balancer.TestBalancerWithEncryptedTransfer |
|   | org.apache.hadoop.hdfs.server.balancer.TestBalancerWithSaslDataTransfer |
|   | org.apache.hadoop.hdfs.server.balancer.TestBalancer |
|   | 
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistFiles |
|   | 
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestSpaceReservation |
|   | 
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistReplicaPlacement
 |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12767632/HDFS-9252.02.patch |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / 6381ddc |
| Pre-patch Findbugs warnings | 
https://builds.apache.org/job/PreCommit-HDFS-Build/13089/artifact/patchprocess/trunkFindbugsWarningshadoop-hdfs.html
 |
| hadoop-hdfs test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/13089/artifact/patchprocess/testrun_hadoop-hdfs.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/13089/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf905.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/13089/console |


This message was automatically generated.

> Change TestFileTruncate to use FsDatasetTestUtils to get block file size and 
> genstamp.
> --
>
> Key: HDFS-9252
> URL: https://issues.apache.org/jira/browse/HDFS-9252
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 2.7.1
>Reporter: Lei (Eddy) Xu
>Assignee: Lei (Eddy) Xu
> Attachments: HDFS-9252.00.patch, HDFS-9252.01.patch, 
> HDFS-9252.02.patch
>
>
> {{TestFileTruncate}} verifies block size and genstamp by directly accessing 
> the local filesystem, e.g.:
> {code}
> assertTrue(cluster.getBlockMetadataFile(dn0,
>     newBlock.getBlock()).getName().endsWith(
>         newBlock.getBlock().getGenerationStamp() + ".meta"));
> {code}
> Let's abstract the fsdataset-specific logic behind FsDatasetTestUtils.
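
For illustration, the verification could then read roughly like the sketch 
below; {{getStoredGenerationStamp}} is a hypothetical accessor, not the actual 
FsDatasetTestUtils method:
{code}
import static org.junit.Assert.assertEquals;

public class GenStampCheckSketch {
  interface TestUtils {
    long getStoredGenerationStamp(long blockId); // hypothetical accessor
  }

  // Compare the expected genstamp against what the dataset reports,
  // without parsing the on-disk meta file name.
  void verifyGenStamp(TestUtils utils, long blockId, long expectedGenStamp) {
    assertEquals(expectedGenStamp, utils.getStoredGenerationStamp(blockId));
  }
}
{code}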



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9272) Implement a unix-like cat utility

2015-10-20 Thread James Clampffer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9272?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14965826#comment-14965826
 ] 

James Clampffer commented on HDFS-9272:
---

Thanks for the explanation! I didn't know that was already an issue with the 
current java 'hadoop fs' implementation.

In that case I'll remove this once the integration tests are in place.

> Implement a unix-like cat utility
> -
>
> Key: HDFS-9272
> URL: https://issues.apache.org/jira/browse/HDFS-9272
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: James Clampffer
>Assignee: James Clampffer
>Priority: Minor
> Attachments: HDFS-9272.HDFS-8707.000.patch
>
>
> Implement the basic functionality of "cat" and have it build as a separate 
> executable.
> Two reasons for this:
> We don't have any real integration tests at the moment so something simple to 
> verify that the library actually works against a real cluster is useful.
> Eventually I'll make more utilities like stat, mkdir etc.  Once there are 
> enough of them it will be simple to make a C++ implementation of the hadoop 
> fs command line interface that doesn't take the latency hit of spinning up a 
> JVM.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-3059) ssl-server.xml causes NullPointer

2015-10-20 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3059?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14965815#comment-14965815
 ] 

Hudson commented on HDFS-3059:
--

FAILURE: Integrated in Hadoop-Yarn-trunk-Java8 #574 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk-Java8/574/])
HDFS-3059. ssl-server.xml causes NullPointer. Contributed by Xiao Chen. (wang: 
rev 6c8b6f3646b31a3e028704bc7fd78bf319f89f0a)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSUtil.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestHDFSServerPorts.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/SecondaryNameNode.java


> ssl-server.xml causes NullPointer
> -
>
> Key: HDFS-3059
> URL: https://issues.apache.org/jira/browse/HDFS-3059
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode, security
>Affects Versions: 2.7.1
> Environment: in core-site.xml:
> {code:xml}
>   <property>
>     <name>hadoop.security.authentication</name>
>     <value>kerberos</value>
>   </property>
>   <property>
>     <name>hadoop.security.authorization</name>
>     <value>true</value>
>   </property>
> {code}
> in hdfs-site.xml:
> {code:xml}
>   <property>
>     <name>dfs.https.server.keystore.resource</name>
>     <value>/etc/hadoop/conf/ssl-server.xml</value>
>   </property>
>   <property>
>     <name>dfs.https.enable</name>
>     <value>true</value>
>   </property>
>   ...other security props
> {code}
>Reporter: Evert Lammerts
>Assignee: Xiao Chen
>Priority: Minor
>  Labels: BB2015-05-TBR
> Fix For: 3.0.0
>
> Attachments: HDFS-3059.02.patch, HDFS-3059.03.patch, 
> HDFS-3059.04.patch, HDFS-3059.05.patch, HDFS-3059.06.patch, 
> HDFS-3059.07.patch, HDFS-3059.08.patch, HDFS-3059.patch, HDFS-3059.patch.2
>
>
> If ssl is enabled (dfs.https.enable) but ssl-server.xml is not available, a 
> DN will crash during startup while setting up an SSL socket with a 
> NullPointerException:
> {noformat}12/03/07 17:08:36 DEBUG security.Krb5AndCertsSslSocketConnector: 
> useKerb = false, useCerts = true
> jetty.ssl.password : jetty.ssl.keypassword : 12/03/07 17:08:36 INFO 
> mortbay.log: jetty-6.1.26.cloudera.1
> 12/03/07 17:08:36 INFO mortbay.log: Started 
> selectchannelconnec...@p-worker35.alley.sara.nl:1006
> 12/03/07 17:08:36 DEBUG security.Krb5AndCertsSslSocketConnector: Creating new 
> KrbServerSocket for: 0.0.0.0
> 12/03/07 17:08:36 WARN mortbay.log: java.lang.NullPointerException
> 12/03/07 17:08:36 WARN mortbay.log: failed 
> Krb5AndCertsSslSocketConnector@0.0.0.0:50475: java.io.IOException: 
> !JsseListener: java.lang.NullPointerException
> 12/03/07 17:08:36 WARN mortbay.log: failed Server@604788d5: 
> java.io.IOException: !JsseListener: java.lang.NullPointerException
> 12/03/07 17:08:36 INFO mortbay.log: Stopped 
> Krb5AndCertsSslSocketConnector@0.0.0.0:50475
> 12/03/07 17:08:36 INFO mortbay.log: Stopped 
> selectchannelconnec...@p-worker35.alley.sara.nl:1006
> 12/03/07 17:08:37 INFO datanode.DataNode: Waiting for threadgroup to exit, 
> active threads is 0{noformat}
> The same happens if I set an absolute path to an existing 
> dfs.https.server.keystore.resource - in this case the file cannot be found 
> but not even a WARN is given.
> Since in dfs.https.server.keystore.resource we know we need to have 4 
> properties specified (ssl.server.truststore.location, 
> ssl.server.keystore.location, ssl.server.keystore.password, and 
> ssl.server.keystore.keypassword) we should check if they are set and throw an 
> IOException if they are not.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9272) Implement a unix-like cat utility

2015-10-20 Thread James Clampffer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9272?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Clampffer updated HDFS-9272:
--
Attachment: HDFS-9272.HDFS-8707.000.patch

Built on top of the HDFS-8766 patch that hasn't landed yet.  If any segfaults 
pop up they are most likely because of HDFS-9265.  A temporary fix is to change 
shared_from_this to make_shared in remote_block_reader_impl.h.

The stuff about tPort in the diff is due to diffing against a broken copy of 
8766.  Once that's refactored and/or landed I'll make a new patch and add it 
here.

> Implement a unix-like cat utility
> -
>
> Key: HDFS-9272
> URL: https://issues.apache.org/jira/browse/HDFS-9272
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: James Clampffer
>Assignee: James Clampffer
>Priority: Minor
> Attachments: HDFS-9272.HDFS-8707.000.patch
>
>
> Implement the basic functionality of "cat" and have it build as a separate 
> executable.
> Two reasons for this:
> We don't have any real integration tests at the moment so something simple to 
> verify that the library actually works against a real cluster is useful.
> Eventually I'll make more utilities like stat, mkdir etc.  Once there are 
> enough of them it will be simple to make a C++ implementation of the hadoop 
> fs command line interface that doesn't take the latency hit of spinning up a 
> JVM.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-3059) ssl-server.xml causes NullPointer

2015-10-20 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3059?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14965805#comment-14965805
 ] 

Hudson commented on HDFS-3059:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk-Java8 #559 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Java8/559/])
HDFS-3059. ssl-server.xml causes NullPointer. Contributed by Xiao Chen. (wang: 
rev 6c8b6f3646b31a3e028704bc7fd78bf319f89f0a)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/SecondaryNameNode.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSUtil.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestHDFSServerPorts.java


> ssl-server.xml causes NullPointer
> -
>
> Key: HDFS-3059
> URL: https://issues.apache.org/jira/browse/HDFS-3059
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode, security
>Affects Versions: 2.7.1
> Environment: in core-site.xml:
> {code:xml}
>   <property>
>     <name>hadoop.security.authentication</name>
>     <value>kerberos</value>
>   </property>
>   <property>
>     <name>hadoop.security.authorization</name>
>     <value>true</value>
>   </property>
> {code}
> in hdfs-site.xml:
> {code:xml}
>   <property>
>     <name>dfs.https.server.keystore.resource</name>
>     <value>/etc/hadoop/conf/ssl-server.xml</value>
>   </property>
>   <property>
>     <name>dfs.https.enable</name>
>     <value>true</value>
>   </property>
>   ...other security props
> {code}
>Reporter: Evert Lammerts
>Assignee: Xiao Chen
>Priority: Minor
>  Labels: BB2015-05-TBR
> Fix For: 3.0.0
>
> Attachments: HDFS-3059.02.patch, HDFS-3059.03.patch, 
> HDFS-3059.04.patch, HDFS-3059.05.patch, HDFS-3059.06.patch, 
> HDFS-3059.07.patch, HDFS-3059.08.patch, HDFS-3059.patch, HDFS-3059.patch.2
>
>
> If ssl is enabled (dfs.https.enable) but ssl-server.xml is not available, a 
> DN will crash during startup while setting up an SSL socket with a 
> NullPointerException:
> {noformat}12/03/07 17:08:36 DEBUG security.Krb5AndCertsSslSocketConnector: 
> useKerb = false, useCerts = true
> jetty.ssl.password : jetty.ssl.keypassword : 12/03/07 17:08:36 INFO 
> mortbay.log: jetty-6.1.26.cloudera.1
> 12/03/07 17:08:36 INFO mortbay.log: Started 
> selectchannelconnec...@p-worker35.alley.sara.nl:1006
> 12/03/07 17:08:36 DEBUG security.Krb5AndCertsSslSocketConnector: Creating new 
> KrbServerSocket for: 0.0.0.0
> 12/03/07 17:08:36 WARN mortbay.log: java.lang.NullPointerException
> 12/03/07 17:08:36 WARN mortbay.log: failed 
> Krb5AndCertsSslSocketConnector@0.0.0.0:50475: java.io.IOException: 
> !JsseListener: java.lang.NullPointerException
> 12/03/07 17:08:36 WARN mortbay.log: failed Server@604788d5: 
> java.io.IOException: !JsseListener: java.lang.NullPointerException
> 12/03/07 17:08:36 INFO mortbay.log: Stopped 
> Krb5AndCertsSslSocketConnector@0.0.0.0:50475
> 12/03/07 17:08:36 INFO mortbay.log: Stopped 
> selectchannelconnec...@p-worker35.alley.sara.nl:1006
> 12/03/07 17:08:37 INFO datanode.DataNode: Waiting for threadgroup to exit, 
> active threads is 0{noformat}
> The same happens if I set an absolute path to an existing 
> dfs.https.server.keystore.resource - in this case the file cannot be found 
> but not even a WARN is given.
> Since in dfs.https.server.keystore.resource we know we need to have 4 
> properties specified (ssl.server.truststore.location, 
> ssl.server.keystore.location, ssl.server.keystore.password, and 
> ssl.server.keystore.keypassword) we should check if they are set and throw an 
> IOException if they are not.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9266) hadoop-hdfs - Avoid unsafe split and append on fields that might be IPv6 literals

2015-10-20 Thread Nemanja Matkovic (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9266?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nemanja Matkovic updated HDFS-9266:
---
Status: Patch Available  (was: Open)

> hadoop-hdfs - Avoid unsafe split and append on fields that might be IPv6 
> literals
> -
>
> Key: HDFS-9266
> URL: https://issues.apache.org/jira/browse/HDFS-9266
> Project: Hadoop HDFS
>  Issue Type: Task
>Reporter: Nemanja Matkovic
>Assignee: Nemanja Matkovic
>  Labels: ipv6
> Attachments: HDFS-9266-HADOOP-11890.1.patch
>
>   Original Estimate: 48h
>  Remaining Estimate: 48h
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9272) Implement a unix-like cat utility

2015-10-20 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9272?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14965797#comment-14965797
 ] 

Allen Wittenauer commented on HDFS-9272:


bq.  Is the issue that if/when the RPC protocol changes older clients wouldn't 
be aware?

Yes, this is exactly the problem.  You want higher levels of client 
compatibility than what Hadoop RPC provides, since clients typically live 
outside the control of the teams that run the servers.

bq. I'm going to put up a patch for now just because we don't have anything 
capable of testing libhdfs++ against a real cluster at the moment; Haohui's 
working on fixing that. A couple people are ramping up on this project so in 
the short term I think being able to say cat ran fine under valgrind or asan 
after changes offers some protection against regressions.

I think that's reasonable.  I just wouldn't want it exposed to users given the 
fragility of RPC.  (We already have these problems with 'hadoop fs' in its HDFS 
form.)

> Implement a unix-like cat utility
> -
>
> Key: HDFS-9272
> URL: https://issues.apache.org/jira/browse/HDFS-9272
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: James Clampffer
>Assignee: James Clampffer
>Priority: Minor
>
> Implement the basic functionality of "cat" and have it build as a separate 
> executable.
> Two reasons for this:
> We don't have any real integration tests at the moment so something simple to 
> verify that the library actually works against a real cluster is useful.
> Eventually I'll make more utilities like stat, mkdir etc.  Once there are 
> enough of them it will be simple to make a C++ implementation of the hadoop 
> fs command line interface that doesn't take the latency hit of spinning up a 
> JVM.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9266) hadoop-hdfs - Avoid unsafe split and append on fields that might be IPv6 literals

2015-10-20 Thread Nemanja Matkovic (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9266?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nemanja Matkovic updated HDFS-9266:
---
Attachment: HDFS-9266-HADOOP-11890.1.patch

HDFS part of patch from HADOOP-12122.

> hadoop-hdfs - Avoid unsafe split and append on fields that might be IPv6 
> literals
> -
>
> Key: HDFS-9266
> URL: https://issues.apache.org/jira/browse/HDFS-9266
> Project: Hadoop HDFS
>  Issue Type: Task
>Reporter: Nemanja Matkovic
>Assignee: Nemanja Matkovic
>  Labels: ipv6
> Attachments: HDFS-9266-HADOOP-11890.1.patch
>
>   Original Estimate: 48h
>  Remaining Estimate: 48h
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-3059) ssl-server.xml causes NullPointer

2015-10-20 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3059?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14965791#comment-14965791
 ] 

Hudson commented on HDFS-3059:
--

FAILURE: Integrated in Hadoop-trunk-Commit #8672 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/8672/])
HDFS-3059. ssl-server.xml causes NullPointer. Contributed by Xiao Chen. (wang: 
rev 6c8b6f3646b31a3e028704bc7fd78bf319f89f0a)
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestHDFSServerPorts.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSUtil.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/SecondaryNameNode.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java


> ssl-server.xml causes NullPointer
> -
>
> Key: HDFS-3059
> URL: https://issues.apache.org/jira/browse/HDFS-3059
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode, security
>Affects Versions: 2.7.1
> Environment: in core-site.xml:
> {code:xml}
>   <property>
>     <name>hadoop.security.authentication</name>
>     <value>kerberos</value>
>   </property>
>   <property>
>     <name>hadoop.security.authorization</name>
>     <value>true</value>
>   </property>
> {code}
> in hdfs-site.xml:
> {code:xml}
>   <property>
>     <name>dfs.https.server.keystore.resource</name>
>     <value>/etc/hadoop/conf/ssl-server.xml</value>
>   </property>
>   <property>
>     <name>dfs.https.enable</name>
>     <value>true</value>
>   </property>
>   ...other security props
> {code}
>Reporter: Evert Lammerts
>Assignee: Xiao Chen
>Priority: Minor
>  Labels: BB2015-05-TBR
> Fix For: 3.0.0
>
> Attachments: HDFS-3059.02.patch, HDFS-3059.03.patch, 
> HDFS-3059.04.patch, HDFS-3059.05.patch, HDFS-3059.06.patch, 
> HDFS-3059.07.patch, HDFS-3059.08.patch, HDFS-3059.patch, HDFS-3059.patch.2
>
>
> If ssl is enabled (dfs.https.enable) but ssl-server.xml is not available, a 
> DN will crash during startup while setting up an SSL socket with a 
> NullPointerException:
> {noformat}12/03/07 17:08:36 DEBUG security.Krb5AndCertsSslSocketConnector: 
> useKerb = false, useCerts = true
> jetty.ssl.password : jetty.ssl.keypassword : 12/03/07 17:08:36 INFO 
> mortbay.log: jetty-6.1.26.cloudera.1
> 12/03/07 17:08:36 INFO mortbay.log: Started 
> selectchannelconnec...@p-worker35.alley.sara.nl:1006
> 12/03/07 17:08:36 DEBUG security.Krb5AndCertsSslSocketConnector: Creating new 
> KrbServerSocket for: 0.0.0.0
> 12/03/07 17:08:36 WARN mortbay.log: java.lang.NullPointerException
> 12/03/07 17:08:36 WARN mortbay.log: failed 
> Krb5AndCertsSslSocketConnector@0.0.0.0:50475: java.io.IOException: 
> !JsseListener: java.lang.NullPointerException
> 12/03/07 17:08:36 WARN mortbay.log: failed Server@604788d5: 
> java.io.IOException: !JsseListener: java.lang.NullPointerException
> 12/03/07 17:08:36 INFO mortbay.log: Stopped 
> Krb5AndCertsSslSocketConnector@0.0.0.0:50475
> 12/03/07 17:08:36 INFO mortbay.log: Stopped 
> selectchannelconnec...@p-worker35.alley.sara.nl:1006
> 12/03/07 17:08:37 INFO datanode.DataNode: Waiting for threadgroup to exit, 
> active threads is 0{noformat}
> The same happens if I set an absolute path to an existing 
> dfs.https.server.keystore.resource - in this case the file cannot be found 
> but not even a WARN is given.
> Since in dfs.https.server.keystore.resource we know we need to have 4 
> properties specified (ssl.server.truststore.location, 
> ssl.server.keystore.location, ssl.server.keystore.password, and 
> ssl.server.keystore.keypassword) we should check if they are set and throw an 
> IOException if they are not.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9270) TestShortCircuitLocalRead should not leave socket after unit test

2015-10-20 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9270?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14965785#comment-14965785
 ] 

Hudson commented on HDFS-9270:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk #2506 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/2506/])
HDFS-9270. TestShortCircuitLocalRead should not leave socket after unit 
(cmccabe: rev 6381ddc096699d680233db3b9efff9321528eedc)
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/tracing/TestTracingShortCircuitLocalRead.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/shortcircuit/TestShortCircuitLocalRead.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


> TestShortCircuitLocalRead should not leave socket after unit test
> -
>
> Key: HDFS-9270
> URL: https://issues.apache.org/jira/browse/HDFS-9270
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Affects Versions: 2.7.1
>Reporter: Masatake Iwasaki
>Assignee: Masatake Iwasaki
>Priority: Minor
> Fix For: 2.8.0
>
> Attachments: HDFS-9270.001.patch
>
>
> Unix domain sockets created by TestShortCircuitLocalRead and 
> TestTracingShortCircuitLocalRead are not removed before finishing the tests.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9272) Implement a unix-like cat utility

2015-10-20 Thread James Clampffer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9272?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14965784#comment-14965784
 ] 

James Clampffer commented on HDFS-9272:
---

Hi Allen,

Thanks for the input.  Do you have a JIRA or reference that I can take a look 
at to better understand the compatibility problems?  Is the issue that if/when 
the RPC protocol changes older clients wouldn't be aware?

I'm going to put up a patch for now just because we don't have anything capable 
of testing libhdfs++ against a real cluster at the moment; Haohui's working on 
fixing that.  A couple people are ramping up on this project so in the short 
term I think being able to say cat ran fine under valgrind or asan after 
changes offers some protection against regressions.

-James

> Implement a unix-like cat utility
> -
>
> Key: HDFS-9272
> URL: https://issues.apache.org/jira/browse/HDFS-9272
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: James Clampffer
>Assignee: James Clampffer
>Priority: Minor
>
> Implement the basic functionality of "cat" and have it build as a separate 
> executable.
> Two reasons for this:
> We don't have any real integration tests at the moment so something simple to 
> verify that the library actually works against a real cluster is useful.
> Eventually I'll make more utilities like stat, mkdir etc.  Once there are 
> enough of them it will be simple to make a C++ implementation of the hadoop 
> fs command line interface that doesn't take the latency hit of spinning up a 
> JVM.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9251) Refactor TestWriteToReplica and TestFsDatasetImpl to avoid explicitly creating Files in tests code.

2015-10-20 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9251?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14965773#comment-14965773
 ] 

Hudson commented on HDFS-9251:
--

FAILURE: Integrated in Hadoop-Hdfs-trunk-Java8 #518 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/518/])
HDFS-9251. Refactor TestWriteToReplica and TestFsDatasetImpl to avoid (lei: rev 
71e533a153cbe547c99d2bc18c4cd8b7da9b00b7)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/FsDatasetTestUtils.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsDatasetImplTestUtils.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/TestWriteToReplica.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/TestFsDatasetImpl.java


> Refactor TestWriteToReplica and TestFsDatasetImpl to avoid explicitly 
> creating Files in tests code.
> ---
>
> Key: HDFS-9251
> URL: https://issues.apache.org/jira/browse/HDFS-9251
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: HDFS
>Affects Versions: 2.7.1
>Reporter: Lei (Eddy) Xu
>Assignee: Lei (Eddy) Xu
> Fix For: 3.0.0, 2.8.0
>
> Attachments: HDFS-9251.00.patch, HDFS-9251.01.patch, 
> HDFS-9251.02.patch
>
>
> In {{TestWriteToReplica}} and {{TestFsDatasetImpl}}, tests directly create 
> block and metadata files:
> {code}
> replicaInfo.getBlockFile().createNewFile();
> replicaInfo.getMetaFile().createNewFile();
> {code}
> It leaks the implementation details of {{FsDatasetImpl}}. This JIRA proposes 
> to use {{FsDatasetImplTestUtils}} (HDFS-9188) to create replicas. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9268) fuse_dfs chown crashes when uid is passed as -1

2015-10-20 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9268?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14965724#comment-14965724
 ] 

Hadoop QA commented on HDFS-9268:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | pre-patch |   5m 42s | Pre-patch trunk compilation is 
healthy. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:red}-1{color} | tests included |   0m  0s | The patch doesn't appear 
to include any new or modified tests.  Please justify why no new tests are 
needed for this patch. Also please list what manual steps were performed to 
verify this patch. |
| {color:green}+1{color} | javac |   8m 54s | There were no new javac warning 
messages. |
| {color:green}+1{color} | release audit |   0m 27s | The applied patch does 
not increase the total number of release audit warnings. |
| {color:green}+1{color} | whitespace |   0m  0s | The patch has no lines that 
end in whitespace. |
| {color:green}+1{color} | install |   1m 49s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 35s | The patch built with 
eclipse:eclipse. |
| {color:green}+1{color} | native |   1m 13s | Pre-build of native portion |
| {color:green}+1{color} | hdfs tests |   0m 51s | Tests passed in 
hadoop-hdfs-native-client. |
| | |  19m 35s | |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12767653/HDFS-9268.002.patch |
| Optional Tests | javac unit |
| git revision | trunk / 01b103f |
| hadoop-hdfs-native-client test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/13090/artifact/patchprocess/testrun_hadoop-hdfs-native-client.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/13090/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf905.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/13090/console |


This message was automatically generated.

> fuse_dfs chown crashes when uid is passed as -1
> ---
>
> Key: HDFS-9268
> URL: https://issues.apache.org/jira/browse/HDFS-9268
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Wei-Chiu Chuang
>Assignee: Colin Patrick McCabe
>Priority: Minor
> Attachments: HDFS-9268.001.patch, HDFS-9268.002.patch
>
>
> JVM crashes when users attempt to use vi to update a file on a fuse file 
> system with insufficient permission. (I use CDH's hadoop-fuse-dfs wrapper 
> script to generate the bug, but the same bug is reproducible in trunk.)
> The root cause is a segfault in a dfs-fuse method.
> To reproduce it, do as follows:
> mkdir /mnt/fuse
> chmod 777 /mnt/fuse
> ulimit -c unlimited  # to enable coredump
> hadoop-fuse-dfs -odebug hdfs://localhost:9000/fuse /mnt/fuse
> touch /mnt/fuse/y
> chmod 600 /mnt/fuse/y
> vim /mnt/fuse/y
> (in vim, :w to save the file)
> #
> # A fatal error has been detected by the Java Runtime Environment:
> #
> #  SIGSEGV (0xb) at pc=0x003b82f27ad6, pid=26606, tid=140079005689600
> #
> # JRE version: Java(TM) SE Runtime Environment (7.0_79-b15) (build 
> 1.7.0_79-b15)
> # Java VM: Java HotSpot(TM) 64-Bit Server VM (24.79-b02 mixed mode 
> linux-amd64 compressed oops)
> # Problematic frame:
> # C  [libc.so.6+0x127ad6]  __tls_get_addr@@GLIBC_2.3+0x127ad6
> #
> # Core dump written. Default location: /home/weichiu/core or core.26606
> #
> # An error report file with more information is saved as:
> # /home/weichiu/hs_err_pid26606.log
> #
> # If you would like to submit a bug report, please visit:
> #   http://bugreport.java.com/bugreport/crash.jsp
> # The crash happened outside the Java Virtual Machine in native code.
> # See problematic frame for where to report the bug.
> #
> /usr/bin/hadoop-fuse-dfs: line 29: 26606 Aborted (core 
> dumped) env CLASSPATH="${CLASSPATH}" ${HADOOP_HOME}/bin/fuse_dfs $@
> ===
> The coredump shows the segfault comes from 
> (gdb) bt
> #0  0x003b82e328e5 in raise () from /lib64/libc.so.6
> #1  0x003b82e340c5 in abort () from /lib64/libc.so.6
> #2  0x7f66fc924d75 in os::abort(bool) () from 
> /etc/alternatives/jre/jre/lib/amd64/server/libjvm.so
> #3  0x7f66fcaa76d7 in VMError::report_and_die() () from 
> /etc/alternatives/jre/jre/lib/amd64/server/libjvm.so
> #4  0x7f66fc929c8f in JVM_handle_linux_signal () from 
> /etc/alternatives/jre/jre/lib/amd64/server/libjvm.so
> #5  
> #6  0x003b82f27ad6 in __strcmp_sse42 () from /lib64/libc.so.6
> #7  0x004039a0 in hdfsConnTree_RB_FIND ()
> #8  0x00403e8f in fuseConnect ()
> #9  0x004046db in dfs_chown ()
> #10 0x7f66fcf8f6d2 in ?? () from /lib64/libfuse.so.2
> #11 0x7f66fcf940d1 in ?? () from /lib64/libfuse.so.2
> #12 0x7f66fcf910ef in ?? () from /lib64/libfuse.so.2
> #13 0x003b83207851 in start_thread () from /lib64/libpthread.so.0
> #14 0x003b82ee894d in clone () from /lib64/libc.so.6

[jira] [Updated] (HDFS-3059) ssl-server.xml causes NullPointer

2015-10-20 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-3059?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HDFS-3059:
--
Target Version/s: 2.8.0  (was: 3.0.0)
   Fix Version/s: 3.0.0

Committed to trunk; the backport to branch-2 wasn't clean. Xiao, mind preparing 
a branch-2 patch as well?

> ssl-server.xml causes NullPointer
> -
>
> Key: HDFS-3059
> URL: https://issues.apache.org/jira/browse/HDFS-3059
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode, security
>Affects Versions: 2.7.1
> Environment: in core-site.xml:
> {code:xml}
>   
> hadoop.security.authentication
> kerberos
>   
>   
> hadoop.security.authorization
> true
>   
> {code}
> in hdfs-site.xml:
> {code:xml}
>   
> dfs.https.server.keystore.resource
> /etc/hadoop/conf/ssl-server.xml
>   
>   
> dfs.https.enable
> true
>   
>   
> ...other security props
>   
> {code}
>Reporter: Evert Lammerts
>Assignee: Xiao Chen
>Priority: Minor
>  Labels: BB2015-05-TBR
> Fix For: 3.0.0
>
> Attachments: HDFS-3059.02.patch, HDFS-3059.03.patch, 
> HDFS-3059.04.patch, HDFS-3059.05.patch, HDFS-3059.06.patch, 
> HDFS-3059.07.patch, HDFS-3059.08.patch, HDFS-3059.patch, HDFS-3059.patch.2
>
>
> If ssl is enabled (dfs.https.enable) but ssl-server.xml is not available, a 
> DN will crash during startup while setting up an SSL socket with a 
> NullPointerException:
> {noformat}12/03/07 17:08:36 DEBUG security.Krb5AndCertsSslSocketConnector: 
> useKerb = false, useCerts = true
> jetty.ssl.password : jetty.ssl.keypassword : 12/03/07 17:08:36 INFO 
> mortbay.log: jetty-6.1.26.cloudera.1
> 12/03/07 17:08:36 INFO mortbay.log: Started 
> selectchannelconnec...@p-worker35.alley.sara.nl:1006
> 12/03/07 17:08:36 DEBUG security.Krb5AndCertsSslSocketConnector: Creating new 
> KrbServerSocket for: 0.0.0.0
> 12/03/07 17:08:36 WARN mortbay.log: java.lang.NullPointerException
> 12/03/07 17:08:36 WARN mortbay.log: failed 
> Krb5AndCertsSslSocketConnector@0.0.0.0:50475: java.io.IOException: 
> !JsseListener: java.lang.NullPointerException
> 12/03/07 17:08:36 WARN mortbay.log: failed Server@604788d5: 
> java.io.IOException: !JsseListener: java.lang.NullPointerException
> 12/03/07 17:08:36 INFO mortbay.log: Stopped 
> Krb5AndCertsSslSocketConnector@0.0.0.0:50475
> 12/03/07 17:08:36 INFO mortbay.log: Stopped 
> selectchannelconnec...@p-worker35.alley.sara.nl:1006
> 12/03/07 17:08:37 INFO datanode.DataNode: Waiting for threadgroup to exit, 
> active threads is 0{noformat}
> The same happens if I set dfs.https.server.keystore.resource to an absolute 
> path to an existing file - in this case the file cannot be found, but not even 
> a WARN is given.
> Since we know the resource named by dfs.https.server.keystore.resource must 
> specify four properties (ssl.server.truststore.location, 
> ssl.server.keystore.location, ssl.server.keystore.password, and 
> ssl.server.keystore.keypassword), we should check that they are set and throw 
> an IOException if they are not.
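> A minimal sketch of the proposed check (a fragment; imports Configuration and 
> IOException, and the ambient {{conf}} object is assumed):
> {code}
> // Load the keystore resource without defaults and fail fast on gaps.
> Configuration sslConf = new Configuration(false);
> sslConf.addResource(conf.get("dfs.https.server.keystore.resource"));
> for (String key : new String[] {
>     "ssl.server.truststore.location",
>     "ssl.server.keystore.location",
>     "ssl.server.keystore.password",
>     "ssl.server.keystore.keypassword"}) {
>   if (sslConf.get(key) == null) {
>     throw new IOException("Missing required SSL property: " + key);
>   }
> }
> {code}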



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-3059) ssl-server.xml causes NullPointer

2015-10-20 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3059?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14965714#comment-14965714
 ] 

Andrew Wang commented on HDFS-3059:
---

LGTM, +1, will commit shortly. I ran the failed test locally and it passed.

> ssl-server.xml causes NullPointer
> -
>
> Key: HDFS-3059
> URL: https://issues.apache.org/jira/browse/HDFS-3059
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode, security
>Affects Versions: 2.7.1
> Environment: in core-site.xml:
> {code:xml}
>   
> hadoop.security.authentication
> kerberos
>   
>   
> hadoop.security.authorization
> true
>   
> {code}
> in hdfs-site.xml:
> {code:xml}
>   
> dfs.https.server.keystore.resource
> /etc/hadoop/conf/ssl-server.xml
>   
>   
> dfs.https.enable
> true
>   
>   
> ...other security props
>   
> {code}
>Reporter: Evert Lammerts
>Assignee: Xiao Chen
>Priority: Minor
>  Labels: BB2015-05-TBR
> Attachments: HDFS-3059.02.patch, HDFS-3059.03.patch, 
> HDFS-3059.04.patch, HDFS-3059.05.patch, HDFS-3059.06.patch, 
> HDFS-3059.07.patch, HDFS-3059.08.patch, HDFS-3059.patch, HDFS-3059.patch.2
>
>
> If ssl is enabled (dfs.https.enable) but ssl-server.xml is not available, a 
> DN will crash during startup while setting up an SSL socket with a 
> NullPointerException:
> {noformat}12/03/07 17:08:36 DEBUG security.Krb5AndCertsSslSocketConnector: 
> useKerb = false, useCerts = true
> jetty.ssl.password : jetty.ssl.keypassword : 12/03/07 17:08:36 INFO 
> mortbay.log: jetty-6.1.26.cloudera.1
> 12/03/07 17:08:36 INFO mortbay.log: Started 
> selectchannelconnec...@p-worker35.alley.sara.nl:1006
> 12/03/07 17:08:36 DEBUG security.Krb5AndCertsSslSocketConnector: Creating new 
> KrbServerSocket for: 0.0.0.0
> 12/03/07 17:08:36 WARN mortbay.log: java.lang.NullPointerException
> 12/03/07 17:08:36 WARN mortbay.log: failed 
> Krb5AndCertsSslSocketConnector@0.0.0.0:50475: java.io.IOException: 
> !JsseListener: java.lang.NullPointerException
> 12/03/07 17:08:36 WARN mortbay.log: failed Server@604788d5: 
> java.io.IOException: !JsseListener: java.lang.NullPointerException
> 12/03/07 17:08:36 INFO mortbay.log: Stopped 
> Krb5AndCertsSslSocketConnector@0.0.0.0:50475
> 12/03/07 17:08:36 INFO mortbay.log: Stopped 
> selectchannelconnec...@p-worker35.alley.sara.nl:1006
> 12/03/07 17:08:37 INFO datanode.DataNode: Waiting for threadgroup to exit, 
> active threads is 0{noformat}
> The same happens if I set dfs.https.server.keystore.resource to an absolute 
> path to an existing file - in this case the file cannot be found, but not even 
> a WARN is given.
> Since we know the resource named by dfs.https.server.keystore.resource must 
> specify four properties (ssl.server.truststore.location, 
> ssl.server.keystore.location, ssl.server.keystore.password, and 
> ssl.server.keystore.keypassword), we should check that they are set and throw 
> an IOException if they are not.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8647) Abstract BlockManager's rack policy into BlockPlacementPolicy

2015-10-20 Thread Ming Ma (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8647?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ming Ma updated HDFS-8647:
--
Status: Open  (was: Patch Available)

> Abstract BlockManager's rack policy into BlockPlacementPolicy
> -
>
> Key: HDFS-8647
> URL: https://issues.apache.org/jira/browse/HDFS-8647
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Ming Ma
>Assignee: Brahma Reddy Battula
> Attachments: HDFS-8647-001.patch, HDFS-8647-002.patch, 
> HDFS-8647-003.patch, HDFS-8647-004.patch, HDFS-8647-004.patch, 
> HDFS-8647-005.patch, HDFS-8647-006.patch, HDFS-8647-007.patch, 
> HDFS-8647-008.patch, HDFS-8647-009.patch
>
>
> Sometimes we want the namenode to use an alternative block placement policy, 
> such as the upgrade domains in HDFS-7541.
> BlockManager has built-in assumptions about rack policy in functions such as 
> useDelHint and blockHasEnoughRacks. That means whenever we add a new block 
> placement policy, we need to modify BlockManager to account for it. Ideally 
> BlockManager should ask the BlockPlacementPolicy object instead, which would 
> allow us to provide a new BlockPlacementPolicy without changing BlockManager.
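> A hedged sketch of the abstraction (method names mirror the functions cited 
> above; the real signatures and parameter types may differ):
> {code}
> // BlockManager would delegate these decisions to the policy object
> // instead of hard-coding rack logic. Parameter types are simplified.
> interface PlacementPolicyHooks {
>   // whether the DN-suggested deletion hint may be honored
>   boolean useDelHint(String delHintStorageId);
>   // whether the current replica spread satisfies the policy
>   boolean hasEnoughSpread(int numRacks, int replication);
> }
> {code}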



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8647) Abstract BlockManager's rack policy into BlockPlacementPolicy

2015-10-20 Thread Ming Ma (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8647?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ming Ma updated HDFS-8647:
--
Status: Patch Available  (was: Open)

> Abstract BlockManager's rack policy into BlockPlacementPolicy
> -
>
> Key: HDFS-8647
> URL: https://issues.apache.org/jira/browse/HDFS-8647
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Ming Ma
>Assignee: Brahma Reddy Battula
> Attachments: HDFS-8647-001.patch, HDFS-8647-002.patch, 
> HDFS-8647-003.patch, HDFS-8647-004.patch, HDFS-8647-004.patch, 
> HDFS-8647-005.patch, HDFS-8647-006.patch, HDFS-8647-007.patch, 
> HDFS-8647-008.patch, HDFS-8647-009.patch
>
>
> Sometimes we want the namenode to use an alternative block placement policy, 
> such as the upgrade domains in HDFS-7541.
> BlockManager has built-in assumptions about rack policy in functions such as 
> useDelHint and blockHasEnoughRacks. That means whenever we add a new block 
> placement policy, we need to modify BlockManager to account for it. Ideally 
> BlockManager should ask the BlockPlacementPolicy object instead, which would 
> allow us to provide a new BlockPlacementPolicy without changing BlockManager.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9270) TestShortCircuitLocalRead should not leave socket after unit test

2015-10-20 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9270?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14965683#comment-14965683
 ] 

Hudson commented on HDFS-9270:
--

SUCCESS: Integrated in Hadoop-Yarn-trunk-Java8 #573 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk-Java8/573/])
HDFS-9270. TestShortCircuitLocalRead should not leave socket after unit 
(cmccabe: rev 6381ddc096699d680233db3b9efff9321528eedc)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/tracing/TestTracingShortCircuitLocalRead.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/shortcircuit/TestShortCircuitLocalRead.java


> TestShortCircuitLocalRead should not leave socket after unit test
> -
>
> Key: HDFS-9270
> URL: https://issues.apache.org/jira/browse/HDFS-9270
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Affects Versions: 2.7.1
>Reporter: Masatake Iwasaki
>Assignee: Masatake Iwasaki
>Priority: Minor
> Fix For: 2.8.0
>
> Attachments: HDFS-9270.001.patch
>
>
> Unix domain sockets created by TestShortCircuitLocalRead and 
> TestTracingShortCircuitLocalRead are not removed before finishing the tests.
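> The cleanup is roughly the following ({{sockDir}} and the use of 
> {{TemporarySocketDirectory}} are assumptions about what the patch does):
> {code}
> // Delete the temporary domain-socket directory when the class finishes,
> // so no socket files are left behind by the test run.
> @AfterClass
> public static void shutdown() throws IOException {
>   sockDir.close();  // TemporarySocketDirectory removes its socket paths
> }
> {code}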



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9207) Move the implementation to the hdfs-native-client module

2015-10-20 Thread Bob Hansen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9207?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bob Hansen updated HDFS-9207:
-
Attachment: HDFS-9207.HDFS-8707.empty.patch

> Move the implementation to the hdfs-native-client module
> 
>
> Key: HDFS-9207
> URL: https://issues.apache.org/jira/browse/HDFS-9207
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: Haohui Mai
>Assignee: Haohui Mai
> Attachments: HDFS-9207.000.patch, HDFS-9207.HDFS-8707.empty.patch
>
>
> The implementation of libhdfspp should be moved to the new hdfs-native-client 
> module as HDFS-9170 has landed in trunk and branch-2.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9252) Change TestFileTruncate to use FsDatasetTestUtils to get block file size and genstamp.

2015-10-20 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9252?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14965643#comment-14965643
 ] 

Hadoop QA commented on HDFS-9252:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:red}-1{color} | pre-patch |  18m  8s | Pre-patch trunk has 1 extant 
Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:green}+1{color} | tests included |   0m  0s | The patch appears to 
include 3 new or modified test files. |
| {color:green}+1{color} | javac |   8m  0s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |  10m 20s | There were no new javadoc 
warning messages. |
| {color:green}+1{color} | release audit |   0m 24s | The applied patch does 
not increase the total number of release audit warnings. |
| {color:green}+1{color} | checkstyle |   1m 24s | There were no new checkstyle 
issues. |
| {color:green}+1{color} | whitespace |   0m  0s | The patch has no lines that 
end in whitespace. |
| {color:green}+1{color} | install |   1m 30s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 33s | The patch built with 
eclipse:eclipse. |
| {color:green}+1{color} | findbugs |   2m 30s | The patch does not introduce 
any new Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | native |   3m 14s | Pre-build of native portion |
| {color:red}-1{color} | hdfs tests |  64m 54s | Tests failed in hadoop-hdfs. |
| | | 111m  1s | |
\\
\\
|| Reason || Tests ||
| Failed unit tests | hadoop.hdfs.TestReplaceDatanodeOnFailure |
|   | hadoop.hdfs.server.namenode.TestFileTruncate |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12767632/HDFS-9252.02.patch |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / 6381ddc |
| Pre-patch Findbugs warnings | 
https://builds.apache.org/job/PreCommit-HDFS-Build/13087/artifact/patchprocess/trunkFindbugsWarningshadoop-hdfs.html
 |
| hadoop-hdfs test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/13087/artifact/patchprocess/testrun_hadoop-hdfs.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/13087/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf905.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/13087/console |


This message was automatically generated.

> Change TestFileTruncate to use FsDatasetTestUtils to get block file size and 
> genstamp.
> --
>
> Key: HDFS-9252
> URL: https://issues.apache.org/jira/browse/HDFS-9252
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 2.7.1
>Reporter: Lei (Eddy) Xu
>Assignee: Lei (Eddy) Xu
> Attachments: HDFS-9252.00.patch, HDFS-9252.01.patch, 
> HDFS-9252.02.patch
>
>
> {{TestFileTruncate}} verifies block size and genstamp by directly accessing 
> the local filesystem, e.g.:
> {code}
> assertTrue(cluster.getBlockMetadataFile(dn0,
>newBlock.getBlock()).getName().endsWith(
>newBlock.getBlock().getGenerationStamp() + ".meta"));
> {code}
> Let's abstract the fsdataset-specific logic behind FsDatasetTestUtils.
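> A hypothetical sketch of the same assertion behind the abstraction (the 
> accessor and method names are assumptions, not a committed API):
> {code}
> // Ask the dataset test utils for the stored genstamp instead of parsing
> // the meta file name off the local filesystem.
> FsDatasetTestUtils utils = cluster.getFsDatasetTestUtils(dn0);  // assumed accessor
> assertEquals(newBlock.getBlock().getGenerationStamp(),
>     utils.getStoredGenerationStamp(newBlock.getBlock()));       // assumed method
> {code}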



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9117) Config file reader / options classes for libhdfs++

2015-10-20 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9117?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14965630#comment-14965630
 ] 

Hadoop QA commented on HDFS-9117:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | pre-patch |  17m 34s | Pre-patch HDFS-8707 compilation 
is healthy. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:red}-1{color} | tests included |   0m  0s | The patch doesn't appear 
to include any new or modified tests.  Please justify why no new tests are 
needed for this patch. Also please list what manual steps were performed to 
verify this patch. |
| {color:red}-1{color} | javac |   1m 32s | The patch appears to cause the 
build to fail. |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12767644/HDFS-9117.HDFS-8707.003.patch
 |
| Optional Tests | javac unit javadoc |
| git revision | HDFS-8707 / ea310d7 |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/13088/console |


This message was automatically generated.

> Config file reader / options classes for libhdfs++
> --
>
> Key: HDFS-9117
> URL: https://issues.apache.org/jira/browse/HDFS-9117
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Affects Versions: HDFS-8707
>Reporter: Bob Hansen
>Assignee: Bob Hansen
> Attachments: HDFS-9117.HDFS-8707.001.patch, 
> HDFS-9117.HDFS-8707.002.patch, HDFS-9117.HDFS-8707.003.patch
>
>
> For environmental compatibility with HDFS installations, libhdfs++ should be 
> able to read its configuration from Hadoop XML files and behave in line with 
> the Java implementation.
> Most notably, machine names and ports should be readable from Hadoop XML 
> configuration files.
> Similarly, an internal Options architecture for libhdfs++ should be developed 
> to efficiently transport the configuration information within the system.
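> For reference, the Java behavior to match is roughly this fragment (the 
> explicit site-file path is illustrative; imports Configuration and Path):
> {code}
> // Hadoop XML resources layer onto a Configuration; later resources win.
> Configuration conf = new Configuration();      // loads core-default/core-site
> conf.addResource(new Path("/etc/hadoop/conf/hdfs-site.xml"));
> String fsUri = conf.get("fs.defaultFS");       // machine name + port
> {code}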



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9268) fuse_dfs chown crashes when uid is passed as -1

2015-10-20 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9268?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14965632#comment-14965632
 ] 

Colin Patrick McCabe commented on HDFS-9268:


I posted a patch which fixes the root of the problem, I think.  We should be 
using {{fuseConnectAsThreadUid}} instead of {{fuseConnect}}.

> fuse_dfs chown crashes when uid is passed as -1
> ---
>
> Key: HDFS-9268
> URL: https://issues.apache.org/jira/browse/HDFS-9268
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Wei-Chiu Chuang
>Assignee: Colin Patrick McCabe
>Priority: Minor
> Attachments: HDFS-9268.001.patch, HDFS-9268.002.patch
>
>
> JVM crashes when users attempt to use vi to update a file on a fuse file 
> system with insufficient permission. (I use CDH's hadoop-fuse-dfs wrapper 
> script to generate the bug, but the same bug is reproducible in trunk.)
> The root cause is a segfault in a dfs-fuse method.
> To reproduce it, do as follows:
> mkdir /mnt/fuse
> chmod 777 /mnt/fuse
> ulimit -c unlimited  # to enable coredump
> hadoop-fuse-dfs -odebug hdfs://localhost:9000/fuse /mnt/fuse
> touch /mnt/fuse/y
> chmod 600 /mnt/fuse/y
> vim /mnt/fuse/y
> (in vim, :w to save the file)
> #
> # A fatal error has been detected by the Java Runtime Environment:
> #
> #  SIGSEGV (0xb) at pc=0x003b82f27ad6, pid=26606, tid=140079005689600
> #
> # JRE version: Java(TM) SE Runtime Environment (7.0_79-b15) (build 
> 1.7.0_79-b15)
> # Java VM: Java HotSpot(TM) 64-Bit Server VM (24.79-b02 mixed mode 
> linux-amd64 compressed oops)
> # Problematic frame:
> # C  [libc.so.6+0x127ad6]  __tls_get_addr@@GLIBC_2.3+0x127ad6
> #
> # Core dump written. Default location: /home/weichiu/core or core.26606
> #
> # An error report file with more information is saved as:
> # /home/weichiu/hs_err_pid26606.log
> #
> # If you would like to submit a bug report, please visit:
> #   http://bugreport.java.com/bugreport/crash.jsp
> # The crash happened outside the Java Virtual Machine in native code.
> # See problematic frame for where to report the bug.
> #
> /usr/bin/hadoop-fuse-dfs: line 29: 26606 Aborted (core 
> dumped) env CLASSPATH="${CLASSPATH}" ${HADOOP_HOME}/bin/fuse_dfs $@
> ===
> The coredump shows the segfault comes from 
> (gdb) bt
> #0  0x003b82e328e5 in raise () from /lib64/libc.so.6
> #1  0x003b82e340c5 in abort () from /lib64/libc.so.6
> #2  0x7f66fc924d75 in os::abort(bool) () from 
> /etc/alternatives/jre/jre/lib/amd64/server/libjvm.so
> #3  0x7f66fcaa76d7 in VMError::report_and_die() () from 
> /etc/alternatives/jre/jre/lib/amd64/server/libjvm.so
> #4  0x7f66fc929c8f in JVM_handle_linux_signal () from 
> /etc/alternatives/jre/jre/lib/amd64/server/libjvm.so
> #5  
> #6  0x003b82f27ad6 in __strcmp_sse42 () from /lib64/libc.so.6
> #7  0x004039a0 in hdfsConnTree_RB_FIND ()
> #8  0x00403e8f in fuseConnect ()
> #9  0x004046db in dfs_chown ()
> #10 0x7f66fcf8f6d2 in ?? () from /lib64/libfuse.so.2
> #11 0x7f66fcf940d1 in ?? () from /lib64/libfuse.so.2
> #12 0x7f66fcf910ef in ?? () from /lib64/libfuse.so.2
> #13 0x003b83207851 in start_thread () from /lib64/libpthread.so.0
> #14 0x003b82ee894d in clone () from /lib64/libc.so.6



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9268) fuse_dfs chown crashes when uid is passed as -1

2015-10-20 Thread Colin Patrick McCabe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9268?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin Patrick McCabe updated HDFS-9268:
---
Assignee: Colin Patrick McCabe  (was: Wei-Chiu Chuang)
Target Version/s: 2.8.0
  Status: Patch Available  (was: Open)

> fuse_dfs chown crashes when uid is passed as -1
> ---
>
> Key: HDFS-9268
> URL: https://issues.apache.org/jira/browse/HDFS-9268
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Wei-Chiu Chuang
>Assignee: Colin Patrick McCabe
>Priority: Minor
> Attachments: HDFS-9268.001.patch, HDFS-9268.002.patch
>
>
> JVM crashes when users attempt to use vi to update a file on a fuse file 
> system with insufficient permission. (I use CDH's hadoop-fuse-dfs wrapper 
> script to generate the bug, but the same bug is reproducible in trunk.)
> The root cause is a segfault in a dfs-fuse method.
> To reproduce it, do as follows:
> mkdir /mnt/fuse
> chmod 777 /mnt/fuse
> ulimit -c unlimited  # to enable coredump
> hadoop-fuse-dfs -odebug hdfs://localhost:9000/fuse /mnt/fuse
> touch /mnt/fuse/y
> chmod 600 /mnt/fuse/y
> vim /mnt/fuse/y
> (in vim, :w to save the file)
> #
> # A fatal error has been detected by the Java Runtime Environment:
> #
> #  SIGSEGV (0xb) at pc=0x003b82f27ad6, pid=26606, tid=140079005689600
> #
> # JRE version: Java(TM) SE Runtime Environment (7.0_79-b15) (build 
> 1.7.0_79-b15)
> # Java VM: Java HotSpot(TM) 64-Bit Server VM (24.79-b02 mixed mode 
> linux-amd64 compressed oops)
> # Problematic frame:
> # C  [libc.so.6+0x127ad6]  __tls_get_addr@@GLIBC_2.3+0x127ad6
> #
> # Core dump written. Default location: /home/weichiu/core or core.26606
> #
> # An error report file with more information is saved as:
> # /home/weichiu/hs_err_pid26606.log
> #
> # If you would like to submit a bug report, please visit:
> #   http://bugreport.java.com/bugreport/crash.jsp
> # The crash happened outside the Java Virtual Machine in native code.
> # See problematic frame for where to report the bug.
> #
> /usr/bin/hadoop-fuse-dfs: line 29: 26606 Aborted (core 
> dumped) env CLASSPATH="${CLASSPATH}" ${HADOOP_HOME}/bin/fuse_dfs $@
> ===
> The coredump shows the segfault comes from 
> (gdb) bt
> #0  0x003b82e328e5 in raise () from /lib64/libc.so.6
> #1  0x003b82e340c5 in abort () from /lib64/libc.so.6
> #2  0x7f66fc924d75 in os::abort(bool) () from 
> /etc/alternatives/jre/jre/lib/amd64/server/libjvm.so
> #3  0x7f66fcaa76d7 in VMError::report_and_die() () from 
> /etc/alternatives/jre/jre/lib/amd64/server/libjvm.so
> #4  0x7f66fc929c8f in JVM_handle_linux_signal () from 
> /etc/alternatives/jre/jre/lib/amd64/server/libjvm.so
> #5  
> #6  0x003b82f27ad6 in __strcmp_sse42 () from /lib64/libc.so.6
> #7  0x004039a0 in hdfsConnTree_RB_FIND ()
> #8  0x00403e8f in fuseConnect ()
> #9  0x004046db in dfs_chown ()
> #10 0x7f66fcf8f6d2 in ?? () from /lib64/libfuse.so.2
> #11 0x7f66fcf940d1 in ?? () from /lib64/libfuse.so.2
> #12 0x7f66fcf910ef in ?? () from /lib64/libfuse.so.2
> #13 0x003b83207851 in start_thread () from /lib64/libpthread.so.0
> #14 0x003b82ee894d in clone () from /lib64/libc.so.6



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9268) fuse_dfs chown crashes when uid is passed as -1

2015-10-20 Thread Colin Patrick McCabe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9268?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin Patrick McCabe updated HDFS-9268:
---
Attachment: HDFS-9268.002.patch

> fuse_dfs chown crashes when uid is passed as -1
> ---
>
> Key: HDFS-9268
> URL: https://issues.apache.org/jira/browse/HDFS-9268
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
>Priority: Minor
> Attachments: HDFS-9268.001.patch, HDFS-9268.002.patch
>
>
> JVM crashes when users attempt to use vi to update a file on a fuse file 
> system with insufficient permission. (I use CDH's hadoop-fuse-dfs wrapper 
> script to generate the bug, but the same bug is reproducible in trunk.)
> The root cause is a segfault in a dfs-fuse method.
> To reproduce it, do as follows:
> mkdir /mnt/fuse
> chmod 777 /mnt/fuse
> ulimit -c unlimited  # to enable coredump
> hadoop-fuse-dfs -odebug hdfs://localhost:9000/fuse /mnt/fuse
> touch /mnt/fuse/y
> chmod 600 /mnt/fuse/y
> vim /mnt/fuse/y
> (in vim, :w to save the file)
> #
> # A fatal error has been detected by the Java Runtime Environment:
> #
> #  SIGSEGV (0xb) at pc=0x003b82f27ad6, pid=26606, tid=140079005689600
> #
> # JRE version: Java(TM) SE Runtime Environment (7.0_79-b15) (build 
> 1.7.0_79-b15)
> # Java VM: Java HotSpot(TM) 64-Bit Server VM (24.79-b02 mixed mode 
> linux-amd64 compressed oops)
> # Problematic frame:
> # C  [libc.so.6+0x127ad6]  __tls_get_addr@@GLIBC_2.3+0x127ad6
> #
> # Core dump written. Default location: /home/weichiu/core or core.26606
> #
> # An error report file with more information is saved as:
> # /home/weichiu/hs_err_pid26606.log
> #
> # If you would like to submit a bug report, please visit:
> #   http://bugreport.java.com/bugreport/crash.jsp
> # The crash happened outside the Java Virtual Machine in native code.
> # See problematic frame for where to report the bug.
> #
> /usr/bin/hadoop-fuse-dfs: line 29: 26606 Aborted (core 
> dumped) env CLASSPATH="${CLASSPATH}" ${HADOOP_HOME}/bin/fuse_dfs $@
> ===
> The coredump shows the segfault comes from 
> (gdb) bt
> #0  0x003b82e328e5 in raise () from /lib64/libc.so.6
> #1  0x003b82e340c5 in abort () from /lib64/libc.so.6
> #2  0x7f66fc924d75 in os::abort(bool) () from 
> /etc/alternatives/jre/jre/lib/amd64/server/libjvm.so
> #3  0x7f66fcaa76d7 in VMError::report_and_die() () from 
> /etc/alternatives/jre/jre/lib/amd64/server/libjvm.so
> #4  0x7f66fc929c8f in JVM_handle_linux_signal () from 
> /etc/alternatives/jre/jre/lib/amd64/server/libjvm.so
> #5  
> #6  0x003b82f27ad6 in __strcmp_sse42 () from /lib64/libc.so.6
> #7  0x004039a0 in hdfsConnTree_RB_FIND ()
> #8  0x00403e8f in fuseConnect ()
> #9  0x004046db in dfs_chown ()
> #10 0x7f66fcf8f6d2 in ?? () from /lib64/libfuse.so.2
> #11 0x7f66fcf940d1 in ?? () from /lib64/libfuse.so.2
> #12 0x7f66fcf910ef in ?? () from /lib64/libfuse.so.2
> #13 0x003b83207851 in start_thread () from /lib64/libpthread.so.0
> #14 0x003b82ee894d in clone () from /lib64/libc.so.6



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9251) Refactor TestWriteToReplica and TestFsDatasetImpl to avoid explicitly creating Files in tests code.

2015-10-20 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9251?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14965629#comment-14965629
 ] 

Hudson commented on HDFS-9251:
--

SUCCESS: Integrated in Hadoop-Yarn-trunk #1293 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/1293/])
HDFS-9251. Refactor TestWriteToReplica and TestFsDatasetImpl to avoid (lei: rev 
71e533a153cbe547c99d2bc18c4cd8b7da9b00b7)
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/TestWriteToReplica.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/FsDatasetTestUtils.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/TestFsDatasetImpl.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsDatasetImplTestUtils.java


> Refactor TestWriteToReplica and TestFsDatasetImpl to avoid explicitly 
> creating Files in tests code.
> ---
>
> Key: HDFS-9251
> URL: https://issues.apache.org/jira/browse/HDFS-9251
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: HDFS
>Affects Versions: 2.7.1
>Reporter: Lei (Eddy) Xu
>Assignee: Lei (Eddy) Xu
> Fix For: 3.0.0, 2.8.0
>
> Attachments: HDFS-9251.00.patch, HDFS-9251.01.patch, 
> HDFS-9251.02.patch
>
>
> In {{TestWriteToReplica}} and {{TestFsDatasetImpl}}, the tests directly create 
> block and metadata files:
> {code}
> replicaInfo.getBlockFile().createNewFile();
> replicaInfo.getMetaFile().createNewFile();
> {code}
> It leaks the implementation details of {{FsDatasetImpl}}. This JIRA proposes 
> to use {{FsDatasetImplTestUtils}} (HDFS-9188) to create replicas. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9268) fuse_dfs chown crashes when uid is passed as -1

2015-10-20 Thread Colin Patrick McCabe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9268?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin Patrick McCabe updated HDFS-9268:
---
Summary: fuse_dfs chown crashes when uid is passed as -1  (was: JVM crashes 
when attempting to update a file in fuse file system using vim)

> fuse_dfs chown crashes when uid is passed as -1
> ---
>
> Key: HDFS-9268
> URL: https://issues.apache.org/jira/browse/HDFS-9268
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
>Priority: Minor
> Attachments: HDFS-9268.001.patch
>
>
> JVM crashes when users attempt to use vi to update a file on a fuse file 
> system with insufficient permission. (I use CDH's hadoop-fuse-dfs wrapper 
> script to generate the bug, but the same bug is reproducible in trunk.)
> The root cause is a segfault in a dfs-fuse method.
> To reproduce it, do as follows:
> mkdir /mnt/fuse
> chmod 777 /mnt/fuse
> ulimit -c unlimited  # to enable coredump
> hadoop-fuse-dfs -odebug hdfs://localhost:9000/fuse /mnt/fuse
> touch /mnt/fuse/y
> chmod 600 /mnt/fuse/y
> vim /mnt/fuse/y
> (in vim, :w to save the file)
> #
> # A fatal error has been detected by the Java Runtime Environment:
> #
> #  SIGSEGV (0xb) at pc=0x003b82f27ad6, pid=26606, tid=140079005689600
> #
> # JRE version: Java(TM) SE Runtime Environment (7.0_79-b15) (build 
> 1.7.0_79-b15)
> # Java VM: Java HotSpot(TM) 64-Bit Server VM (24.79-b02 mixed mode 
> linux-amd64 compressed oops)
> # Problematic frame:
> # C  [libc.so.6+0x127ad6]  __tls_get_addr@@GLIBC_2.3+0x127ad6
> #
> # Core dump written. Default location: /home/weichiu/core or core.26606
> #
> # An error report file with more information is saved as:
> # /home/weichiu/hs_err_pid26606.log
> #
> # If you would like to submit a bug report, please visit:
> #   http://bugreport.java.com/bugreport/crash.jsp
> # The crash happened outside the Java Virtual Machine in native code.
> # See problematic frame for where to report the bug.
> #
> /usr/bin/hadoop-fuse-dfs: line 29: 26606 Aborted (core 
> dumped) env CLASSPATH="${CLASSPATH}" ${HADOOP_HOME}/bin/fuse_dfs $@
> ===
> The coredump shows the segfault comes from 
> (gdb) bt
> #0  0x003b82e328e5 in raise () from /lib64/libc.so.6
> #1  0x003b82e340c5 in abort () from /lib64/libc.so.6
> #2  0x7f66fc924d75 in os::abort(bool) () from 
> /etc/alternatives/jre/jre/lib/amd64/server/libjvm.so
> #3  0x7f66fcaa76d7 in VMError::report_and_die() () from 
> /etc/alternatives/jre/jre/lib/amd64/server/libjvm.so
> #4  0x7f66fc929c8f in JVM_handle_linux_signal () from 
> /etc/alternatives/jre/jre/lib/amd64/server/libjvm.so
> #5  
> #6  0x003b82f27ad6 in __strcmp_sse42 () from /lib64/libc.so.6
> #7  0x004039a0 in hdfsConnTree_RB_FIND ()
> #8  0x00403e8f in fuseConnect ()
> #9  0x004046db in dfs_chown ()
> #10 0x7f66fcf8f6d2 in ?? () from /lib64/libfuse.so.2
> #11 0x7f66fcf940d1 in ?? () from /lib64/libfuse.so.2
> #12 0x7f66fcf910ef in ?? () from /lib64/libfuse.so.2
> #13 0x003b83207851 in start_thread () from /lib64/libpthread.so.0
> #14 0x003b82ee894d in clone () from /lib64/libc.so.6



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9270) TestShortCircuitLocalRead should not leave socket after unit test

2015-10-20 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9270?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14965628#comment-14965628
 ] 

Hudson commented on HDFS-9270:
--

SUCCESS: Integrated in Hadoop-Yarn-trunk #1293 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/1293/])
HDFS-9270. TestShortCircuitLocalRead should not leave socket after unit 
(cmccabe: rev 6381ddc096699d680233db3b9efff9321528eedc)
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/shortcircuit/TestShortCircuitLocalRead.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/tracing/TestTracingShortCircuitLocalRead.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


> TestShortCircuitLocalRead should not leave socket after unit test
> -
>
> Key: HDFS-9270
> URL: https://issues.apache.org/jira/browse/HDFS-9270
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Affects Versions: 2.7.1
>Reporter: Masatake Iwasaki
>Assignee: Masatake Iwasaki
>Priority: Minor
> Fix For: 2.8.0
>
> Attachments: HDFS-9270.001.patch
>
>
> Unix domain sockets created by TestShortCircuitLocalRead and 
> TestTracingShortCircuitLocalRead are not removed before finishing the tests.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9241) HDFS clients can't construct HdfsConfiguration instances

2015-10-20 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9241?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14965593#comment-14965593
 ] 

Steve Loughran commented on HDFS-9241:
--

Netty? Client side? I thought changes in ZK's dependencies there were what was 
breaking hadoop-hdfs builds on bigtop (HADOOP-12415). hdfs-client shouldn't need 
netty. There's no jersey or any of its bits, and you could probably cull 
curator.

> HDFS clients can't construct HdfsConfiguration instances
> 
>
> Key: HDFS-9241
> URL: https://issues.apache.org/jira/browse/HDFS-9241
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs-client
>Reporter: Steve Loughran
>Assignee: Mingliang Liu
> Attachments: HDFS-9241.000.patch
>
>
> the changes for the hdfs client classpath make instantiating 
> {{HdfsConfiguration}} from the client impossible; it only lives server side. 
> This breaks any app which creates one.
> I know people will look at the {{@Private}} tag and say "don't do that then", 
> but it's worth considering precisely why I, at least, do this: it's the only 
> way to guarantee that the hdfs-default and hdfs-site resources get on the 
> classpath, including all the security settings. It's precisely the use case 
> which {{HdfsConfigurationLoader.init();}} offers internally to the hdfs code.
> What am I meant to do now? 
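> The usage pattern at stake, as a minimal sketch:
> {code}
> // Constructing HdfsConfiguration registers hdfs-default.xml and
> // hdfs-site.xml as default resources, which plain Configuration does
> // not guarantee for HDFS settings.
> Configuration conf = new HdfsConfiguration();
> String nn = conf.get("dfs.namenode.rpc-address");  // resolved from hdfs-site.xml if set
> {code}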



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9117) Config file reader / options classes for libhdfs++

2015-10-20 Thread Bob Hansen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9117?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bob Hansen updated HDFS-9117:
-
Attachment: HDFS-9117.HDFS-8707.003.patch

> Config file reader / options classes for libhdfs++
> --
>
> Key: HDFS-9117
> URL: https://issues.apache.org/jira/browse/HDFS-9117
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Affects Versions: HDFS-8707
>Reporter: Bob Hansen
>Assignee: Bob Hansen
> Attachments: HDFS-9117.HDFS-8707.001.patch, 
> HDFS-9117.HDFS-8707.002.patch, HDFS-9117.HDFS-8707.003.patch
>
>
> For environmental compatibility with HDFS installations, libhdfs++ should be 
> able to read its configuration from Hadoop XML files and behave in line with 
> the Java implementation.
> Most notably, machine names and ports should be readable from Hadoop XML 
> configuration files.
> Similarly, an internal Options architecture for libhdfs++ should be developed 
> to efficiently transport the configuration information within the system.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9241) HDFS clients can't construct HdfsConfiguration instances

2015-10-20 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9241?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14965550#comment-14965550
 ] 

Colin Patrick McCabe commented on HDFS-9241:


Protobuf is used in RPCv9.  The client must speak that.  Guava is used 
extensively in the client as well.  You will need commons-logging, and all the 
other logging stuff too.  commons-codec and commons-io will be needed to 
decompress data.  Those are just the ones I can think of off the top of my head.

> HDFS clients can't construct HdfsConfiguration instances
> 
>
> Key: HDFS-9241
> URL: https://issues.apache.org/jira/browse/HDFS-9241
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs-client
>Reporter: Steve Loughran
>Assignee: Mingliang Liu
> Attachments: HDFS-9241.000.patch
>
>
> the changes for the hdfs client classpath make instantiating 
> {{HdfsConfiguration}} from the client impossible; it only lives server side. 
> This breaks any app which creates one.
> I know people will look at the {{@Private}} tag and say "don't do that then", 
> but it's worth considering precisely why I, at least, do this: it's the only 
> way to guarantee that the hdfs-default and hdfs-site resources get on the 
> classpath, including all the security settings. It's precisely the use case 
> which {{HdfsConfigurationLoader.init();}} offers internally to the hdfs code.
> What am I meant to do now? 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9226) MiniDFSCluster leaks dependency Mockito via DataNodeTestUtils

2015-10-20 Thread Josh Elser (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9226?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14965545#comment-14965545
 ] 

Josh Elser commented on HDFS-9226:
--

bq. I must confess I don't understand how HDFS-8953 introduced this. 
DataNodeTestUtils already had a prior dependency on Mockito.

I'd have to go back and look at the commit, but I think it was just 
MiniDFSCluster calling a method in DataNodeTestUtils which it didn't call 
previously. Once that import was added, DataNodeTestUtils got loaded due to 
MiniDFSCluster's import, and then failed because of its own Mockito import. I'd 
have to double-check to be certain :)
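
A sketch of that failure mode (class and method names here are illustrative, 
not the real ones):

{code}
// Merely linking a call into a class whose real counterpart imports
// Mockito forces the JVM to load and verify that class; without Mockito
// on the classpath this surfaces as NoClassDefFoundError at the caller.
class UtilsWithMockito {                    // imports org.mockito.* in real code
  static boolean isStarted(Object dn) { return dn != null; }
}
class ClusterSketch {
  boolean shouldWait(Object dn) {
    return UtilsWithMockito.isStarted(dn);  // loading this class can fail
  }
}
{code}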

> MiniDFSCluster leaks dependency Mockito via DataNodeTestUtils
> -
>
> Key: HDFS-9226
> URL: https://issues.apache.org/jira/browse/HDFS-9226
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: HDFS, test
>Reporter: Josh Elser
>Assignee: Josh Elser
> Attachments: HDFS-9226.001.patch, HDFS-9226.002.patch, 
> HDFS-9226.003.patch, HDFS-9226.004.patch, HDFS-9226.005.patch
>
>
> Noticed a test failure when attempting to run Accumulo unit tests against 
> 2.8.0-SNAPSHOT:
> {noformat}
> java.lang.NoClassDefFoundError: org/mockito/stubbing/Answer
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.shouldWait(MiniDFSCluster.java:2421)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.waitActive(MiniDFSCluster.java:2323)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.waitActive(MiniDFSCluster.java:2367)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.startDataNodes(MiniDFSCluster.java:1529)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:841)
>   at org.apache.hadoop.hdfs.MiniDFSCluster.(MiniDFSCluster.java:479)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:438)
>   at 
> org.apache.accumulo.start.test.AccumuloDFSBase.miniDfsClusterSetup(AccumuloDFSBase.java:67)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:283)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:173)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:153)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:128)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:203)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:155)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:103)
> Caused by: java.lang.ClassNotFoundException: org.mockito.stubbing.Answer
>   at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
>   at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
>   at java.lang.ClassLoader.loadClass(ClassLoader.java:425)
>   at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308)
>   at java.lang.ClassLoader.loadClass(ClassLoader.java:358)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.shouldWait(MiniDFSCluster.java:2421)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.waitActive(MiniDFSCluster.java:2323)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.waitActive(MiniDFSCluster.java:2367)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.startDataNodes(MiniDFSCluster.java:1529)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:841)
>   at org.apache.hadoop.hdfs.MiniDFSCluster.(MiniDFSCluster.java:479)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:438)
>   at 
> org.apache.accumulo.start.test.AccumuloDFSBase.miniDfsClusterSetup(AccumuloDFSBase.java:67)
>   at sun

[jira] [Commented] (HDFS-9198) Coalesce IBR processing in the NN

2015-10-20 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9198?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14965531#comment-14965531
 ] 

Hadoop QA commented on HDFS-9198:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:red}-1{color} | pre-patch |  20m 31s | Pre-patch trunk has 1 extant 
Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:green}+1{color} | tests included |   0m  0s | The patch appears to 
include 6 new or modified test files. |
| {color:green}+1{color} | javac |   9m 31s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |  11m 25s | There were no new javadoc 
warning messages. |
| {color:green}+1{color} | release audit |   0m 25s | The applied patch does 
not increase the total number of release audit warnings. |
| {color:red}-1{color} | checkstyle |   1m 33s | The applied patch generated  6 
new checkstyle issues (total was 414, now 415). |
| {color:green}+1{color} | whitespace |   0m  8s | The patch has no lines that 
end in whitespace. |
| {color:green}+1{color} | install |   1m 36s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 34s | The patch built with 
eclipse:eclipse. |
| {color:green}+1{color} | findbugs |   2m 41s | The patch does not introduce 
any new Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | native |   3m 34s | Pre-build of native portion |
| {color:red}-1{color} | hdfs tests |  66m 27s | Tests failed in hadoop-hdfs. |
| | | 118m 30s | |
\\
\\
|| Reason || Tests ||
| Failed unit tests | hadoop.fs.loadGenerator.TestLoadGenerator |
|   | hadoop.fs.TestFcHdfsCreateMkdir |
|   | hadoop.hdfs.server.namenode.ha.TestDNFencing |
|   | hadoop.hdfs.TestRecoverStripedFile |
|   | hadoop.fs.TestSWebHdfsFileContextMainOperations |
|   | hadoop.fs.TestUrlStreamHandler |
|   | hadoop.fs.viewfs.TestViewFileSystemAtHdfsRoot |
|   | hadoop.fs.viewfs.TestViewFsAtHdfsRoot |
|   | hadoop.fs.permission.TestStickyBit |
|   | hadoop.hdfs.web.TestWebHDFS |
| Timed out tests | org.apache.hadoop.fs.TestEnhancedByteBufferAccess |
|   | org.apache.hadoop.fs.TestUrlStreamHandlerFactory |
|   | org.apache.hadoop.fs.TestHDFSFileContextMainOperations |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12767620/HDFS-9198-trunk.patch |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / 9cb5d35 |
| Pre-patch Findbugs warnings | 
https://builds.apache.org/job/PreCommit-HDFS-Build/13083/artifact/patchprocess/trunkFindbugsWarningshadoop-hdfs.html
 |
| checkstyle |  
https://builds.apache.org/job/PreCommit-HDFS-Build/13083/artifact/patchprocess/diffcheckstylehadoop-hdfs.txt
 |
| hadoop-hdfs test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/13083/artifact/patchprocess/testrun_hadoop-hdfs.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/13083/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf909.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/13083/console |


This message was automatically generated.

> Coalesce IBR processing in the NN
> -
>
> Key: HDFS-9198
> URL: https://issues.apache.org/jira/browse/HDFS-9198
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Affects Versions: 2.0.0-alpha
>Reporter: Daryn Sharp
>Assignee: Daryn Sharp
> Attachments: HDFS-9198-branch2.patch, HDFS-9198-trunk.patch, 
> HDFS-9198-trunk.patch, HDFS-9198-trunk.patch
>
>
> IBRs from thousands of DNs under load will degrade NN performance due to 
> excessive write-lock contention from multiple IPC handler threads.  The IBR 
> processing is quick, so the lock contention may be reduced by coalescing 
> multiple IBRs into a single write-lock transaction.  The handlers will also 
> be freed up faster for other operations.
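> A hypothetical sketch of the coalescing (a fragment; the queue field, method 
> names, and single-argument processing call are assumptions, not the patch's 
> actual classes):
> {code}
> // Handlers enqueue reports cheaply; one pass drains the queue under a
> // single write lock instead of taking the lock once per IBR.
> private final Queue<StorageReceivedDeletedBlocks> pending =
>     new ConcurrentLinkedQueue<>();
>
> void drainQueuedReports() {
>   namesystem.writeLock();
>   try {
>     StorageReceivedDeletedBlocks ibr;
>     while ((ibr = pending.poll()) != null) {
>       processIncrementalBlockReport(ibr);  // the quick per-report work
>     }
>   } finally {
>     namesystem.writeUnlock();
>   }
> }
> {code}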



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9270) TestShortCircuitLocalRead should not leave socket after unit test

2015-10-20 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9270?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14965511#comment-14965511
 ] 

Hudson commented on HDFS-9270:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk-Java8 #558 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Java8/558/])
HDFS-9270. TestShortCircuitLocalRead should not leave socket after unit 
(cmccabe: rev 6381ddc096699d680233db3b9efff9321528eedc)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/shortcircuit/TestShortCircuitLocalRead.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/tracing/TestTracingShortCircuitLocalRead.java


> TestShortCircuitLocalRead should not leave socket after unit test
> -
>
> Key: HDFS-9270
> URL: https://issues.apache.org/jira/browse/HDFS-9270
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Affects Versions: 2.7.1
>Reporter: Masatake Iwasaki
>Assignee: Masatake Iwasaki
>Priority: Minor
> Fix For: 2.8.0
>
> Attachments: HDFS-9270.001.patch
>
>
> Unix domain sockets created by TestShortCircuitLocalRead and 
> TestTracingShortCircuitLocalRead are not removed before finishing the tests.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9251) Refactor TestWriteToReplica and TestFsDatasetImpl to avoid explicitly creating Files in tests code.

2015-10-20 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9251?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14965507#comment-14965507
 ] 

Hudson commented on HDFS-9251:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk #2505 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/2505/])
HDFS-9251. Refactor TestWriteToReplica and TestFsDatasetImpl to avoid (lei: rev 
71e533a153cbe547c99d2bc18c4cd8b7da9b00b7)
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/TestFsDatasetImpl.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsDatasetImplTestUtils.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/FsDatasetTestUtils.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/TestWriteToReplica.java


> Refactor TestWriteToReplica and TestFsDatasetImpl to avoid explicitly 
> creating Files in tests code.
> ---
>
> Key: HDFS-9251
> URL: https://issues.apache.org/jira/browse/HDFS-9251
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: HDFS
>Affects Versions: 2.7.1
>Reporter: Lei (Eddy) Xu
>Assignee: Lei (Eddy) Xu
> Fix For: 3.0.0, 2.8.0
>
> Attachments: HDFS-9251.00.patch, HDFS-9251.01.patch, 
> HDFS-9251.02.patch
>
>
> In {{TestWriteToReplica}} and {{TestFsDatasetImpl}}, the tests directly create 
> block and metadata files:
> {code}
> replicaInfo.getBlockFile().createNewFile();
> replicaInfo.getMetaFile().createNewFile();
> {code}
> It leaks the implementation details of {{FsDatasetImpl}}. This JIRA proposes 
> to use {{FsDatasetImplTestUtils}} (HDFS-9188) to create replicas. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9270) TestShortCircuitLocalRead should not leave socket after unit test

2015-10-20 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9270?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14965501#comment-14965501
 ] 

Hudson commented on HDFS-9270:
--

FAILURE: Integrated in Hadoop-trunk-Commit #8671 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/8671/])
HDFS-9270. TestShortCircuitLocalRead should not leave socket after unit 
(cmccabe: rev 6381ddc096699d680233db3b9efff9321528eedc)
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/tracing/TestTracingShortCircuitLocalRead.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/shortcircuit/TestShortCircuitLocalRead.java


> TestShortCircuitLocalRead should not leave socket after unit test
> -
>
> Key: HDFS-9270
> URL: https://issues.apache.org/jira/browse/HDFS-9270
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Affects Versions: 2.7.1
>Reporter: Masatake Iwasaki
>Assignee: Masatake Iwasaki
>Priority: Minor
> Fix For: 2.8.0
>
> Attachments: HDFS-9270.001.patch
>
>
> Unix domain sockets created by TestShortCircuitLocalRead and 
> TestTracingShortCircuitLocalRead are not removed before finishing the tests.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9226) MiniDFSCluster leaks dependency Mockito via DataNodeTestUtils

2015-10-20 Thread Arpit Agarwal (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9226?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14965495#comment-14965495
 ] 

Arpit Agarwal commented on HDFS-9226:
-

Hi [~elserj], thanks for taking this up. I must confess I don't understand how 
HDFS-8953 introduced this. DataNodeTestUtils already had a prior dependency on 
Mockito.

> MiniDFSCluster leaks dependency Mockito via DataNodeTestUtils
> -
>
> Key: HDFS-9226
> URL: https://issues.apache.org/jira/browse/HDFS-9226
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: HDFS, test
>Reporter: Josh Elser
>Assignee: Josh Elser
> Attachments: HDFS-9226.001.patch, HDFS-9226.002.patch, 
> HDFS-9226.003.patch, HDFS-9226.004.patch, HDFS-9226.005.patch
>
>
> Noticed a test failure when attempting to run Accumulo unit tests against 
> 2.8.0-SNAPSHOT:
> {noformat}
> java.lang.NoClassDefFoundError: org/mockito/stubbing/Answer
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.shouldWait(MiniDFSCluster.java:2421)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.waitActive(MiniDFSCluster.java:2323)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.waitActive(MiniDFSCluster.java:2367)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.startDataNodes(MiniDFSCluster.java:1529)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:841)
>   at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:479)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:438)
>   at 
> org.apache.accumulo.start.test.AccumuloDFSBase.miniDfsClusterSetup(AccumuloDFSBase.java:67)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:283)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:173)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:153)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:128)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:203)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:155)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:103)
> Caused by: java.lang.ClassNotFoundException: org.mockito.stubbing.Answer
>   at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
>   at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
>   at java.lang.ClassLoader.loadClass(ClassLoader.java:425)
>   at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308)
>   at java.lang.ClassLoader.loadClass(ClassLoader.java:358)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.shouldWait(MiniDFSCluster.java:2421)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.waitActive(MiniDFSCluster.java:2323)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.waitActive(MiniDFSCluster.java:2367)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.startDataNodes(MiniDFSCluster.java:1529)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:841)
>   at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:479)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:438)
>   at 
> org.apache.accumulo.start.test.AccumuloDFSBase.miniDfsClusterSetup(AccumuloDFSBase.java:67)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.inv
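
A self-contained illustration of how this class of leak bites downstream 
users (all names here are hypothetical, not the Hadoop code itself):

{code}
public class LeakDemo {
  static class TestUtils {
    static Object makeAnswer() {
      // Resolving this reference forces org.mockito onto the runtime
      // classpath, even though the caller never asked for mocking.
      return (org.mockito.stubbing.Answer<Object>) invocation -> null;
    }
  }

  public static void main(String[] args) {
    // Without Mockito on the classpath, this throws
    // NoClassDefFoundError: org/mockito/stubbing/Answer -- the same
    // failure mode Accumulo hit via MiniDFSCluster.shouldWait().
    TestUtils.makeAnswer();
  }
}
{code}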

[jira] [Commented] (HDFS-9144) Refactor libhdfs into stateful/ephemeral objects

2015-10-20 Thread James Clampffer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9144?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14965491#comment-14965491
 ] 

James Clampffer commented on HDFS-9144:
---

This looks like a solid plan to me.

> Refactor libhdfs into stateful/ephemeral objects
> 
>
> Key: HDFS-9144
> URL: https://issues.apache.org/jira/browse/HDFS-9144
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Affects Versions: HDFS-8707
>Reporter: Bob Hansen
>Assignee: Bob Hansen
>
> In discussion for other efforts, we decided that we should separate several 
> concerns:
> * A posix-like FileSystem/FileHandle object (stream-based, positional reads)
> * An ephemeral ReadOperation object that holds the state for 
> reads-in-progress, which consumes
> * An immutable FileInfo object which holds the block map and file size (and 
> other metadata about the file that we assume will not change over the life of 
> the file)
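
A rough sketch of that decomposition, in Java for consistency with the other 
snippets in this digest (the real work targets the C++ client, and every name 
here is illustrative):

{code}
import java.io.IOException;
import java.util.Collections;
import java.util.List;

// Long-lived, posix-like surface: streams and positional reads.
interface FileSystem {
  FileHandle open(String path) throws IOException;
}
interface FileHandle {
  int pread(byte[] buf, long offset) throws IOException;
}

// Immutable: block map and file size, assumed stable for the file's life.
final class FileInfo {
  final long length;
  final List<String> blockLocations;  // stand-in for a real block map
  FileInfo(long length, List<String> blockLocations) {
    this.length = length;
    this.blockLocations = Collections.unmodifiableList(blockLocations);
  }
}

// Ephemeral: state for one read-in-progress; consumes a FileInfo and is
// discarded when the read completes.
final class ReadOperation {
  final FileInfo file;
  long position;  // plus retry state, current datanode, etc.
  ReadOperation(FileInfo file, long position) {
    this.file = file;
    this.position = position;
  }
}
{code}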



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9270) TestShortCircuitLocalRead should not leave socket after unit test

2015-10-20 Thread Colin Patrick McCabe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9270?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin Patrick McCabe updated HDFS-9270:
---
      Resolution: Fixed
   Fix Version/s: 2.8.0
Target Version/s: 2.8.0
          Status: Resolved  (was: Patch Available)

> TestShortCircuitLocalRead should not leave socket after unit test
> -
>
> Key: HDFS-9270
> URL: https://issues.apache.org/jira/browse/HDFS-9270
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Affects Versions: 2.7.1
>Reporter: Masatake Iwasaki
>Assignee: Masatake Iwasaki
>Priority: Minor
> Fix For: 2.8.0
>
> Attachments: HDFS-9270.001.patch
>
>
> Unix domain sockets created by TestShortCircuitLocalRead and 
> TestTracingShortCircuitLocalRead are not removed before finishing the tests.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (HDFS-9077) webhdfs client requires SPNEGO to do renew

2015-10-20 Thread HeeSoo Kim (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9077?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

HeeSoo Kim reassigned HDFS-9077:


Assignee: HeeSoo Kim  (was: Heesoo Kim)

> webhdfs client requires SPNEGO to do renew
> --
>
> Key: HDFS-9077
> URL: https://issues.apache.org/jira/browse/HDFS-9077
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.0.0
>Reporter: Allen Wittenauer
>Assignee: HeeSoo Kim
> Attachments: HDFS-9077.001.patch, HDFS-9077.patch
>
>
> Simple bug.
> webhdfs (the file system) doesn't pass delegation= in its REST call to renew 
> the same token.  This forces a SPNEGO (or other auth) instead of just 
> renewing.
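
For illustration, the renew call with and without the credential might look 
as follows (the op and parameter names follow the WebHDFS REST API; treat the 
exact URLs as a sketch, not the patch itself):

{noformat}
# Today: no credential in the URL, so the server falls back to SPNEGO
# (or another auth handshake) before it will renew.
PUT /webhdfs/v1/?op=RENEWDELEGATIONTOKEN&token=<token>

# What this issue asks for: pass the token as delegation= as well, so the
# renewal is authenticated without a fresh SPNEGO round trip.
PUT /webhdfs/v1/?op=RENEWDELEGATIONTOKEN&token=<token>&delegation=<token>
{noformat}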



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9077) webhdfs client requires SPNEGO to do renew

2015-10-20 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9077?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HDFS-9077:
---
Status: Patch Available  (was: Open)

> webhdfs client requires SPNEGO to do renew
> --
>
> Key: HDFS-9077
> URL: https://issues.apache.org/jira/browse/HDFS-9077
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.0.0
>Reporter: Allen Wittenauer
>Assignee: Allen Wittenauer
> Attachments: HDFS-9077.001.patch, HDFS-9077.patch
>
>
> Simple bug.
> webhdfs (the file system) doesn't pass delegation= in its REST call to renew 
> the same token.  This forces a SPNEGO (or other auth) instead of just 
> renewing.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (HDFS-9077) webhdfs client requires SPNEGO to do renew

2015-10-20 Thread Heesoo Kim (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9077?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Heesoo Kim reassigned HDFS-9077:


Assignee: Heesoo Kim

> webhdfs client requires SPNEGO to do renew
> --
>
> Key: HDFS-9077
> URL: https://issues.apache.org/jira/browse/HDFS-9077
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.0.0
>Reporter: Allen Wittenauer
>Assignee: Heesoo Kim
> Attachments: HDFS-9077.001.patch, HDFS-9077.patch
>
>
> Simple bug.
> webhdfs (the file system) doesn't pass delegation= in its REST call to renew 
> the same token.  This forces a SPNEGO (or other auth) instead of just 
> renewing.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9077) webhdfs client requires SPNEGO to do renew

2015-10-20 Thread Heesoo Kim (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9077?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Heesoo Kim updated HDFS-9077:
-
Assignee: (was: HeeSoo Kim)

> webhdfs client requires SPNEGO to do renew
> --
>
> Key: HDFS-9077
> URL: https://issues.apache.org/jira/browse/HDFS-9077
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.0.0
>Reporter: Allen Wittenauer
> Attachments: HDFS-9077.001.patch, HDFS-9077.patch
>
>
> Simple bug.
> webhdfs (the file system) doesn't pass delegation= in its REST call to renew 
> the same token.  This forces a SPNEGO (or other auth) instead of just 
> renewing.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (HDFS-9077) webhdfs client requires SPNEGO to do renew

2015-10-20 Thread Heesoo Kim (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9077?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Heesoo Kim reassigned HDFS-9077:


Assignee: HeeSoo Kim  (was: Heesoo Kim)

> webhdfs client requires SPNEGO to do renew
> --
>
> Key: HDFS-9077
> URL: https://issues.apache.org/jira/browse/HDFS-9077
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.0.0
>Reporter: Allen Wittenauer
>Assignee: HeeSoo Kim
> Attachments: HDFS-9077.001.patch, HDFS-9077.patch
>
>
> Simple bug.
> webhdfs (the file system) doesn't pass delegation= in its REST call to renew 
> the same token.  This forces a SPNEGO (or other auth) instead of just 
> renewing.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Work stopped] (HDFS-9077) webhdfs client requires SPNEGO to do renew

2015-10-20 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9077?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HDFS-9077 stopped by Allen Wittenauer.
--
> webhdfs client requires SPNEGO to do renew
> --
>
> Key: HDFS-9077
> URL: https://issues.apache.org/jira/browse/HDFS-9077
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.0.0
>Reporter: Allen Wittenauer
>Assignee: Allen Wittenauer
> Attachments: HDFS-9077.001.patch, HDFS-9077.patch
>
>
> Simple bug.
> webhdfs (the file system) doesn't pass delegation= in its REST call to renew 
> the same token.  This forces a SPNEGO (or other auth) instead of just 
> renewing.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (HDFS-9077) webhdfs client requires SPNEGO to do renew

2015-10-20 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9077?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer reassigned HDFS-9077:
--

Assignee: Allen Wittenauer  (was: HeeSoo Kim)

> webhdfs client requires SPNEGO to do renew
> --
>
> Key: HDFS-9077
> URL: https://issues.apache.org/jira/browse/HDFS-9077
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.0.0
>Reporter: Allen Wittenauer
>Assignee: Allen Wittenauer
> Attachments: HDFS-9077.001.patch, HDFS-9077.patch
>
>
> Simple bug.
> webhdfs (the file system) doesn't pass delegation= in its REST call to renew 
> the same token.  This forces a SPNEGO (or other auth) instead of just 
> renewing.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9251) Refactor TestWriteToReplica and TestFsDatasetImpl to avoid explicitly creating Files in tests code.

2015-10-20 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9251?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14965467#comment-14965467
 ] 

Hudson commented on HDFS-9251:
--

FAILURE: Integrated in Hadoop-Yarn-trunk-Java8 #572 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk-Java8/572/])
HDFS-9251. Refactor TestWriteToReplica and TestFsDatasetImpl to avoid (lei: rev 
71e533a153cbe547c99d2bc18c4cd8b7da9b00b7)
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/FsDatasetTestUtils.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/TestWriteToReplica.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsDatasetImplTestUtils.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/TestFsDatasetImpl.java


> Refactor TestWriteToReplica and TestFsDatasetImpl to avoid explicitly 
> creating Files in tests code.
> ---
>
> Key: HDFS-9251
> URL: https://issues.apache.org/jira/browse/HDFS-9251
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: HDFS
>Affects Versions: 2.7.1
>Reporter: Lei (Eddy) Xu
>Assignee: Lei (Eddy) Xu
> Fix For: 3.0.0, 2.8.0
>
> Attachments: HDFS-9251.00.patch, HDFS-9251.01.patch, 
> HDFS-9251.02.patch
>
>
> In {{TestWriteToReplica}} and {{TestFsDatasetImpl}}, the tests directly create 
> block and metadata files:
> {code}
> replicaInfo.getBlockFile().createNewFile();
> replicaInfo.getMetaFile().createNewFile();
> {code}
> It leaks the implementation details of {{FsDatasetImpl}}. This JIRA proposes 
> to use {{FsDatasetImplTestUtils}} (HDFS-9188) to create replicas. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9077) webhdfs client requires SPNEGO to do renew

2015-10-20 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9077?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HDFS-9077:
---
Assignee: Heesoo Kim  (was: Allen Wittenauer)

> webhdfs client requires SPNEGO to do renew
> --
>
> Key: HDFS-9077
> URL: https://issues.apache.org/jira/browse/HDFS-9077
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.0.0
>Reporter: Allen Wittenauer
>Assignee: Heesoo Kim
> Attachments: HDFS-9077.001.patch, HDFS-9077.patch
>
>
> Simple bug.
> webhdfs (the file system) doesn't pass delegation= in its REST call to renew 
> the same token.  This forces a SPNEGO (or other auth) instead of just 
> renewing.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9268) JVM crashes when attempting to update a file in fuse file system using vim

2015-10-20 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9268?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14965466#comment-14965466
 ] 

Colin Patrick McCabe commented on HDFS-9268:


Hi [~jojochuang], good job debugging this.  However, I think you may have 
misinterpreted the man page.  Your change makes it so that nothing is changed 
if either uid or gid is -1.  But in fact, only the ID which is -1 should be 
left unchanged.
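
A tiny sketch of the semantics being described, in Java for consistency with 
the other snippets here (the actual fix lands in the C code of fuse_dfs):

{code}
class ChownSemantics {
  // chown(2): a uid or gid of -1 means "leave that ID unchanged".
  // The two IDs must be handled independently; skipping the whole call
  // when either one is -1 would drop a legitimate change to the other.
  static long effectiveUid(long requestedUid, long currentUid) {
    return requestedUid == -1 ? currentUid : requestedUid;
  }
  static long effectiveGid(long requestedGid, long currentGid) {
    return requestedGid == -1 ? currentGid : requestedGid;
  }
}
{code}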

> JVM crashes when attempting to update a file in fuse file system using vim
> --
>
> Key: HDFS-9268
> URL: https://issues.apache.org/jira/browse/HDFS-9268
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
>Priority: Minor
> Attachments: HDFS-9268.001.patch
>
>
> JVM crashes when users attempt to use vi to update a file on a fuse file 
> system with insufficient permissions. (I use CDH's hadoop-fuse-dfs wrapper 
> script to reproduce the bug, but the same bug is reproducible in trunk.)
> The root cause is a segfault in a dfs-fuse method
> To reproduce it do as follows:
> mkdir /mnt/fuse
> chmod 777 /mnt/fuse
> ulimit -c unlimited   # to enable coredump
> hadoop-fuse-dfs -odebug hdfs://localhost:9000/fuse /mnt/fuse
> touch /mnt/fuse/y
> chmod 600 /mnt/fuse/y
> vim /mnt/fuse/y
> (in vim, :w to save the file)
> #
> # A fatal error has been detected by the Java Runtime Environment:
> #
> #  SIGSEGV (0xb) at pc=0x003b82f27ad6, pid=26606, tid=140079005689600
> #
> # JRE version: Java(TM) SE Runtime Environment (7.0_79-b15) (build 
> 1.7.0_79-b15)
> # Java VM: Java HotSpot(TM) 64-Bit Server VM (24.79-b02 mixed mode 
> linux-amd64 compressed oops)
> # Problematic frame:
> # C  [libc.so.6+0x127ad6]  __tls_get_addr@@GLIBC_2.3+0x127ad6
> #
> # Core dump written. Default location: /home/weichiu/core or core.26606
> #
> # An error report file with more information is saved as:
> # /home/weichiu/hs_err_pid26606.log
> #
> # If you would like to submit a bug report, please visit:
> #   http://bugreport.java.com/bugreport/crash.jsp
> # The crash happened outside the Java Virtual Machine in native code.
> # See problematic frame for where to report the bug.
> #
> /usr/bin/hadoop-fuse-dfs: line 29: 26606 Aborted (core 
> dumped) env CLASSPATH="${CLASSPATH}" ${HADOOP_HOME}/bin/fuse_dfs $@
> ===
> The coredump shows the segfault comes from 
> (gdb) bt
> #0  0x003b82e328e5 in raise () from /lib64/libc.so.6
> #1  0x003b82e340c5 in abort () from /lib64/libc.so.6
> #2  0x7f66fc924d75 in os::abort(bool) () from 
> /etc/alternatives/jre/jre/lib/amd64/server/libjvm.so
> #3  0x7f66fcaa76d7 in VMError::report_and_die() () from 
> /etc/alternatives/jre/jre/lib/amd64/server/libjvm.so
> #4  0x7f66fc929c8f in JVM_handle_linux_signal () from 
> /etc/alternatives/jre/jre/lib/amd64/server/libjvm.so
> #5  <signal handler called>
> #6  0x003b82f27ad6 in __strcmp_sse42 () from /lib64/libc.so.6
> #7  0x004039a0 in hdfsConnTree_RB_FIND ()
> #8  0x00403e8f in fuseConnect ()
> #9  0x004046db in dfs_chown ()
> #10 0x7f66fcf8f6d2 in ?? () from /lib64/libfuse.so.2
> #11 0x7f66fcf940d1 in ?? () from /lib64/libfuse.so.2
> #12 0x7f66fcf910ef in ?? () from /lib64/libfuse.so.2
> #13 0x003b83207851 in start_thread () from /lib64/libpthread.so.0
> #14 0x003b82ee894d in clone () from /lib64/libc.so.6



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9252) Change TestFileTruncate to use FsDatasetTestUtils to get block file size and genstamp.

2015-10-20 Thread Lei (Eddy) Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9252?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei (Eddy) Xu updated HDFS-9252:

Attachment: HDFS-9252.02.patch

Update the patch to resolve rebase conflicts with {{trunk}}.

> Change TestFileTruncate to use FsDatasetTestUtils to get block file size and 
> genstamp.
> --
>
> Key: HDFS-9252
> URL: https://issues.apache.org/jira/browse/HDFS-9252
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 2.7.1
>Reporter: Lei (Eddy) Xu
>Assignee: Lei (Eddy) Xu
> Attachments: HDFS-9252.00.patch, HDFS-9252.01.patch, 
> HDFS-9252.02.patch
>
>
> {{TestFileTruncate}} verifies block size and genstamp by directly accessing 
> the local filesystem, e.g.:
> {code}
> assertTrue(cluster.getBlockMetadataFile(dn0,
>newBlock.getBlock()).getName().endsWith(
>newBlock.getBlock().getGenerationStamp() + ".meta"));
> {code}
> Let's abstract the fsdataset-specific logic behind FsDatasetTestUtils.
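
A hedged sketch of what the refactored check might look like (the method 
names are assumptions following HDFS-9188, not the committed API):

{code}
// The genstamp assertion expressed against the test-utils abstraction
// instead of peeking at file names on the local filesystem.
FsDatasetTestUtils utils = cluster.getFsDatasetTestUtils(dn0);
assertEquals(newBlock.getBlock().getGenerationStamp(),
    utils.getStoredGenerationStamp(newBlock.getBlock()));
{code}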



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9272) Implement a unix-like cat utility

2015-10-20 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9272?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14965464#comment-14965464
 ] 

Allen Wittenauer commented on HDFS-9272:


I don't think this is the correct approach.  For interactive, client-side 
operations, you really want WebHDFS and not RPC due to the compatibility 
problems.

> Implement a unix-like cat utility
> -
>
> Key: HDFS-9272
> URL: https://issues.apache.org/jira/browse/HDFS-9272
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: James Clampffer
>Assignee: James Clampffer
>Priority: Minor
>
> Implement the basic functionality of "cat" and have it build as a separate 
> executable.
> Two reasons for this:
> We don't have any real integration tests at the moment, so something simple to 
> verify that the library actually works against a real cluster is useful.
> Eventually I'll make more utilities like stat, mkdir, etc.  Once there are 
> enough of them it will be simple to make a C++ implementation of the hadoop 
> fs command line interface that doesn't take the latency hit of spinning up a 
> JVM.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9270) TestShortCircuitLocalRead should not leave socket after unit test

2015-10-20 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9270?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14965462#comment-14965462
 ] 

Colin Patrick McCabe commented on HDFS-9270:


+1.  Thanks, [~iwasakims].

> TestShortCircuitLocalRead should not leave socket after unit test
> -
>
> Key: HDFS-9270
> URL: https://issues.apache.org/jira/browse/HDFS-9270
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Affects Versions: 2.7.1
>Reporter: Masatake Iwasaki
>Assignee: Masatake Iwasaki
>Priority: Minor
> Attachments: HDFS-9270.001.patch
>
>
> Unix domain sockets created by TestShortCircuitLocalRead and 
> TestTracingShortCircuitLocalRead are not removed before finishing the tests.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9252) Change TestFileTruncate to use FsDatasetTestUtils to get block file size and genstamp.

2015-10-20 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9252?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14965455#comment-14965455
 ] 

Hadoop QA commented on HDFS-9252:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:red}-1{color} | patch |   0m  0s | The patch command could not apply 
the patch during dryrun. |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12767399/HDFS-9252.01.patch |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / 71e533a |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/13086/console |


This message was automatically generated.

> Change TestFileTruncate to use FsDatasetTestUtils to get block file size and 
> genstamp.
> --
>
> Key: HDFS-9252
> URL: https://issues.apache.org/jira/browse/HDFS-9252
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 2.7.1
>Reporter: Lei (Eddy) Xu
>Assignee: Lei (Eddy) Xu
> Attachments: HDFS-9252.00.patch, HDFS-9252.01.patch
>
>
> {{TestFileTruncate}} verifies block size and genstamp by directly accessing 
> the local filesystem, e.g.:
> {code}
> assertTrue(cluster.getBlockMetadataFile(dn0,
>newBlock.getBlock()).getName().endsWith(
>newBlock.getBlock().getGenerationStamp() + ".meta"));
> {code}
> Let's abstract the fsdataset-specific logic behind FsDatasetTestUtils.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8766) Implement a libhdfs(3) compatible API

2015-10-20 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8766?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14965443#comment-14965443
 ] 

Hadoop QA commented on HDFS-8766:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | pre-patch |   7m 34s | Pre-patch HDFS-8707 compilation 
is healthy. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:red}-1{color} | tests included |   0m  0s | The patch doesn't appear 
to include any new or modified tests.  Please justify why no new tests are 
needed for this patch. Also please list what manual steps were performed to 
verify this patch. |
| {color:red}-1{color} | javac |   2m  3s | The patch appears to cause the 
build to fail. |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12767627/HDFS-8766.HDFS-8707.007.patch
 |
| Optional Tests | javac unit |
| git revision | HDFS-8707 / ea310d7 |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/13085/console |


This message was automatically generated.

> Implement a libhdfs(3) compatible API
> -
>
> Key: HDFS-8766
> URL: https://issues.apache.org/jira/browse/HDFS-8766
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: James Clampffer
>Assignee: James Clampffer
> Attachments: HDFS-8766.HDFS-8707.000.patch, 
> HDFS-8766.HDFS-8707.001.patch, HDFS-8766.HDFS-8707.002.patch, 
> HDFS-8766.HDFS-8707.003.patch, HDFS-8766.HDFS-8707.004.patch, 
> HDFS-8766.HDFS-8707.005.patch, HDFS-8766.HDFS-8707.006.patch, 
> HDFS-8766.HDFS-8707.007.patch
>
>
> Add a synchronous API that is compatible with the hdfs.h header used in 
> libhdfs and libhdfs3.  This will make it possible for projects using 
> libhdfs/libhdfs3 to relink against libhdfspp with minimal changes.
> This also provides a pure C interface that can be linked against projects 
> that aren't built in C++11 mode for various reasons but use the same 
> compiler.  It also allows many other programming languages to access 
> libhdfspp through builtin FFI interfaces.
> The libhdfs API is very similar to the posix file API which makes it easier 
> for programs built using posix filesystem calls to be modified to access HDFS.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9251) Refactor TestWriteToReplica and TestFsDatasetImpl to avoid explicitly creating Files in tests code.

2015-10-20 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9251?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14965440#comment-14965440
 ] 

Hudson commented on HDFS-9251:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk-Java8 #557 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Java8/557/])
HDFS-9251. Refactor TestWriteToReplica and TestFsDatasetImpl to avoid (lei: rev 
71e533a153cbe547c99d2bc18c4cd8b7da9b00b7)
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/TestFsDatasetImpl.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsDatasetImplTestUtils.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/TestWriteToReplica.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/FsDatasetTestUtils.java


> Refactor TestWriteToReplica and TestFsDatasetImpl to avoid explicitly 
> creating Files in tests code.
> ---
>
> Key: HDFS-9251
> URL: https://issues.apache.org/jira/browse/HDFS-9251
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: HDFS
>Affects Versions: 2.7.1
>Reporter: Lei (Eddy) Xu
>Assignee: Lei (Eddy) Xu
> Fix For: 3.0.0, 2.8.0
>
> Attachments: HDFS-9251.00.patch, HDFS-9251.01.patch, 
> HDFS-9251.02.patch
>
>
> In {{TestWriteToReplica}} and {{TestFsDatasetImpl}}, the tests directly create 
> block and metadata files:
> {code}
> replicaInfo.getBlockFile().createNewFile();
> replicaInfo.getMetaFile().createNewFile();
> {code}
> It leaks the implementation details of {{FsDatasetImpl}}. This JIRA proposes 
> to use {{FsDatasetImplTestUtils}} (HDFS-9188) to create replicas. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HDFS-9272) Implement a unix-like cat utility

2015-10-20 Thread James Clampffer (JIRA)
James Clampffer created HDFS-9272:
-

 Summary: Implement a unix-like cat utility
 Key: HDFS-9272
 URL: https://issues.apache.org/jira/browse/HDFS-9272
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: James Clampffer
Assignee: James Clampffer
Priority: Minor


Implement the basic functionality of "cat" and have it build as a separate 
executable.

Two reasons for this:
We don't have any real integration tests at the moment, so something simple to 
verify that the library actually works against a real cluster is useful.

Eventually I'll make more utilities like stat, mkdir, etc.  Once there are 
enough of them it will be simple to make a C++ implementation of the hadoop fs 
command line interface that doesn't take the latency hit of spinning up a JVM.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7964) Add support for async edit logging

2015-10-20 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7964?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14965419#comment-14965419
 ] 

Hadoop QA commented on HDFS-7964:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:red}-1{color} | pre-patch |  20m 50s | Pre-patch trunk has 1 extant 
Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:green}+1{color} | tests included |   0m  0s | The patch appears to 
include 11 new or modified test files. |
| {color:green}+1{color} | javac |  10m 16s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |  11m 40s | There were no new javadoc 
warning messages. |
| {color:green}+1{color} | release audit |   0m 25s | The applied patch does 
not increase the total number of release audit warnings. |
| {color:red}-1{color} | checkstyle |   2m  2s | The applied patch generated  6 
new checkstyle issues (total was 1191, now 1162). |
| {color:green}+1{color} | whitespace |   0m 21s | The patch has no lines that 
end in whitespace. |
| {color:green}+1{color} | install |   1m 41s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 36s | The patch built with 
eclipse:eclipse. |
| {color:green}+1{color} | findbugs |   3m 33s | The patch does not introduce 
any new Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | native |   3m 29s | Pre-build of native portion |
| {color:red}-1{color} | hdfs tests |  54m 38s | Tests failed in hadoop-hdfs. |
| {color:red}-1{color} | hdfs tests |   0m 16s | Tests failed in bkjournal. |
| | | 110m  1s | |
\\
\\
|| Reason || Tests ||
| Failed unit tests | hadoop.hdfs.TestParallelUnixDomainRead |
|   | hadoop.hdfs.TestWriteReadStripedFile |
|   | hadoop.hdfs.TestReplaceDatanodeOnFailure |
|   | hadoop.hdfs.TestHDFSFileSystemContract |
|   | hadoop.hdfs.server.blockmanagement.TestNodeCount |
| Failed build | bkjournal |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12767604/HDFS-7964.patch |
| Optional Tests | javac unit findbugs checkstyle javadoc |
| git revision | trunk / 9cb5d35 |
| Pre-patch Findbugs warnings | 
https://builds.apache.org/job/PreCommit-HDFS-Build/13082/artifact/patchprocess/trunkFindbugsWarningshadoop-hdfs.html
 |
| checkstyle |  
https://builds.apache.org/job/PreCommit-HDFS-Build/13082/artifact/patchprocess/diffcheckstylehadoop-hdfs.txt
 |
| hadoop-hdfs test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/13082/artifact/patchprocess/testrun_hadoop-hdfs.txt
 |
| bkjournal test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/13082/artifact/patchprocess/testrun_bkjournal.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/13082/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf901.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/13082/console |


This message was automatically generated.

> Add support for async edit logging
> --
>
> Key: HDFS-7964
> URL: https://issues.apache.org/jira/browse/HDFS-7964
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode
>Affects Versions: 2.0.2-alpha
>Reporter: Daryn Sharp
>Assignee: Daryn Sharp
> Attachments: HDFS-7964.patch, HDFS-7964.patch, HDFS-7964.patch
>
>
> Edit logging is a major source of contention within the NN.  logEdit is 
> called within the namespace write lock, while logSync is called outside of the 
> lock to allow greater concurrency.  The handler thread remains busy until 
> logSync returns to provide the client with a durability guarantee for the 
> response.
> Write-heavy RPC load and/or slow IO causes handlers to stall in logSync.  
> Although the write lock is not held, readers are limited/starved and the call 
> queue fills.  Combining an edit log thread with postponed RPC responses from 
> HADOOP-10300 will provide the same durability guarantee but immediately free 
> up the handlers.
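
For readers unfamiliar with the NN internals, a minimal sketch of the 
contention pattern described above (field and method names are illustrative, 
not the NameNode's actual code):

{code}
writeLock.lock();
try {
  editLog.logEdit(op);   // buffered edit, recorded under the namespace lock
} finally {
  writeLock.unlock();
}
// The handler thread stalls here until the edit is durable; under
// write-heavy load or slow IO this is where handlers pile up, even though
// the write lock itself has already been released.
editLog.logSync();
sendRpcResponse(call);   // respond only after the durability guarantee
{code}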



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

