[jira] [Updated] (HDFS-7320) The appearance of hadoop-hdfs-httpfs site docs is inconsistent

2014-11-01 Thread Masatake Iwasaki (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7320?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Masatake Iwasaki updated HDFS-7320:
---
Attachment: HDFS-7320.1.patch

The dependencies goal of maven-project-info-reports-plugin, bound to the site 
phase, results in a different maven-theme.css. I do not yet understand the exact 
reason, but applying the same settings as the other modules fixes the problem.
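For reference, a hedged sketch of the kind of alignment the comment describes: a non-executable pom.xml fragment (the specific report flags here are assumptions for illustration, not necessarily what HDFS-7320.1.patch changes).

```xml
<!-- Hypothetical pom.xml fragment: configure maven-project-info-reports-plugin
     in hadoop-hdfs-httpfs the same way as the other modules, so the site
     plugin generates the same maven-base.css/maven-theme.css. The flags below
     exist in the plugin but are shown only as an illustrative alignment. -->
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-project-info-reports-plugin</artifactId>
  <configuration>
    <dependencyDetailsEnabled>false</dependencyDetailsEnabled>
    <dependencyLocationsEnabled>false</dependencyLocationsEnabled>
  </configuration>
</plugin>
```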

> The appearance of hadoop-hdfs-httpfs site docs is inconsistent 
> ---
>
> Key: HDFS-7320
> URL: https://issues.apache.org/jira/browse/HDFS-7320
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: documentation
>Reporter: Masatake Iwasaki
>Assignee: Masatake Iwasaki
>Priority: Minor
> Attachments: HDFS-7320.1.patch
>
>
> The docs of hadoop-hdfs-httpfs use different maven-base.css and 
> maven-theme.css from other modules.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7320) The appearance of hadoop-hdfs-httpfs site docs is inconsistent

2014-11-01 Thread Masatake Iwasaki (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7320?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Masatake Iwasaki updated HDFS-7320:
---
Status: Patch Available  (was: Open)

> The appearance of hadoop-hdfs-httpfs site docs is inconsistent 
> ---
>
> Key: HDFS-7320
> URL: https://issues.apache.org/jira/browse/HDFS-7320
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: documentation
>Reporter: Masatake Iwasaki
>Assignee: Masatake Iwasaki
>Priority: Minor
> Attachments: HDFS-7320.1.patch
>
>
> The docs of hadoop-hdfs-httpfs use different maven-base.css and 
> maven-theme.css from other modules.





[jira] [Commented] (HDFS-7320) The appearance of hadoop-hdfs-httpfs site docs is inconsistent

2014-11-01 Thread Masatake Iwasaki (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7320?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14193025#comment-14193025
 ] 

Masatake Iwasaki commented on HDFS-7320:


You can see that the appearance of 
http://hadoop.apache.org/docs/r2.5.1/hadoop-hdfs-httpfs/index.html differs 
from the other pages.

> The appearance of hadoop-hdfs-httpfs site docs is inconsistent 
> ---
>
> Key: HDFS-7320
> URL: https://issues.apache.org/jira/browse/HDFS-7320
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: documentation
>Reporter: Masatake Iwasaki
>Assignee: Masatake Iwasaki
>Priority: Minor
> Attachments: HDFS-7320.1.patch
>
>
> The docs of hadoop-hdfs-httpfs use different maven-base.css and 
> maven-theme.css from other modules.





[jira] [Commented] (HDFS-7320) The appearance of hadoop-hdfs-httpfs site docs is inconsistent

2014-11-01 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7320?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14193037#comment-14193037
 ] 

Hadoop QA commented on HDFS-7320:
-

{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12678673/HDFS-7320.1.patch
  against trunk revision ed63b11.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+0 tests included{color}.  The patch appears to be a 
documentation patch that doesn't require tests.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-hdfs-project/hadoop-hdfs-httpfs.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/8622//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/8622//console

This message is automatically generated.

> The appearance of hadoop-hdfs-httpfs site docs is inconsistent 
> ---
>
> Key: HDFS-7320
> URL: https://issues.apache.org/jira/browse/HDFS-7320
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: documentation
>Reporter: Masatake Iwasaki
>Assignee: Masatake Iwasaki
>Priority: Minor
> Attachments: HDFS-7320.1.patch
>
>
> The docs of hadoop-hdfs-httpfs use different maven-base.css and 
> maven-theme.css from other modules.





[jira] [Commented] (HDFS-7309) XMLUtils.mangleXmlString doesn't seem to handle less than sign

2014-11-01 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7309?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14193101#comment-14193101
 ] 

Hudson commented on HDFS-7309:
--

SUCCESS: Integrated in Hadoop-Yarn-trunk #730 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/730/])
HDFS-7309. XMLUtils.mangleXmlString doesn't seem to handle less than sign. 
(Colin Patrick McCabe via raviprak) (raviprak: rev 
c7f81dad30c391822eed7273278cf5885fa59264)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/offlineImageViewer/XmlImageVisitor.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/util/TestXMLUtils.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/offlineImageViewer/PBImageXmlWriter.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/util/XMLUtils.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/offlineEditsViewer/OfflineEditsXmlLoader.java


> XMLUtils.mangleXmlString doesn't seem to handle less than sign
> --
>
> Key: HDFS-7309
> URL: https://issues.apache.org/jira/browse/HDFS-7309
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.1.0-beta
>Reporter: Ravi Prakash
>Assignee: Colin Patrick McCabe
>Priority: Minor
> Fix For: 2.6.0
>
> Attachments: HDFS-7309.001.patch, HDFS-7309.002.patch, HDFS-7309.patch
>
>
> My expectation was that wrapping the result of XMLUtils.mangleXmlString() on 
> an input containing a less-than sign in an XML element would yield a string 
> acceptable to a SAX parser. However, this was not true.
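For context, XML forbids a raw less-than sign in element text, so any mangling routine must rewrite it. A minimal generic escaper sketch follows; this is an illustration of the requirement, not Hadoop's actual XMLUtils.mangleXmlString (which uses its own mangling scheme).

```java
// Generic sketch: a SAX parser rejects a raw '<' inside element text,
// so an escaping/mangling routine must rewrite it (e.g. to "&lt;").
// Not the XMLUtils API; names here are illustrative.
public class XmlEscape {
    static String escapeText(String s) {
        StringBuilder sb = new StringBuilder(s.length());
        for (char c : s.toCharArray()) {
            switch (c) {
                case '<': sb.append("&lt;"); break;   // the HDFS-7309 case
                case '>': sb.append("&gt;"); break;
                case '&': sb.append("&amp;"); break;
                default:  sb.append(c);
            }
        }
        return sb.toString();
    }
}
```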





[jira] [Commented] (HDFS-7319) Remove dead link to HFTP documentation from index.xml

2014-11-01 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7319?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14193100#comment-14193100
 ] 

Hudson commented on HDFS-7319:
--

SUCCESS: Integrated in Hadoop-Yarn-trunk #730 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/730/])
HDFS-7319. Remove dead link to HFTP documentation from index.xml. Contributed 
by Masatake Iwasaki. (wheat9: rev 80bb7d47941d9fb0b15d5d000a9a090c07b8aa61)
* hadoop-project/src/site/site.xml
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


> Remove dead link to HFTP documentation from index.xml
> -
>
> Key: HDFS-7319
> URL: https://issues.apache.org/jira/browse/HDFS-7319
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: documentation
>Reporter: Masatake Iwasaki
>Assignee: Masatake Iwasaki
>Priority: Minor
> Fix For: 3.0.0
>
> Attachments: HDFS-7319.1.patch
>
>
> There is dead link to deprecated HFTP doc in doc index.





[jira] [Commented] (HDFS-6917) Add an hdfs debug command to validate blocks, call recoverlease, etc.

2014-11-01 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6917?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14193102#comment-14193102
 ] 

Hudson commented on HDFS-6917:
--

SUCCESS: Integrated in Hadoop-Yarn-trunk #730 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/730/])
HDFS-6917. Add an hdfs debug command to validate blocks, call recoverlease, 
etc. (cmccabe) (cmccabe: rev 7b026c50f1be399987d23e06b4ecfbffc51dc7b5)
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/tools/TestDebugAdmin.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/DebugAdmin.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* hadoop-hdfs-project/hadoop-hdfs/src/main/bin/hdfs


> Add an hdfs debug command to validate blocks, call recoverlease, etc.
> -
>
> Key: HDFS-6917
> URL: https://issues.apache.org/jira/browse/HDFS-6917
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs-client
>Affects Versions: 2.6.0
>Reporter: Colin Patrick McCabe
>Assignee: Colin Patrick McCabe
> Fix For: 2.7.0
>
> Attachments: HDFS-6917.001.patch, HDFS-6917.002.patch, 
> HDFS-6917.003.patch, HDFS-6917.004.patch
>
>
> HDFS should have a debug command which could validate HDFS block files, call 
> recoverLease, and have some other functionality.  These commands would be 
> purely for debugging and would appear under a separate command hierarchy 
> inside the hdfs command.  There would be no guarantee of API stability for 
> these commands and the debug submenu would not be listed by just typing the 
> "hdfs" command.





[jira] [Commented] (HDFS-7315) DFSTestUtil.readFileBuffer opens extra FSDataInputStream

2014-11-01 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7315?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14193103#comment-14193103
 ] 

Hudson commented on HDFS-7315:
--

SUCCESS: Integrated in Hadoop-Yarn-trunk #730 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/730/])
HDFS-7315. DFSTestUtil.readFileBuffer opens extra FSDataInputStream. 
Contributed by Plamen Jeliazkov. (wheat9: rev 
3f030c04e8ce7bdda8471ddb2d37b25b4686b121)
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/DFSTestUtil.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


> DFSTestUtil.readFileBuffer opens extra FSDataInputStream
> 
>
> Key: HDFS-7315
> URL: https://issues.apache.org/jira/browse/HDFS-7315
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Plamen Jeliazkov
>Assignee: Plamen Jeliazkov
>Priority: Trivial
> Fix For: 2.7.0
>
> Attachments: HDFS-7315.patch
>
>
> DFSTestUtil.readFileBuffer() calls FileSystem.open() twice.
> Once just under the try statement, and once inside the IOUtils.copyBytes() 
> call.
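The bug pattern described (one stream opened in the try header and a second one opened inside the copy call, leaking the first) and its fix can be sketched generically. The code below is illustrative plain-java.io, not DFSTestUtil itself.

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.UncheckedIOException;

// Illustrative fix for the double-open pattern. The buggy shape was roughly:
//   try (InputStream in = fs.open(path)) { copyBytes(fs.open(path), out); }
// which opens a second, never-closed stream inside the copy call. The fixed
// shape opens the stream once and hands that same stream to the copier.
public class ReadFileSketch {
    static byte[] readAll(InputStream in) {
        try {
            ByteArrayOutputStream out = new ByteArrayOutputStream();
            byte[] buf = new byte[4096];
            int n;
            while ((n = in.read(buf)) != -1) {
                out.write(buf, 0, n);
            }
            return out.toByteArray();
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }
}
```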





[jira] [Commented] (HDFS-7315) DFSTestUtil.readFileBuffer opens extra FSDataInputStream

2014-11-01 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7315?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14193189#comment-14193189
 ] 

Hudson commented on HDFS-7315:
--

SUCCESS: Integrated in Hadoop-Hdfs-trunk #1919 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1919/])
HDFS-7315. DFSTestUtil.readFileBuffer opens extra FSDataInputStream. 
Contributed by Plamen Jeliazkov. (wheat9: rev 
3f030c04e8ce7bdda8471ddb2d37b25b4686b121)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/DFSTestUtil.java


> DFSTestUtil.readFileBuffer opens extra FSDataInputStream
> 
>
> Key: HDFS-7315
> URL: https://issues.apache.org/jira/browse/HDFS-7315
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Plamen Jeliazkov
>Assignee: Plamen Jeliazkov
>Priority: Trivial
> Fix For: 2.7.0
>
> Attachments: HDFS-7315.patch
>
>
> DFSTestUtil.readFileBuffer() calls FileSystem.open() twice.
> Once just under the try statement, and once inside the IOUtils.copyBytes() 
> call.





[jira] [Commented] (HDFS-7309) XMLUtils.mangleXmlString doesn't seem to handle less than sign

2014-11-01 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7309?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14193187#comment-14193187
 ] 

Hudson commented on HDFS-7309:
--

SUCCESS: Integrated in Hadoop-Hdfs-trunk #1919 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1919/])
HDFS-7309. XMLUtils.mangleXmlString doesn't seem to handle less than sign. 
(Colin Patrick McCabe via raviprak) (raviprak: rev 
c7f81dad30c391822eed7273278cf5885fa59264)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/offlineImageViewer/PBImageXmlWriter.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/offlineImageViewer/XmlImageVisitor.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/util/XMLUtils.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/offlineEditsViewer/OfflineEditsXmlLoader.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/util/TestXMLUtils.java


> XMLUtils.mangleXmlString doesn't seem to handle less than sign
> --
>
> Key: HDFS-7309
> URL: https://issues.apache.org/jira/browse/HDFS-7309
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.1.0-beta
>Reporter: Ravi Prakash
>Assignee: Colin Patrick McCabe
>Priority: Minor
> Fix For: 2.6.0
>
> Attachments: HDFS-7309.001.patch, HDFS-7309.002.patch, HDFS-7309.patch
>
>
> My expectation was that wrapping the result of XMLUtils.mangleXmlString() on 
> an input containing a less-than sign in an XML element would yield a string 
> acceptable to a SAX parser. However, this was not true.





[jira] [Commented] (HDFS-6917) Add an hdfs debug command to validate blocks, call recoverlease, etc.

2014-11-01 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6917?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14193188#comment-14193188
 ] 

Hudson commented on HDFS-6917:
--

SUCCESS: Integrated in Hadoop-Hdfs-trunk #1919 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1919/])
HDFS-6917. Add an hdfs debug command to validate blocks, call recoverlease, 
etc. (cmccabe) (cmccabe: rev 7b026c50f1be399987d23e06b4ecfbffc51dc7b5)
* hadoop-hdfs-project/hadoop-hdfs/src/main/bin/hdfs
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/tools/TestDebugAdmin.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/DebugAdmin.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


> Add an hdfs debug command to validate blocks, call recoverlease, etc.
> -
>
> Key: HDFS-6917
> URL: https://issues.apache.org/jira/browse/HDFS-6917
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs-client
>Affects Versions: 2.6.0
>Reporter: Colin Patrick McCabe
>Assignee: Colin Patrick McCabe
> Fix For: 2.7.0
>
> Attachments: HDFS-6917.001.patch, HDFS-6917.002.patch, 
> HDFS-6917.003.patch, HDFS-6917.004.patch
>
>
> HDFS should have a debug command which could validate HDFS block files, call 
> recoverLease, and have some other functionality.  These commands would be 
> purely for debugging and would appear under a separate command hierarchy 
> inside the hdfs command.  There would be no guarantee of API stability for 
> these commands and the debug submenu would not be listed by just typing the 
> "hdfs" command.





[jira] [Commented] (HDFS-7319) Remove dead link to HFTP documentation from index.xml

2014-11-01 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7319?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14193186#comment-14193186
 ] 

Hudson commented on HDFS-7319:
--

SUCCESS: Integrated in Hadoop-Hdfs-trunk #1919 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1919/])
HDFS-7319. Remove dead link to HFTP documentation from index.xml. Contributed 
by Masatake Iwasaki. (wheat9: rev 80bb7d47941d9fb0b15d5d000a9a090c07b8aa61)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* hadoop-project/src/site/site.xml


> Remove dead link to HFTP documentation from index.xml
> -
>
> Key: HDFS-7319
> URL: https://issues.apache.org/jira/browse/HDFS-7319
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: documentation
>Reporter: Masatake Iwasaki
>Assignee: Masatake Iwasaki
>Priority: Minor
> Fix For: 3.0.0
>
> Attachments: HDFS-7319.1.patch
>
>
> There is dead link to deprecated HFTP doc in doc index.





[jira] [Commented] (HDFS-7309) XMLUtils.mangleXmlString doesn't seem to handle less than sign

2014-11-01 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7309?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14193219#comment-14193219
 ] 

Hudson commented on HDFS-7309:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk #1944 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1944/])
HDFS-7309. XMLUtils.mangleXmlString doesn't seem to handle less than sign. 
(Colin Patrick McCabe via raviprak) (raviprak: rev 
c7f81dad30c391822eed7273278cf5885fa59264)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/offlineImageViewer/XmlImageVisitor.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/util/TestXMLUtils.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/util/XMLUtils.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/offlineImageViewer/PBImageXmlWriter.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/offlineEditsViewer/OfflineEditsXmlLoader.java


> XMLUtils.mangleXmlString doesn't seem to handle less than sign
> --
>
> Key: HDFS-7309
> URL: https://issues.apache.org/jira/browse/HDFS-7309
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.1.0-beta
>Reporter: Ravi Prakash
>Assignee: Colin Patrick McCabe
>Priority: Minor
> Fix For: 2.6.0
>
> Attachments: HDFS-7309.001.patch, HDFS-7309.002.patch, HDFS-7309.patch
>
>
> My expectation was that wrapping the result of XMLUtils.mangleXmlString() on 
> an input containing a less-than sign in an XML element would yield a string 
> acceptable to a SAX parser. However, this was not true.





[jira] [Commented] (HDFS-7315) DFSTestUtil.readFileBuffer opens extra FSDataInputStream

2014-11-01 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7315?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14193221#comment-14193221
 ] 

Hudson commented on HDFS-7315:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk #1944 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1944/])
HDFS-7315. DFSTestUtil.readFileBuffer opens extra FSDataInputStream. 
Contributed by Plamen Jeliazkov. (wheat9: rev 
3f030c04e8ce7bdda8471ddb2d37b25b4686b121)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/DFSTestUtil.java


> DFSTestUtil.readFileBuffer opens extra FSDataInputStream
> 
>
> Key: HDFS-7315
> URL: https://issues.apache.org/jira/browse/HDFS-7315
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Plamen Jeliazkov
>Assignee: Plamen Jeliazkov
>Priority: Trivial
> Fix For: 2.7.0
>
> Attachments: HDFS-7315.patch
>
>
> DFSTestUtil.readFileBuffer() calls FileSystem.open() twice.
> Once just under the try statement, and once inside the IOUtils.copyBytes() 
> call.





[jira] [Commented] (HDFS-6917) Add an hdfs debug command to validate blocks, call recoverlease, etc.

2014-11-01 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6917?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14193220#comment-14193220
 ] 

Hudson commented on HDFS-6917:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk #1944 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1944/])
HDFS-6917. Add an hdfs debug command to validate blocks, call recoverlease, 
etc. (cmccabe) (cmccabe: rev 7b026c50f1be399987d23e06b4ecfbffc51dc7b5)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/DebugAdmin.java
* hadoop-hdfs-project/hadoop-hdfs/src/main/bin/hdfs
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/tools/TestDebugAdmin.java


> Add an hdfs debug command to validate blocks, call recoverlease, etc.
> -
>
> Key: HDFS-6917
> URL: https://issues.apache.org/jira/browse/HDFS-6917
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs-client
>Affects Versions: 2.6.0
>Reporter: Colin Patrick McCabe
>Assignee: Colin Patrick McCabe
> Fix For: 2.7.0
>
> Attachments: HDFS-6917.001.patch, HDFS-6917.002.patch, 
> HDFS-6917.003.patch, HDFS-6917.004.patch
>
>
> HDFS should have a debug command which could validate HDFS block files, call 
> recoverLease, and have some other functionality.  These commands would be 
> purely for debugging and would appear under a separate command hierarchy 
> inside the hdfs command.  There would be no guarantee of API stability for 
> these commands and the debug submenu would not be listed by just typing the 
> "hdfs" command.





[jira] [Commented] (HDFS-7319) Remove dead link to HFTP documentation from index.xml

2014-11-01 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7319?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14193218#comment-14193218
 ] 

Hudson commented on HDFS-7319:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk #1944 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1944/])
HDFS-7319. Remove dead link to HFTP documentation from index.xml. Contributed 
by Masatake Iwasaki. (wheat9: rev 80bb7d47941d9fb0b15d5d000a9a090c07b8aa61)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* hadoop-project/src/site/site.xml


> Remove dead link to HFTP documentation from index.xml
> -
>
> Key: HDFS-7319
> URL: https://issues.apache.org/jira/browse/HDFS-7319
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: documentation
>Reporter: Masatake Iwasaki
>Assignee: Masatake Iwasaki
>Priority: Minor
> Fix For: 3.0.0
>
> Attachments: HDFS-7319.1.patch
>
>
> There is dead link to deprecated HFTP doc in doc index.





[jira] [Updated] (HDFS-7289) TestDFSUpgradeWithHA sometimes fails in trunk

2014-11-01 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7289?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HDFS-7289:
-
Labels: ha  (was: )

> TestDFSUpgradeWithHA sometimes fails in trunk
> -
>
> Key: HDFS-7289
> URL: https://issues.apache.org/jira/browse/HDFS-7289
> Project: Hadoop HDFS
>  Issue Type: Test
>Reporter: Ted Yu
>Priority: Minor
>  Labels: ha
>
> From trunk build #1912:
> {code}
> REGRESSION:  
> org.apache.hadoop.hdfs.server.namenode.ha.TestDFSUpgradeWithHA.testFinalizeFromSecondNameNodeWithJournalNodes
> Error Message:
> java.lang.RuntimeException: java.net.SocketTimeoutException: Read timed out
> Stack Trace:
> java.io.IOException: java.lang.RuntimeException: 
> java.net.SocketTimeoutException: Read timed out
> at java.net.SocketInputStream.socketRead0(Native Method)
> at java.net.SocketInputStream.read(SocketInputStream.java:129)
> at java.io.BufferedInputStream.fill(BufferedInputStream.java:218)
> at java.io.BufferedInputStream.read1(BufferedInputStream.java:258)
> at java.io.BufferedInputStream.read(BufferedInputStream.java:317)
> at sun.net.www.http.HttpClient.parseHTTPHeader(HttpClient.java:698)
> at sun.net.www.http.HttpClient.parseHTTP(HttpClient.java:641)
> at 
> sun.net.www.protocol.http.HttpURLConnection.getInputStream(HttpURLConnection.java:1218)
> at 
> java.net.HttpURLConnection.getResponseCode(HttpURLConnection.java:379)
> at 
> org.apache.hadoop.hdfs.server.namenode.TransferFsImage.doGetUrl(TransferFsImage.java:410)
> at 
> org.apache.hadoop.hdfs.server.namenode.TransferFsImage.getFileClient(TransferFsImage.java:395)
> at 
> org.apache.hadoop.hdfs.server.namenode.TransferFsImage.downloadImageToStorage(TransferFsImage.java:114)
> at 
> org.apache.hadoop.hdfs.server.namenode.ha.BootstrapStandby.doRun(BootstrapStandby.java:213)
> at 
> org.apache.hadoop.hdfs.server.namenode.ha.BootstrapStandby.access$000(BootstrapStandby.java:69)
> at 
> org.apache.hadoop.hdfs.server.namenode.ha.BootstrapStandby$1.run(BootstrapStandby.java:107)
> at 
> org.apache.hadoop.hdfs.server.namenode.ha.BootstrapStandby$1.run(BootstrapStandby.java:103)
> at 
> org.apache.hadoop.security.SecurityUtil.doAsLoginUserOrFatal(SecurityUtil.java:414)
> at 
> org.apache.hadoop.hdfs.server.namenode.ha.BootstrapStandby.run(BootstrapStandby.java:103)
> at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
> at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:84)
> at 
> org.apache.hadoop.hdfs.server.namenode.ha.BootstrapStandby.run(BootstrapStandby.java:315)
> at 
> org.apache.hadoop.hdfs.server.namenode.ha.TestDFSUpgradeWithHA.testFinalizeFromSecondNameNodeWithJournalNodes(TestDFSUpgradeWithHA.java:493)
> {code}





[jira] [Commented] (HDFS-7276) Limit the number of byte arrays used by DFSOutputStream

2014-11-01 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7276?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14193332#comment-14193332
 ] 

Hudson commented on HDFS-7276:
--

FAILURE: Integrated in Hadoop-trunk-Commit #6421 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/6421/])
HDFS-7276. Limit the number of byte arrays used by DFSOutputStream and provide 
a mechanism for recycling arrays. (szetszwo: rev 
36ccf097a95eae0761de7b657752e4808a86c094)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/util/TestByteArrayManager.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/util/ByteArrayManager.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/ClientContext.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSClient.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSOutputStream.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java


> Limit the number of byte arrays used by DFSOutputStream
> ---
>
> Key: HDFS-7276
> URL: https://issues.apache.org/jira/browse/HDFS-7276
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs-client
>Reporter: Tsz Wo Nicholas Sze
>Assignee: Tsz Wo Nicholas Sze
> Attachments: h7276_20141021.patch, h7276_20141022.patch, 
> h7276_20141023.patch, h7276_20141024.patch, h7276_20141027.patch, 
> h7276_20141027b.patch, h7276_20141028.patch, h7276_20141029.patch, 
> h7276_20141029b.patch, h7276_20141030.patch, h7276_20141031.patch
>
>
> When there are a lot of DFSOutputStream's writing concurrently, the number of 
> outstanding packets could be large.  The byte arrays created by those packets 
> could occupy a lot of memory.
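The mechanism described, limiting outstanding byte arrays and recycling them through a free queue, can be sketched as follows. This is a deliberate simplification, not Hadoop's ByteArrayManager: a real implementation would block or back off at the limit rather than returning null, and would size-bucket its arrays.

```java
import java.util.ArrayDeque;

// Simplified sketch of the HDFS-7276 idea: cap how many byte arrays are
// outstanding at once, and recycle released arrays through a free queue
// instead of allocating fresh ones each time.
public class SimpleByteArrayPool {
    private final int capacity;
    private final ArrayDeque<byte[]> freeQueue = new ArrayDeque<>();
    private int outstanding = 0;

    SimpleByteArrayPool(int capacity) { this.capacity = capacity; }

    synchronized byte[] allocate(int size) {
        if (outstanding >= capacity) {
            return null;  // limit reached; a real manager would wait here
        }
        outstanding++;
        byte[] recycled = freeQueue.poll();
        // Reuse a recycled array when it is large enough; otherwise allocate.
        return (recycled != null && recycled.length >= size) ? recycled : new byte[size];
    }

    synchronized void release(byte[] array) {
        freeQueue.offer(array);  // keep the array for reuse
        outstanding--;
    }
}
```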





[jira] [Updated] (HDFS-7276) Limit the number of byte arrays used by DFSOutputStream

2014-11-01 Thread Tsz Wo Nicholas Sze (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7276?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsz Wo Nicholas Sze updated HDFS-7276:
--
   Resolution: Fixed
Fix Version/s: 2.6.0
 Hadoop Flags: Reviewed
   Status: Resolved  (was: Patch Available)

Thanks Jing and Colin for reviewing the patches.

I have committed this.

> Limit the number of byte arrays used by DFSOutputStream
> ---
>
> Key: HDFS-7276
> URL: https://issues.apache.org/jira/browse/HDFS-7276
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs-client
>Reporter: Tsz Wo Nicholas Sze
>Assignee: Tsz Wo Nicholas Sze
> Fix For: 2.6.0
>
> Attachments: h7276_20141021.patch, h7276_20141022.patch, 
> h7276_20141023.patch, h7276_20141024.patch, h7276_20141027.patch, 
> h7276_20141027b.patch, h7276_20141028.patch, h7276_20141029.patch, 
> h7276_20141029b.patch, h7276_20141030.patch, h7276_20141031.patch
>
>
> When there are a lot of DFSOutputStream's writing concurrently, the number of 
> outstanding packets could be large.  The byte arrays created by those packets 
> could occupy a lot of memory.





[jira] [Commented] (HDFS-6481) DatanodeManager#getDatanodeStorageInfos() should check the length of storageIDs

2014-11-01 Thread Haohui Mai (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6481?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14193353#comment-14193353
 ] 

Haohui Mai commented on HDFS-6481:
--

Agree with [~kihwal]. We need to figure out the root cause.
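The length check the issue title asks for can be sketched generically. Signatures and names below are assumptions for illustration, not DatanodeManager's actual code.

```java
// Illustrative guard: if the storageIDs array is shorter than the datanode
// list, indexing it blindly throws the ArrayIndexOutOfBoundsException ("0")
// seen in the stack trace. Checking the lengths first lets the server fail
// with a clear message instead.
public class StorageLookupSketch {
    static String[] pairStorageIds(String[] datanodeIds, String[] storageIds) {
        if (storageIds.length < datanodeIds.length) {
            throw new IllegalArgumentException(
                "expected " + datanodeIds.length + " storage IDs, got " + storageIds.length);
        }
        String[] result = new String[datanodeIds.length];
        for (int i = 0; i < datanodeIds.length; i++) {
            result[i] = datanodeIds[i] + "@" + storageIds[i];
        }
        return result;
    }
}
```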

> DatanodeManager#getDatanodeStorageInfos() should check the length of 
> storageIDs
> ---
>
> Key: HDFS-6481
> URL: https://issues.apache.org/jira/browse/HDFS-6481
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.3.0
>Reporter: Ted Yu
>Assignee: Ted Yu
> Attachments: hdfs-6481-v1.txt
>
>
> Ian Brooks reported the following stack trace:
> {code}
> 2014-06-03 13:05:03,915 WARN  [DataStreamer for file 
> /user/hbase/WALs/,16020,1401716790638/%2C16020%2C1401716790638.1401796562200
>  block BP-2121456822-10.143.38.149-1396953188241:blk_1074073683_332932] 
> hdfs.DFSClient: DataStreamer Exception
> org.apache.hadoop.ipc.RemoteException(java.lang.ArrayIndexOutOfBoundsException):
>  0
> at 
> org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager.getDatanodeStorageInfos(DatanodeManager.java:467)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalDatanode(FSNamesystem.java:2779)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getAdditionalDatanode(NameNodeRpcServer.java:594)
> at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getAdditionalDatanode(ClientNamenodeProtocolServerSideTranslatorPB.java:430)
> at 
> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:585)
> at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:928)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1962)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1958)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:415)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1548)
> at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1956)
> at org.apache.hadoop.ipc.Client.call(Client.java:1347)
> at org.apache.hadoop.ipc.Client.call(Client.java:1300)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206)
> at com.sun.proxy.$Proxy13.getAdditionalDatanode(Unknown Source)
> at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getAdditionalDatanode(ClientNamenodeProtocolTranslatorPB.java:352)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:606)
> at 
> org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:186)
> at 
> org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
> at com.sun.proxy.$Proxy14.getAdditionalDatanode(Unknown Source)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:606)
> at 
> org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:266)
> at com.sun.proxy.$Proxy15.getAdditionalDatanode(Unknown Source)
> at 
> org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.addDatanode2ExistingPipeline(DFSOutputStream.java:919)
> at 
> org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.setupPipelineForAppendOrRecovery(DFSOutputStream.java:1031)
> at 
> org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.processDatanodeError(DFSOutputStream.java:823)
> at 
> org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:475)
> 2014-06-03 13:05:48,489 ERROR [RpcServer.handler=22,port=16020] wal.FSHLog: 
> syncer encountered error, will retry. txid=211
> org.apache.hadoop.ipc.RemoteException(java.lang.ArrayIndexOutOfBoundsException):
>  0
> at 
> org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager.getDatanodeStorage

[jira] [Created] (HDFS-7321) Add a mechanism to clean up the free queue in ByteArrayManager

2014-11-01 Thread Tsz Wo Nicholas Sze (JIRA)
Tsz Wo Nicholas Sze created HDFS-7321:
-

 Summary: Add a mechanism to clean up the free queue in 
ByteArrayManager
 Key: HDFS-7321
 URL: https://issues.apache.org/jira/browse/HDFS-7321
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Tsz Wo Nicholas Sze
Assignee: Tsz Wo Nicholas Sze
Priority: Minor


ByteArrayManager is designed to limit the number of byte arrays allocated.  It 
also provides a mechanism for recycling arrays, using a free queue to store the 
recycled arrays internally.  When the free queue is large and there has been no 
array allocation for a certain time period, it should gradually release the 
arrays to reduce memory usage.
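The idle-timeout cleanup described above can be sketched roughly as follows. This is an illustrative sketch, not the actual ByteArrayManager code; the class and method names (SimpleArrayPool, cleanupIfIdle) are hypothetical:

```java
import java.util.ArrayDeque;

// Hypothetical sketch of an idle-timeout cleanup for a recycle queue.
// Names are illustrative, not the real ByteArrayManager API.
public class SimpleArrayPool {
    private final ArrayDeque<byte[]> freeQueue = new ArrayDeque<>();
    private final int arrayLength;
    private final long idleMillis;
    private long lastAllocation;

    public SimpleArrayPool(int arrayLength, long idleMillis) {
        this.arrayLength = arrayLength;
        this.idleMillis = idleMillis;
        this.lastAllocation = System.currentTimeMillis();
    }

    /** Reuse a recycled array if available, otherwise allocate a new one. */
    public synchronized byte[] allocate() {
        lastAllocation = System.currentTimeMillis();
        byte[] a = freeQueue.pollFirst();
        return a != null ? a : new byte[arrayLength];
    }

    public synchronized void recycle(byte[] array) {
        freeQueue.addLast(array);
    }

    /** Shrink the free queue when no allocation has happened recently. */
    public synchronized void cleanupIfIdle() {
        if (System.currentTimeMillis() - lastAllocation >= idleMillis
                && !freeQueue.isEmpty()) {
            // Release half of the cached arrays per cleanup round, so memory
            // is returned gradually rather than all at once.
            int toRelease = (freeQueue.size() + 1) / 2;
            for (int i = 0; i < toRelease; i++) {
                freeQueue.pollFirst();  // drop the reference; GC reclaims it
            }
        }
    }

    public synchronized int freeQueueSize() {
        return freeQueue.size();
    }
}
```

A periodic task (or a check on each recycle) would call cleanupIfIdle(); releasing only a fraction per round matches the "gradually release" goal in the description.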



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7276) Limit the number of byte arrays used by DFSOutputStream

2014-11-01 Thread Tsz Wo Nicholas Sze (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7276?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14193444#comment-14193444
 ] 

Tsz Wo Nicholas Sze commented on HDFS-7276:
---

> Do we want to have a mechanism to decrease the memory usage held by the 
> FixedLengthManager#freeQueue after the peak passes? ...

Filed HDFS-7321.

> Limit the number of byte arrays used by DFSOutputStream
> ---
>
> Key: HDFS-7276
> URL: https://issues.apache.org/jira/browse/HDFS-7276
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs-client
>Reporter: Tsz Wo Nicholas Sze
>Assignee: Tsz Wo Nicholas Sze
> Fix For: 2.6.0
>
> Attachments: h7276_20141021.patch, h7276_20141022.patch, 
> h7276_20141023.patch, h7276_20141024.patch, h7276_20141027.patch, 
> h7276_20141027b.patch, h7276_20141028.patch, h7276_20141029.patch, 
> h7276_20141029b.patch, h7276_20141030.patch, h7276_20141031.patch
>
>
> When there are a lot of DFSOutputStreams writing concurrently, the number of 
> outstanding packets could be large.  The byte arrays created by those packets 
> could occupy a lot of memory.
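The limiting idea in the description can be sketched with a counting semaphore that caps how many byte arrays are outstanding at once. This is a hedged sketch of the general technique, not the actual HDFS-7276 patch; BoundedArrayAllocator is a hypothetical name:

```java
import java.util.concurrent.Semaphore;

// Hypothetical sketch: cap outstanding byte arrays with a semaphore.
// Not the actual ByteArrayManager implementation from HDFS-7276.
public class BoundedArrayAllocator {
    private final Semaphore permits;
    private final int arrayLength;

    public BoundedArrayAllocator(int maxArrays, int arrayLength) {
        this.permits = new Semaphore(maxArrays);
        this.arrayLength = arrayLength;
    }

    /** Blocks when maxArrays byte arrays are already outstanding. */
    public byte[] allocate() {
        permits.acquireUninterruptibly();
        return new byte[arrayLength];
    }

    /** Returning an array frees a permit for the next writer. */
    public void release(byte[] array) {
        permits.release();
    }

    public int available() {
        return permits.availablePermits();
    }
}
```

Under this scheme a writer that tries to create one packet buffer too many simply waits until some other stream recycles an array, which bounds total memory instead of letting it grow with the number of concurrent streams.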



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HDFS-7322) deprecate sbin/*.sh

2014-11-01 Thread Allen Wittenauer (JIRA)
Allen Wittenauer created HDFS-7322:
--

 Summary: deprecate sbin/*.sh
 Key: HDFS-7322
 URL: https://issues.apache.org/jira/browse/HDFS-7322
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Allen Wittenauer
 Fix For: 3.0.0


The HDFS-related sbin commands (except for \*-dfs.sh) should be marked as 
deprecated in trunk so that they may be removed from a future release.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-6917) Add an hdfs debug command to validate blocks, call recoverlease, etc.

2014-11-01 Thread Suresh Srinivas (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6917?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14193529#comment-14193529
 ] 

Suresh Srinivas commented on HDFS-6917:
---

[~cmccabe], these commands need documentation changes as well, right? Is it in 
another jira?

> Add an hdfs debug command to validate blocks, call recoverlease, etc.
> -
>
> Key: HDFS-6917
> URL: https://issues.apache.org/jira/browse/HDFS-6917
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs-client
>Affects Versions: 2.6.0
>Reporter: Colin Patrick McCabe
>Assignee: Colin Patrick McCabe
> Fix For: 2.7.0
>
> Attachments: HDFS-6917.001.patch, HDFS-6917.002.patch, 
> HDFS-6917.003.patch, HDFS-6917.004.patch
>
>
> HDFS should have a debug command which could validate HDFS block files, call 
> recoverLease, and have some other functionality.  These commands would be 
> purely for debugging and would appear under a separate command hierarchy 
> inside the hdfs command.  There would be no guarantee of API stability for 
> these commands and the debug submenu would not be listed by just typing the 
> "hdfs" command.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7147) Update archival storage user documentation

2014-11-01 Thread Tsz Wo Nicholas Sze (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7147?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsz Wo Nicholas Sze updated HDFS-7147:
--
Attachment: h7147_20141101.patch

h7147_20141101.patch:
- allows policyName to be case insensitive;
- removes blockStoragePolicy-default.xml;
- doc changes:
-* adds descriptions for SSD & Memory;
-* adds RAM_DISK storage type;
-* adds All_SSD, One_SSD and Lazy_Persist storage policies;
-* adds "hdfs storagepolicy" command.

> Update archival storage user documentation
> --
>
> Key: HDFS-7147
> URL: https://issues.apache.org/jira/browse/HDFS-7147
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: documentation
>Reporter: Tsz Wo Nicholas Sze
>Assignee: Tsz Wo Nicholas Sze
>Priority: Blocker
> Attachments: h7147_20140926.patch, h7147_20141101.patch
>
>
> The Configurations section is no longer valid.  It should be removed.
> Also, if new APIs such as the addStoragePolicy API proposed in HDFS-7076 are 
> able to get in, the corresponding user documentation should be added.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7147) Update archival storage user documentation

2014-11-01 Thread Tsz Wo Nicholas Sze (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7147?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsz Wo Nicholas Sze updated HDFS-7147:
--
Status: Patch Available  (was: Open)

> Update archival storage user documentation
> --
>
> Key: HDFS-7147
> URL: https://issues.apache.org/jira/browse/HDFS-7147
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: documentation
>Reporter: Tsz Wo Nicholas Sze
>Assignee: Tsz Wo Nicholas Sze
>Priority: Blocker
> Attachments: h7147_20140926.patch, h7147_20141101.patch
>
>
> The Configurations section is no longer valid.  It should be removed.
> Also, if new APIs such as the addStoragePolicy API proposed in HDFS-7076 are 
> able to get in, the corresponding user documentation should be added.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HDFS-7323) Move the get/setStoragePolicy commands out from dfsadmin

2014-11-01 Thread Tsz Wo Nicholas Sze (JIRA)
Tsz Wo Nicholas Sze created HDFS-7323:
-

 Summary: Move the get/setStoragePolicy commands out from dfsadmin
 Key: HDFS-7323
 URL: https://issues.apache.org/jira/browse/HDFS-7323
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: hdfs-client
Reporter: Tsz Wo Nicholas Sze


After HDFS-7093, setting storage policy no longer requires superuser privilege. 
 We should move the get/setStoragePolicy commands out from dfsadmin.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-6735) A minor optimization to avoid pread() be blocked by read() inside the same DFSInputStream

2014-11-01 Thread Lars Hofhansl (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6735?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14193629#comment-14193629
 ] 

Lars Hofhansl commented on HDFS-6735:
-

As described in HDFS-6698, the potential performance gains for something like 
HBase are substantial.

I agree it's better to keep LocatedBlocks non-threadsafe and require callers 
to lock accordingly.
I've not seen fetchAt in a hot path (at least not from HBase usage patterns).
seek + read (non-positional) cannot be done concurrently, agreed. pread should 
be possible, though.

How should we move forward on this? It seems important. :)

Also open to suggestions about how to fix things in HBase (see last comment in 
HDFS-6698, about how HBase handles things and how limited concurrency "within" 
an InputStream is an issue).
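The distinction in the comment above (seek + read must serialize, pread need not) can be sketched as follows. This is an illustrative toy, not DFSInputStream itself; the class name and the byte[] standing in for block data are hypothetical:

```java
// Illustrative sketch (not DFSInputStream) of why positional reads can
// avoid the stream lock: pread() carries its own offset, so it never
// touches the shared cursor that seek()+read() must guard.
public class PositionalReadDemo {
    private final byte[] data;          // stands in for the block data
    private final Object posLock = new Object();
    private long pos;                   // shared cursor used by read()

    public PositionalReadDemo(byte[] data) {
        this.data = data;
    }

    /** Stateful read: serializes on posLock because it moves the cursor. */
    public int read(byte[] buf, int off, int len) {
        synchronized (posLock) {
            int n = Math.min(len, data.length - (int) pos);
            if (n <= 0) {
                return -1;
            }
            System.arraycopy(data, (int) pos, buf, off, n);
            pos += n;
            return n;
        }
    }

    /** Positional read: no shared cursor, so no need for posLock. */
    public int pread(long position, byte[] buf, int off, int len) {
        int n = Math.min(len, (int) (data.length - position));
        if (n <= 0) {
            return -1;
        }
        System.arraycopy(data, (int) position, buf, off, n);
        return n;   // cursor untouched; concurrent preads don't block
    }
}
```

Since pread() never reads or writes pos, any number of preads can proceed concurrently with each other; only the cursor-moving read() path needs the lock, which is the property the optimization in this jira is after.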


> A minor optimization to avoid pread() be blocked by read() inside the same 
> DFSInputStream
> -
>
> Key: HDFS-6735
> URL: https://issues.apache.org/jira/browse/HDFS-6735
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs-client
>Affects Versions: 3.0.0
>Reporter: Liang Xie
>Assignee: Liang Xie
> Attachments: HDFS-6735-v2.txt, HDFS-6735.txt
>
>
> In the current DFSInputStream impl, there are a couple of coarse-grained locks 
> in the read/pread path, and it has become an HBase read latency pain point so 
> far. In HDFS-6698, I made a minor patch against the first encountered lock, 
> around getFileLength; indeed, after reading code and testing, there are still 
> other locks we could improve.
> In this jira, I'll make a patch against the other locks, and a simple test 
> case to show the issue and the improved result.
> This is important for the HBase application, since in the current HFile read 
> path, we issue all read()/pread() requests on the same DFSInputStream for one 
> HFile.  (A multi-stream solution is another story I had planned to do, but it 
> will probably take more time than I expected.)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-6735) A minor optimization to avoid pread() be blocked by read() inside the same DFSInputStream

2014-11-01 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6735?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14193634#comment-14193634
 ] 

Hadoop QA commented on HDFS-6735:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12657297/HDFS-6735-v2.txt
  against trunk revision 5c0381c.

{color:red}-1 patch{color}.  The patch command could not apply the patch.

Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/8624//console

This message is automatically generated.

> A minor optimization to avoid pread() be blocked by read() inside the same 
> DFSInputStream
> -
>
> Key: HDFS-6735
> URL: https://issues.apache.org/jira/browse/HDFS-6735
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs-client
>Affects Versions: 3.0.0
>Reporter: Liang Xie
>Assignee: Liang Xie
> Attachments: HDFS-6735-v2.txt, HDFS-6735.txt
>
>
> In the current DFSInputStream impl, there are a couple of coarse-grained locks 
> in the read/pread path, and it has become an HBase read latency pain point so 
> far. In HDFS-6698, I made a minor patch against the first encountered lock, 
> around getFileLength; indeed, after reading code and testing, there are still 
> other locks we could improve.
> In this jira, I'll make a patch against the other locks, and a simple test 
> case to show the issue and the improved result.
> This is important for the HBase application, since in the current HFile read 
> path, we issue all read()/pread() requests on the same DFSInputStream for one 
> HFile.  (A multi-stream solution is another story I had planned to do, but it 
> will probably take more time than I expected.)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7147) Update archival storage user documentation

2014-11-01 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7147?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14193660#comment-14193660
 ] 

Hadoop QA commented on HDFS-7147:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12678735/h7147_20141101.patch
  against trunk revision 5c0381c.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+0 tests included{color}.  The patch appears to be a 
documentation patch that doesn't require tests.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:red}-1 findbugs{color}.  The patch appears to introduce 1 new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-hdfs-project/hadoop-hdfs:

  org.apache.hadoop.hdfs.TestLeaseRecovery2
  org.apache.hadoop.hdfs.server.namenode.TestCacheDirectives

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/8623//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HDFS-Build/8623//artifact/patchprocess/newPatchFindbugsWarningshadoop-hdfs.html
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/8623//console

This message is automatically generated.

> Update archival storage user documentation
> --
>
> Key: HDFS-7147
> URL: https://issues.apache.org/jira/browse/HDFS-7147
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: documentation
>Reporter: Tsz Wo Nicholas Sze
>Assignee: Tsz Wo Nicholas Sze
>Priority: Blocker
> Attachments: h7147_20140926.patch, h7147_20141101.patch
>
>
> The Configurations section is no longer valid.  It should be removed.
> Also, if new APIs such as the addStoragePolicy API proposed in HDFS-7076 are 
> able to get in, the corresponding user documentation should be added.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)