Auto-Re: [jira] [Updated] (HDFS-8579) Update HDFS usage with missing options

2015-06-29 Thread wsb
Your email has been received! Thank you!

[jira] [Updated] (HDFS-8579) Update HDFS usage with missing options

2015-06-29 Thread J.Andreina (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8579?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

J.Andreina updated HDFS-8579:
-
Attachment: HDFS-8579.2.patch

Thanks [~vinayrpet] for reviewing the patch.
Updated the patch as per your comments.
Please review.

> Update HDFS usage with missing options
> --
>
> Key: HDFS-8579
> URL: https://issues.apache.org/jira/browse/HDFS-8579
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: J.Andreina
>Assignee: J.Andreina
>Priority: Minor
> Attachments: HDFS-8579-branch-2.7-1.patch, HDFS-8579-trunk-1.patch, 
> HDFS-8579.2.patch
>
>
> Update hdfs usage with missing options (fetchdt and debug)
> {noformat}
> 1.
> ./hdfs fetchdt
> fetchdt <opts> <token file>
> Options:
>   --webservice <url>  Url to contact NN on
>   --renewer <name>    Name of the delegation token renewer
>   --cancel            Cancel the delegation token
>   --renew             Renew the delegation token.  Delegation token must have 
> been fetched using the --renewer <name> option.
>   --print             Print the delegation token
> 2.
>  ./hdfs debug
> Usage: hdfs debug <command> [arguments]
> verify [-meta <metadata-file>] [-block <block-file>]
> recoverLease [-path <path>] [-retries <num-retries>]
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)



[jira] [Updated] (HDFS-8627) NPE thrown if unable to fetch token from Namenode

2015-06-29 Thread J.Andreina (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8627?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

J.Andreina updated HDFS-8627:
-
Attachment: HDFS-8627.2.patch

Thanks [~vinayrpet] for reviewing.
Attached an updated patch.
Please review.

> NPE thrown if unable to fetch token from Namenode
> -
>
> Key: HDFS-8627
> URL: https://issues.apache.org/jira/browse/HDFS-8627
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: J.Andreina
>Assignee: J.Andreina
> Attachments: HDFS-8627.1.patch, HDFS-8627.2.patch
>
>
> DelegationTokenFetcher#saveDelegationToken
> misses a check for whether the token is null.
> {code}
> Token<?> token = fs.getDelegationToken(renewer);
> Credentials cred = new Credentials();
> cred.addToken(token.getKind(), token);
> {code}
> {noformat}
> XX:~/hadoop/namenode/bin> ./hdfs fetchdt --renewer Rex 
> /home/REX/file1
> Exception in thread "main" java.lang.NullPointerException
> at 
> org.apache.hadoop.hdfs.tools.DelegationTokenFetcher.saveDelegationToken(DelegationTokenFetcher.java:181)
> at 
> org.apache.hadoop.hdfs.tools.DelegationTokenFetcher$1.run(DelegationTokenFetcher.java:126)
> at 
> java.security.AccessController.doPrivileged(AccessController.java:314)
> at javax.security.auth.Subject.doAs(Subject.java:572)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1666)
> at 
> org.apache.hadoop.hdfs.tools.DelegationTokenFetcher.main(DelegationTokenFetcher.java:114)
> {noformat}
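A minimal, self-contained sketch of the kind of null guard the report implies. The types and names below (`fetchToken`, a plain `String` token) are simplified stand-ins for the Hadoop API, not the actual patch:

```java
import java.io.IOException;

public class SaveTokenSketch {
    // Stand-in for FileSystem#getDelegationToken, which may return null
    // when the server does not issue a token (e.g. security disabled).
    static String fetchToken(String renewer) {
        return null;
    }

    // Guard against the null token before dereferencing it, instead of
    // letting an NPE escape as in the reported stack trace.
    static void saveDelegationToken(String renewer) throws IOException {
        String token = fetchToken(renewer);
        if (token == null) {
            throw new IOException(
                "Failed to fetch a delegation token for renewer: " + renewer);
        }
        // ... add the token to Credentials and write them to the token file
    }

    public static void main(String[] args) {
        try {
            saveDelegationToken("Rex");
        } catch (IOException e) {
            System.out.println("IOException: " + e.getMessage());
        }
    }
}
```

The point is simply to turn the `NullPointerException` into a descriptive `IOException` at the call site that knows why the token is missing.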






[jira] [Commented] (HDFS-8692) Fix test case failures o.a.h.h.TestHDFSFileSystemContract and TestWebHdfsFileSystemContract.testListStatus

2015-06-29 Thread Vinayakumar B (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8692?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14607552#comment-14607552
 ] 

Vinayakumar B commented on HDFS-8692:
-

Since the changes are only in a common-module file, the HDFS tests are not run.

To verify, I suggest adding one more test in common itself, with 
LocalFileSystem, maybe {{TestLocalFileSystemContract}}, and moving this Jira to 
common.

Coming to the patch: it just reverts the earlier change done in HADOOP-12009.
Instead, it can be fixed as below, keeping the intention of HADOOP-12009 intact.
{code}
 paths = fs.listStatus(path("/test/hadoop"));
 assertEquals(3, paths.length);
-ArrayList<String> list = new ArrayList<String>();
+ArrayList<Path> list = new ArrayList<Path>();
 for (FileStatus fileState : paths) {
-  list.add(fileState.getPath().toString());
+  list.add(fileState.getPath());
 }
 assertTrue(list.contains(path("/test/hadoop/a")));
 assertTrue(list.contains(path("/test/hadoop/b")));
{code}
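The intent of that diff can be illustrated without Hadoop on the classpath. Here `java.net.URI` stands in for Hadoop's `Path` (an assumption for illustration only): `listStatus()` returns fully qualified paths, so comparing raw strings against a bare expected path fails, while comparing typed objects that were qualified the same way succeeds.

```java
import java.net.URI;
import java.util.ArrayList;
import java.util.List;

public class ListStatusSketch {
    public static void main(String[] args) {
        // listStatus() yields fully qualified paths (scheme + authority),
        // while the test builds bare paths like "/test/hadoop/a".
        URI base = URI.create("webhdfs://nn:50070/");
        URI fromStatus = base.resolve("/test/hadoop/a");

        // Comparing raw strings fails: the qualified form differs.
        List<String> byString = new ArrayList<>();
        byString.add(fromStatus.toString());
        System.out.println(byString.contains("/test/hadoop/a")); // false

        // Comparing typed objects succeeds once the expected path is
        // qualified the same way, which is what path("/test/hadoop/a")
        // does against the test filesystem.
        List<URI> typed = new ArrayList<>();
        typed.add(fromStatus);
        System.out.println(typed.contains(base.resolve("/test/hadoop/a"))); // true
    }
}
```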

> Fix test case failures o.a.h.h.TestHDFSFileSystemContract and 
> TestWebHdfsFileSystemContract.testListStatus
> --
>
> Key: HDFS-8692
> URL: https://issues.apache.org/jira/browse/HDFS-8692
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Brahma Reddy Battula
>Assignee: Brahma Reddy Battula
> Attachments: HDFS-8692-001
>
>
>  *Jenkins Report* 
> https://builds.apache.org/job/PreCommit-HDFS-Build/11529/testReport/
>  *Error Log* 
> {noformat}
> junit.framework.AssertionFailedError: null
>   at junit.framework.Assert.fail(Assert.java:55)
>   at junit.framework.Assert.assertTrue(Assert.java:22)
>   at junit.framework.Assert.assertTrue(Assert.java:31)
>   at junit.framework.TestCase.assertTrue(TestCase.java:201)
>   at 
> org.apache.hadoop.fs.FileSystemContractBaseTest.testListStatus(FileSystemContractBaseTest.java:232)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at junit.framework.TestCase.runTest(TestCase.java:176)
>   at junit.framework.TestCase.runBare(TestCase.java:141)
>   at junit.framework.TestResult$1.protect(TestResult.java:122)
>   at junit.framework.TestResult.runProtected(TestResult.java:142)
>   at junit.framework.TestResult.run(TestResult.java:125)
>   at junit.framework.TestCase.run(TestCase.java:129)
>   at junit.framework.TestSuite.runTest(TestSuite.java:255)
>   at junit.framework.TestSuite.run(TestSuite.java:250)
>   at 
> org.junit.internal.runners.JUnit38ClassRunner.run(JUnit38ClassRunner.java:84)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:264)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:153)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:124)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:200)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:153)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:103)
> {noformat}






[jira] [Assigned] (HDFS-8691) Cleanup BlockScanner initialization and add test for configuration contract

2015-06-29 Thread Jagadesh Kiran N (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8691?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jagadesh Kiran N reassigned HDFS-8691:
--

Assignee: Jagadesh Kiran N

> Cleanup BlockScanner initialization and add test for configuration contract
> ---
>
> Key: HDFS-8691
> URL: https://issues.apache.org/jira/browse/HDFS-8691
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode, test
>Reporter: Arpit Agarwal
>Assignee: Jagadesh Kiran N
>
> The initialization of the BlockScanner can be simplified by moving out test 
> hooks. Tests can be modified to use configuration only.
> Also we need an additional test case to verify the behavior with positive, 
> negative and zero values of {{dfs.datanode.scan.period.hours}} for 
> compatibility.






[jira] [Commented] (HDFS-8493) Consolidate truncate() related implementation in a single class

2015-06-29 Thread Rakesh R (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8493?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14607534#comment-14607534
 ] 

Rakesh R commented on HDFS-8493:


Thanks [~wheat9] for the great help in reviewing and committing the patch.
Thanks [~vinayrpet] for the great help in reviews.

> Consolidate truncate() related implementation in a single class
> ---
>
> Key: HDFS-8493
> URL: https://issues.apache.org/jira/browse/HDFS-8493
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Haohui Mai
>Assignee: Rakesh R
> Fix For: 2.8.0
>
> Attachments: HDFS-8493-001.patch, HDFS-8493-002.patch, 
> HDFS-8493-003.patch, HDFS-8493-004.patch, HDFS-8493-005.patch, 
> HDFS-8493-006.patch, HDFS-8493-007.patch, HDFS-8493-007.patch, 
> HDFS-8493-009.patch, HDFS-8493-010.patch, HDFS-8493-011.patch
>
>
> This jira proposes to consolidate truncate() related methods into a single 
> class.







[jira] [Commented] (HDFS-6540) TestOfflineImageViewer.outputOfLSVisitor fails for certain usernames

2015-06-29 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6540?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14607529#comment-14607529
 ] 

Hadoop QA commented on HDFS-6540:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:red}-1{color} | patch |   0m  0s | The patch command could not apply 
the patch during dryrun. |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12650599/HDFS-6540-branch-2.4.patch
 |
| Optional Tests | javac unit findbugs checkstyle |
| git revision | branch-2 / eccd4f2 |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/11543/console |


This message was automatically generated.

> TestOfflineImageViewer.outputOfLSVisitor fails for certain usernames
> 
>
> Key: HDFS-6540
> URL: https://issues.apache.org/jira/browse/HDFS-6540
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.4.0
>Reporter: Sangjin Lee
>Assignee: Sangjin Lee
>Priority: Minor
>  Labels: BB2015-05-TBR
> Attachments: HDFS-6540-branch-2.4.patch, HDFS-6540.patch
>
>
> TestOfflineImageViewer.outputOfLSVisitor() fails if the username contains "-" 
> (dash). A dash is a valid character in a username.





[jira] [Commented] (HDFS-5517) Lower the default maximum number of blocks per file

2015-06-29 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5517?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14607527#comment-14607527
 ] 

Hadoop QA commented on HDFS-5517:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:red}-1{color} | patch |   0m  0s | The patch command could not apply 
the patch during dryrun. |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12614116/HDFS-5517.patch |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / d3797f9 |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/11542/console |


This message was automatically generated.

> Lower the default maximum number of blocks per file
> ---
>
> Key: HDFS-5517
> URL: https://issues.apache.org/jira/browse/HDFS-5517
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.2.0
>Reporter: Aaron T. Myers
>Assignee: Aaron T. Myers
>  Labels: BB2015-05-TBR
> Attachments: HDFS-5517.patch
>
>
> We introduced the maximum number of blocks per file in HDFS-4305, but we set 
> the default to 1MM (one million). In practice this limit is so high as to 
> never be hit, whereas we know that an individual file with tens of thousands 
> of blocks can cause problems. We should lower the default value, in my 
> opinion to 10k.
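The knob in question is {{dfs.namenode.fs-limits.max-blocks-per-file}}, introduced by HDFS-4305. Lowering it as proposed would look like the following hdfs-site.xml fragment (10000 is the value suggested in this comment, not a committed default):

```xml
<property>
  <name>dfs.namenode.fs-limits.max-blocks-per-file</name>
  <!-- Default in 2.x is 1048576 (~1MM); the proposal is to lower it to 10k. -->
  <value>10000</value>
</property>
```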






[jira] [Commented] (HDFS-8468) 2 RPC calls for every file read in DFSClient#open(..) resulting in double Audit log entries

2015-06-29 Thread Vinayakumar B (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8468?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14607503#comment-14607503
 ] 

Vinayakumar B commented on HDFS-8468:
-

Thanks [~hitliuyi]. Will commit it shortly.

> 2 RPC calls for every file read in DFSClient#open(..) resulting in double 
> Audit log entries
> ---
>
> Key: HDFS-8468
> URL: https://issues.apache.org/jira/browse/HDFS-8468
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: HDFS-7285
>Reporter: Vinayakumar B
>Assignee: Vinayakumar B
> Attachments: HDFS-8468-HDFS-7285-02.patch, HDFS-8468-HDFS-7285.patch
>
>
> In the HDFS-7285 branch, two RPCs are made to the Namenode to determine 
> whether a file is striped and to get the schema for the file.
> This results in double audit log entries for every file read, for both 
> striped and non-striped files.
> This will have a major impact on the size of the audit logs.







[jira] [Commented] (HDFS-8673) HDFS reports file already exists if there is a file/dir name end with ._COPYING_

2015-06-29 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8673?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14607501#comment-14607501
 ] 

Hadoop QA commented on HDFS-8673:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | pre-patch |  16m 37s | Pre-patch trunk compilation is 
healthy. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:green}+1{color} | tests included |   0m  0s | The patch appears to 
include 1 new or modified test files. |
| {color:green}+1{color} | javac |   7m 32s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |   9m 35s | There were no new javadoc 
warning messages. |
| {color:green}+1{color} | release audit |   0m 24s | The applied patch does 
not increase the total number of release audit warnings. |
| {color:red}-1{color} | checkstyle |   1m  7s | The applied patch generated  3 
new checkstyle issues (total was 15, now 18). |
| {color:green}+1{color} | whitespace |   0m  0s | The patch has no lines that 
end in whitespace. |
| {color:green}+1{color} | install |   1m 35s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 35s | The patch built with 
eclipse:eclipse. |
| {color:green}+1{color} | findbugs |   1m 49s | The patch does not introduce 
any new Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | common tests |  22m  0s | Tests passed in 
hadoop-common. |
| | |  61m 18s | |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12742686/HDFS-8673.002.patch |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / d3797f9 |
| checkstyle |  
https://builds.apache.org/job/PreCommit-HDFS-Build/11540/artifact/patchprocess/diffcheckstylehadoop-common.txt
 |
| hadoop-common test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/11540/artifact/patchprocess/testrun_hadoop-common.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/11540/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf905.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/11540/console |


This message was automatically generated.

> HDFS reports file already exists if there is a file/dir name end with 
> ._COPYING_
> 
>
> Key: HDFS-8673
> URL: https://issues.apache.org/jira/browse/HDFS-8673
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 2.7.0
>Reporter: Chen He
> Attachments: HDFS-8673.000-WIP.patch, HDFS-8673.000.patch, 
> HDFS-8673.001.patch, HDFS-8673.002.patch
>
>
> Because the CLI uses CommandWithDestination.java, which appends "._COPYING_" 
> to the tail of the file name while it performs the copy, a problem occurs if 
> a file/dir named *._COPYING_ already exists on HDFS.
> For file:
> -bash-4.1$ hadoop fs -put 5M /user/occ/
> -bash-4.1$ hadoop fs -mv /user/occ/5M /user/occ/5M._COPYING_
> -bash-4.1$ hadoop fs -ls /user/occ/
> Found 1 items
> -rw-r--r--   1 occ supergroup5242880 2015-06-26 05:16 
> /user/occ/5M._COPYING_
> -bash-4.1$ hadoop fs -put 128K /user/occ/5M
> -bash-4.1$ hadoop fs -ls /user/occ/
> Found 1 items
> -rw-r--r--   1 occ supergroup 131072 2015-06-26 05:19 /user/occ/5M
> For dir:
> -bash-4.1$ hadoop fs -mkdir /user/occ/5M._COPYING_
> -bash-4.1$ hadoop fs -ls /user/occ/
> Found 1 items
> drwxr-xr-x   - occ supergroup  0 2015-06-26 05:24 
> /user/occ/5M._COPYING_
> -bash-4.1$ hadoop fs -put 128K /user/occ/5M
> put: /user/occ/5M._COPYING_ already exists as a directory
> -bash-4.1$ hadoop fs -ls /user/occ/
> (/user/occ/5M._COPYING_ is gone)
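The staging behavior behind both symptoms can be sketched with java.nio as a local-filesystem analogue (this is not the actual CommandWithDestination code): the copy writes to `<target>._COPYING_`, then renames it over the target, so anything already occupying the staging name either gets clobbered (a file) or makes the copy fail (a directory).

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;

public class CopyingSuffixSketch {
    static final String SUFFIX = "._COPYING_";

    // Local-filesystem analogue of the staged copy: write to
    // <target>._COPYING_, then rename over <target> when complete.
    static void stagedCopy(Path src, Path target) throws IOException {
        Path staging = target.resolveSibling(target.getFileName() + SUFFIX);
        // A pre-existing file at the staging name is silently replaced,
        // and a pre-existing non-empty directory makes the copy fail --
        // the two symptoms the report shows.
        Files.copy(src, staging, StandardCopyOption.REPLACE_EXISTING);
        Files.move(staging, target, StandardCopyOption.REPLACE_EXISTING);
    }

    public static void main(String[] args) throws IOException {
        Path dir = Files.createTempDirectory("copying-demo");
        Path src = Files.writeString(dir.resolve("src"), "data");
        stagedCopy(src, dir.resolve("5M"));
        System.out.println(Files.exists(dir.resolve("5M")));
        System.out.println(Files.exists(dir.resolve("5M._COPYING_")));
    }
}
```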





[jira] [Commented] (HDFS-8468) 2 RPC calls for every file read in DFSClient#open(..) resulting in double Audit log entries

2015-06-29 Thread Yi Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8468?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14607499#comment-14607499
 ] 

Yi Liu commented on HDFS-8468:
--

+1, thanks Vinay.

> 2 RPC calls for every file read in DFSClient#open(..) resulting in double 
> Audit log entries
> ---
>
> Key: HDFS-8468
> URL: https://issues.apache.org/jira/browse/HDFS-8468
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: HDFS-7285
>Reporter: Vinayakumar B
>Assignee: Vinayakumar B
> Attachments: HDFS-8468-HDFS-7285-02.patch, HDFS-8468-HDFS-7285.patch
>
>
> In the HDFS-7285 branch, two RPCs are made to the Namenode to determine 
> whether a file is striped and to get the schema for the file.
> This results in double audit log entries for every file read, for both 
> striped and non-striped files.
> This will have a major impact on the size of the audit logs.






[jira] [Commented] (HDFS-8692) Fix test case failures o.a.h.h.TestHDFSFileSystemContract and TestWebHdfsFileSystemContract.testListStatus

2015-06-29 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8692?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14607495#comment-14607495
 ] 

Hadoop QA commented on HDFS-8692:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | patch |   0m  1s | The patch file was not named 
according to hadoop's naming conventions. Please see 
https://wiki.apache.org/hadoop/HowToContribute for instructions. |
| {color:red}-1{color} | pre-patch |   5m 50s | Findbugs (version ) appears to 
be broken on trunk. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:green}+1{color} | tests included |   0m  0s | The patch appears to 
include 1 new or modified test files. |
| {color:green}+1{color} | javac |   7m 52s | There were no new javac warning 
messages. |
| {color:green}+1{color} | release audit |   0m 19s | The applied patch does 
not increase the total number of release audit warnings. |
| {color:green}+1{color} | checkstyle |   0m 42s | There were no new checkstyle 
issues. |
| {color:green}+1{color} | whitespace |   0m  0s | The patch has no lines that 
end in whitespace. |
| {color:green}+1{color} | install |   1m 38s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 31s | The patch built with 
eclipse:eclipse. |
| {color:green}+1{color} | findbugs |   1m 51s | The patch does not introduce 
any new Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | common tests |  22m 27s | Tests passed in 
hadoop-common. |
| | |  41m 13s | |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12742700/HDFS-8692-001 |
| Optional Tests | javac unit findbugs checkstyle |
| git revision | trunk / d3797f9 |
| hadoop-common test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/11541/artifact/patchprocess/testrun_hadoop-common.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/11541/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf904.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/11541/console |


This message was automatically generated.

> Fix test case failures o.a.h.h.TestHDFSFileSystemContract and 
> TestWebHdfsFileSystemContract.testListStatus
> --
>
> Key: HDFS-8692
> URL: https://issues.apache.org/jira/browse/HDFS-8692
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Brahma Reddy Battula
>Assignee: Brahma Reddy Battula
> Attachments: HDFS-8692-001
>
>
>  *Jenkins Report* 
> https://builds.apache.org/job/PreCommit-HDFS-Build/11529/testReport/
>  *Error Log* 
> {noformat}
> junit.framework.AssertionFailedError: null
>   at junit.framework.Assert.fail(Assert.java:55)
>   at junit.framework.Assert.assertTrue(Assert.java:22)
>   at junit.framework.Assert.assertTrue(Assert.java:31)
>   at junit.framework.TestCase.assertTrue(TestCase.java:201)
>   at 
> org.apache.hadoop.fs.FileSystemContractBaseTest.testListStatus(FileSystemContractBaseTest.java:232)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at junit.framework.TestCase.runTest(TestCase.java:176)
>   at junit.framework.TestCase.runBare(TestCase.java:141)
>   at junit.framework.TestResult$1.protect(TestResult.java:122)
>   at junit.framework.TestResult.runProtected(TestResult.java:142)
>   at junit.framework.TestResult.run(TestResult.java:125)
>   at junit.framework.TestCase.run(TestCase.java:129)
>   at junit.framework.TestSuite.runTest(TestSuite.java:255)
>   at junit.framework.TestSuite.run(TestSuite.java:250)
>   at 
> org.junit.internal.runners.JUnit38ClassRunner.run(JUnit38ClassRunner.java:84)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:264)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:153)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:124)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:200)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:153)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:103)
> {noformat}





[jira] [Commented] (HDFS-8468) 2 RPC calls for every file read in DFSClient#open(..) resulting in double Audit log entries

2015-06-29 Thread Vinayakumar B (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8468?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14607199#comment-14607199
 ] 

Vinayakumar B commented on HDFS-8468:
-

Thanks [~hitliuyi] for the review.
Updated patch.

> 2 RPC calls for every file read in DFSClient#open(..) resulting in double 
> Audit log entries
> ---
>
> Key: HDFS-8468
> URL: https://issues.apache.org/jira/browse/HDFS-8468
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: HDFS-7285
>Reporter: Vinayakumar B
>Assignee: Vinayakumar B
> Attachments: HDFS-8468-HDFS-7285-02.patch, HDFS-8468-HDFS-7285.patch
>
>
> In the HDFS-7285 branch, two RPCs are made to the Namenode to determine 
> whether a file is striped and to get the schema for the file.
> This results in double audit log entries for every file read, for both 
> striped and non-striped files.
> This will have a major impact on the size of the audit logs.





[jira] [Updated] (HDFS-8468) 2 RPC calls for every file read in DFSClient#open(..) resulting in double Audit log entries

2015-06-29 Thread Vinayakumar B (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8468?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinayakumar B updated HDFS-8468:

Target Version/s: HDFS-7285

> 2 RPC calls for every file read in DFSClient#open(..) resulting in double 
> Audit log entries
> ---
>
> Key: HDFS-8468
> URL: https://issues.apache.org/jira/browse/HDFS-8468
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: HDFS-7285
>Reporter: Vinayakumar B
>Assignee: Vinayakumar B
> Attachments: HDFS-8468-HDFS-7285-02.patch, HDFS-8468-HDFS-7285.patch
>
>
> In the HDFS-7285 branch, two RPCs are made to the Namenode to determine 
> whether a file is striped and to get the schema for the file.
> This results in double audit log entries for every file read, for both 
> striped and non-striped files.
> This will have a major impact on the size of the audit logs.






[jira] [Updated] (HDFS-8468) 2 RPC calls for every file read in DFSClient#open(..) resulting in double Audit log entries

2015-06-29 Thread Vinayakumar B (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8468?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinayakumar B updated HDFS-8468:

Attachment: HDFS-8468-HDFS-7285-02.patch

> 2 RPC calls for every file read in DFSClient#open(..) resulting in double 
> Audit log entries
> ---
>
> Key: HDFS-8468
> URL: https://issues.apache.org/jira/browse/HDFS-8468
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: HDFS-7285
>Reporter: Vinayakumar B
>Assignee: Vinayakumar B
> Attachments: HDFS-8468-HDFS-7285-02.patch, HDFS-8468-HDFS-7285.patch
>
>
> In the HDFS-7285 branch, two RPCs are made to the Namenode to determine 
> whether a file is striped and to get the schema for the file.
> This results in double audit log entries for every file read, for both 
> striped and non-striped files.
> This will have a major impact on the size of the audit logs.







[jira] [Commented] (HDFS-8697) Refactor DecommissionManager: more generic method names and misc cleanup

2015-06-29 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8697?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14607064#comment-14607064
 ] 

Hadoop QA commented on HDFS-8697:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:red}-1{color} | pre-patch |  17m 45s | Pre-patch trunk has 1 extant 
Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:red}-1{color} | tests included |   0m  0s | The patch doesn't appear 
to include any new or modified tests.  Please justify why no new tests are 
needed for this patch. Also please list what manual steps were performed to 
verify this patch. |
| {color:green}+1{color} | javac |   7m 30s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |   9m 34s | There were no new javadoc 
warning messages. |
| {color:green}+1{color} | release audit |   0m 22s | The applied patch does 
not increase the total number of release audit warnings. |
| {color:red}-1{color} | checkstyle |   2m 17s | The applied patch generated  1 
new checkstyle issues (total was 3, now 3). |
| {color:green}+1{color} | whitespace |   0m  0s | The patch has no lines that 
end in whitespace. |
| {color:green}+1{color} | install |   1m 33s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 33s | The patch built with 
eclipse:eclipse. |
| {color:green}+1{color} | findbugs |   3m 15s | The patch does not introduce 
any new Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | native |   3m 14s | Pre-build of native portion |
| {color:red}-1{color} | hdfs tests | 161m 16s | Tests failed in hadoop-hdfs. |
| | | 207m 22s | |
\\
\\
|| Reason || Tests ||
| Failed unit tests | hadoop.hdfs.web.TestWebHdfsFileSystemContract |
|   | hadoop.hdfs.TestHDFSFileSystemContract |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12742667/HDFS-8697.00.patch |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / d3797f9 |
| Pre-patch Findbugs warnings | 
https://builds.apache.org/job/PreCommit-HDFS-Build/11539/artifact/patchprocess/trunkFindbugsWarningshadoop-hdfs.html
 |
| checkstyle |  
https://builds.apache.org/job/PreCommit-HDFS-Build/11539/artifact/patchprocess/diffcheckstylehadoop-hdfs.txt
 |
| hadoop-hdfs test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/11539/artifact/patchprocess/testrun_hadoop-hdfs.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/11539/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf901.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/11539/console |


This message was automatically generated.

> Refactor DecommissionManager: more generic method names and misc cleanup
> 
>
> Key: HDFS-8697
> URL: https://issues.apache.org/jira/browse/HDFS-8697
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: namenode
>Affects Versions: 2.7.0
>Reporter: Zhe Zhang
>Assignee: Zhe Zhang
> Attachments: HDFS-8697.00.patch
>
>
> This JIRA merges the changes in {{DecommissionManager}} from the HDFS-7285 
> branch, including changing a few method names to be more generic 
> ({{replicated}} -> {{stored}}), and some cleanups.





Auto-Re: [jira] [Updated] (HDFS-6128) Implement libhdfs bindings for HDFS ACL APIs.

2015-06-29 Thread wsb
Your email has been received! Thank you!

[jira] [Updated] (HDFS-6128) Implement libhdfs bindings for HDFS ACL APIs.

2015-06-29 Thread Vinod Kumar Vavilapalli (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6128?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinod Kumar Vavilapalli updated HDFS-6128:
--
Target Version/s: 2.8.0  (was: 2.6.0)

Moving features/enhancements out of previously closed releases into the next 
minor release 2.8.0.

> Implement libhdfs bindings for HDFS ACL APIs.
> -
>
> Key: HDFS-6128
> URL: https://issues.apache.org/jira/browse/HDFS-6128
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: libhdfs
>Affects Versions: 2.4.0
>Reporter: Chris Nauroth
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-5293) Symlink resolution requires unnecessary RPCs

2015-06-29 Thread Vinod Kumar Vavilapalli (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5293?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinod Kumar Vavilapalli updated HDFS-5293:
--
Target Version/s: 2.8.0  (was: 3.0.0, 2.4.0)

Moving features/enhancements out of previously closed releases into the next 
minor release 2.8.0.

> Symlink resolution requires unnecessary RPCs
> 
>
> Key: HDFS-5293
> URL: https://issues.apache.org/jira/browse/HDFS-5293
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Affects Versions: 2.0.0-alpha, 3.0.0
>Reporter: Daryn Sharp
>Priority: Critical
>
> When the NN encounters a symlink, it throws an {{UnresolvedLinkException}}.  
> This exception contains only the path that is a symlink.  The client issues 
> another RPC to obtain the link target, followed by another RPC with the link 
> target + remainder of the original path.
> {{UnresolvedLinkException}} should be returning both the link and the target 
> to avoid a costly and unnecessary intermediate RPC to obtain the link target.
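
The two-hop resolution described above can be modeled in a few lines. The sketch below uses illustrative names only (not the real Hadoop client code); every method call stands in for one NameNode RPC, which makes the avoidable round trip visible.

```java
// Toy model of the client-side symlink resolution described above.
import java.util.HashMap;
import java.util.Map;

public class SymlinkRpcModel {
    static final Map<String, String> links = new HashMap<>(); // link path -> target
    static int rpcs = 0;

    // One "open" RPC: fails with the link path if the path crosses a symlink.
    static String open(String path) {
        rpcs++;
        for (Map.Entry<String, String> e : links.entrySet()) {
            if (path.startsWith(e.getKey() + "/")) {
                // Stands in for UnresolvedLinkException, which today carries only the link.
                throw new IllegalStateException(e.getKey());
            }
        }
        return "data:" + path;
    }

    // The extra RPC this JIRA wants to eliminate.
    static String getLinkTarget(String link) {
        rpcs++;
        return links.get(link);
    }

    public static void main(String[] args) {
        links.put("/dir/link", "/real");
        String result;
        try {
            result = open("/dir/link/file");
        } catch (IllegalStateException ex) {
            String link = ex.getMessage();
            String target = getLinkTarget(link); // avoidable if the exception carried the target
            result = open(target + "/dir/link/file".substring(link.length()));
        }
        System.out.println(result + " in " + rpcs + " RPCs"); // data:/real/file in 3 RPCs
    }
}
```

If the exception returned both link and target, the getLinkTarget call disappears and the same open succeeds in 2 RPCs.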



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-4754) Add an API in the namenode to mark a datanode as stale

2015-06-29 Thread Vinod Kumar Vavilapalli (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4754?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinod Kumar Vavilapalli updated HDFS-4754:
--
Target Version/s: 2.8.0  (was: 3.0.0, 2.5.0)

Moving features/enhancements out of previously closed releases into the next 
minor release 2.8.0.

> Add an API in the namenode to mark a datanode as stale
> --
>
> Key: HDFS-4754
> URL: https://issues.apache.org/jira/browse/HDFS-4754
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs-client, namenode
>Reporter: Nicolas Liochon
>Assignee: Nicolas Liochon
>Priority: Critical
>  Labels: BB2015-05-TBR
> Attachments: 4754.v1.patch, 4754.v2.patch, 4754.v4.patch, 
> 4754.v4.patch
>
>
> There is a detection of the stale datanodes in HDFS since HDFS-3703, with a 
> timeout, defaulted to 30s.
> There are two reasons to add an API to mark a node as stale even if the 
> timeout is not yet reached:
>  1) ZooKeeper can detect that a client is dead at any moment. So, for HBase, 
> we sometimes start the recovery before a node is marked stale (even with 
> reasonable settings such as stale: 20s; HBase ZK timeout: 30s).
>  2) Some third parties could detect that a node is dead before the timeout, 
> hence saving us the cost of retrying. An example of such hardware is Arista, 
> presented here by [~tsuna] 
> http://tsunanet.net/~tsuna/fsf-hbase-meetup-april13.pdf, and confirmed in 
> HBASE-6290.
> As usual, even if the node is dead it can come back before the 10-minute 
> limit. So I would propose to set a time bound. The API would be
> namenode.markStale(String ipAddress, int port, long durationInMs);
> After durationInMs, the namenode would again rely only on its heartbeat to 
> decide.
> Thoughts?
> If there are no objections, and if nobody in the HDFS dev team has time to 
> spend on it, I will give it a try for branches 2 & 3.
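
The proposed time-bounded markStale API could look roughly like the sketch below: markStale records an expiry, and isStale honors the manual mark until it lapses, then falls back to the heartbeat-based check. Class and method names (and the extra "nowMs" parameter, added only to keep the sketch testable) are illustrative, not actual NameNode code.

```java
// Hedged sketch of namenode.markStale(ipAddress, port, durationInMs) with a time bound.
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class StaleMarker {
    private final Map<String, Long> manualStaleUntil = new ConcurrentHashMap<>();

    // Record that this datanode should be treated as stale until now + duration.
    void markStale(String ipAddress, int port, long durationInMs, long nowMs) {
        manualStaleUntil.put(ipAddress + ":" + port, nowMs + durationInMs);
    }

    boolean isStale(String ipAddress, int port, long nowMs, boolean heartbeatStale) {
        Long until = manualStaleUntil.get(ipAddress + ":" + port);
        if (until != null && nowMs < until) {
            return true;        // manual mark is still in force
        }
        return heartbeatStale;  // after expiry, rely on heartbeats again
    }

    public static void main(String[] args) {
        StaleMarker marker = new StaleMarker();
        marker.markStale("10.0.0.1", 50010, 20_000, 0);
        System.out.println(marker.isStale("10.0.0.1", 50010, 10_000, false)); // true
        System.out.println(marker.isStale("10.0.0.1", 50010, 30_000, false)); // false
    }
}
```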



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-6884) Include the hostname in HTTPFS log filenames

2015-06-29 Thread Vinod Kumar Vavilapalli (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6884?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinod Kumar Vavilapalli updated HDFS-6884:
--
Target Version/s: 2.8.0  (was: 2.6.0)

Moving features/enhancements out of previously closed releases into the next 
minor release 2.8.0.

> Include the hostname in HTTPFS log filenames
> 
>
> Key: HDFS-6884
> URL: https://issues.apache.org/jira/browse/HDFS-6884
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 2.5.0
>Reporter: Andrew Wang
>Assignee: Alejandro Abdelnur
>
> It'd be good to include the hostname in the httpfs log filenames. Right now 
> we have httpfs.log and httpfs-audit.log, it'd be nice to have e.g. 
> "httpfs-${hostname}.log".



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-6658) Namenode memory optimization - Block replicas list

2015-06-29 Thread Vinod Kumar Vavilapalli (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6658?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinod Kumar Vavilapalli updated HDFS-6658:
--
Target Version/s: 2.8.0  (was: 2.6.0)

Moving features/enhancements out of previously closed releases into the next 
minor release 2.8.0.

> Namenode memory optimization - Block replicas list 
> ---
>
> Key: HDFS-6658
> URL: https://issues.apache.org/jira/browse/HDFS-6658
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Affects Versions: 2.4.1
>Reporter: Amir Langer
>Assignee: Daryn Sharp
>  Labels: BB2015-05-TBR
> Attachments: BlockListOptimizationComparison.xlsx, BlocksMap 
> redesign.pdf, HDFS-6658.patch, HDFS-6658.patch, HDFS-6658.patch, Namenode 
> Memory Optimizations - Block replicas list.docx, New primative indexes.jpg, 
> Old triplets.jpg
>
>
> Part of the memory consumed by every BlockInfo object in the Namenode is a 
> linked list of block references for every DatanodeStorageInfo (called 
> "triplets"). 
> We propose to change the way we store the list in memory. 
> Using primitive integer indexes instead of object references will reduce the 
> memory needed for every block replica (when compressed oops is disabled) and 
> in our new design the list overhead will be per DatanodeStorageInfo and not 
> per block replica.
> see attached design doc. for details and evaluation results.
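
The core idea of the proposal — replacing per-replica object references with primitive int indexes chained through an array — can be sketched as follows. All names here are illustrative, not the actual BlocksMap redesign; on a 64-bit JVM without compressed oops a reference costs 8 bytes while an int costs 4, which is where the per-replica saving comes from.

```java
// Minimal sketch: replicas on one storage chained through an int[] "next"
// array keyed by block index, instead of a linked list of object references.
public class IntIndexedBlockList {
    private final int[] next; // next[b] = index of the next block on this storage, -1 = end
    private int head = -1;    // first block on this storage, -1 = empty list

    IntIndexedBlockList(int maxBlocks) {
        next = new int[maxBlocks];
        java.util.Arrays.fill(next, -1);
    }

    // O(1) insertion at the head, no per-replica object allocation.
    void add(int blockIndex) {
        next[blockIndex] = head;
        head = blockIndex;
    }

    // Walk the chain to count replicas on this storage.
    int count() {
        int n = 0;
        for (int b = head; b != -1; b = next[b]) {
            n++;
        }
        return n;
    }

    public static void main(String[] args) {
        IntIndexedBlockList list = new IntIndexedBlockList(10);
        list.add(3);
        list.add(7);
        list.add(1);
        System.out.println(list.count()); // 3
    }
}
```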



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)



[jira] [Updated] (HDFS-5284) Flatten INode hierarchy

2015-06-29 Thread Vinod Kumar Vavilapalli (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5284?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinod Kumar Vavilapalli updated HDFS-5284:
--
Target Version/s: 2.8.0  (was: 2.4.0)

Moving features/enhancements out of previously closed releases into the next 
minor release 2.8.0.

> Flatten INode hierarchy
> ---
>
> Key: HDFS-5284
> URL: https://issues.apache.org/jira/browse/HDFS-5284
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Affects Versions: 2.4.0
>Reporter: Tsz Wo Nicholas Sze
>Assignee: Jing Zhao
>
> Currently, we have a complicated inode hierarchy for representing different 
> states of a file or a directory.  For example,  when a file is being created, 
> it is represented by an INodeFileUnderConstruction.  When a file is being 
> closed, the inode is replaced by an INodeFile.  If it is reopened for append, 
> the inode is replaced again by an INodeFileUnderConstruction.  This JIRA is 
> to flatten the inode hierarchy.  We may also improve the performance by 
> eliminating the inode replacement in runtime.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7317) Rename StoragePolicy

2015-06-29 Thread Vinod Kumar Vavilapalli (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7317?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinod Kumar Vavilapalli updated HDFS-7317:
--
Target Version/s: 2.8.0  (was: 2.6.0)

Moving features/enhancements out of previously closed releases into the next 
minor release 2.8.0.

> Rename StoragePolicy
> 
>
> Key: HDFS-7317
> URL: https://issues.apache.org/jira/browse/HDFS-7317
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 2.6.0
>Reporter: Andrew Wang
>
> As discussed on HDFS-7285, StoragePolicy might not be the best name for what 
> StoragePolicy currently is: a hardcoded mapping to replica 
> StorageTypes with a fallback, plus the ability to specify a StorageType for 
> creation vs. replication. Ideally the "policy" is what determines the data 
> temperature in the first place, with the temperature then mapping to the 
> actual StorageTypes to use.
> There were a number of suggestions presented, e.g. StorageTag, 
> StoragePolicyTag. Let's figure this out here.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8555) Random read support on HDFS files using Indexed Namenode feature

2015-06-29 Thread Vinod Kumar Vavilapalli (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8555?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinod Kumar Vavilapalli updated HDFS-8555:
--
Target Version/s: 2.8.0  (was: 2.5.2)

Moving features/enhancements out of previously closed releases into the next 
minor release 2.8.0.

> Random read support on HDFS files using Indexed Namenode feature
> 
>
> Key: HDFS-8555
> URL: https://issues.apache.org/jira/browse/HDFS-8555
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: HDFS, hdfs-client, namenode
>Affects Versions: 2.5.2
> Environment: Linux
>Reporter: amit sehgal
>Assignee: amit sehgal
> Fix For: 3.0.0
>
>   Original Estimate: 720h
>  Remaining Estimate: 720h
>
> Currently the Namenode does not provide support for random reads. With so many 
> tools built on top of HDFS solving the use case of exploratory BI and 
> providing SQL over HDFS, the need of the hour is to reduce the number of 
> blocks read for a random read. 
> E.g. extracting, say, 10 lines worth of information out of a 100GB file should 
> read only those blocks which can potentially contain those 10 lines.
> This can be achieved by adding a per-block tagging feature in the Namenode; each 
> block written to HDFS will have tags associated with it, stored in an index.
> The Namenode, when accessed via the indexing feature, will use this index 
> natively to reduce the number of blocks returned to the client.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-6450) Support non-positional hedged reads in HDFS

2015-06-29 Thread Vinod Kumar Vavilapalli (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6450?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinod Kumar Vavilapalli updated HDFS-6450:
--
Target Version/s: 2.8.0  (was: 2.6.0)

Moving features/enhancements out of previously closed releases into the next 
minor release 2.8.0.

> Support non-positional hedged reads in HDFS
> ---
>
> Key: HDFS-6450
> URL: https://issues.apache.org/jira/browse/HDFS-6450
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 2.4.0
>Reporter: Colin Patrick McCabe
>Assignee: Liang Xie
>  Labels: BB2015-05-TBR
> Attachments: HDFS-6450-like-pread.txt
>
>
> HDFS-5776 added support for hedged positional reads.  We should also support 
> hedged non-positional reads (aka regular reads).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7087) Ability to list /.reserved

2015-06-29 Thread Vinod Kumar Vavilapalli (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7087?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinod Kumar Vavilapalli updated HDFS-7087:
--
Target Version/s: 2.8.0  (was: 2.6.0)

Moving features/enhancements out of previously closed releases into the next 
minor release 2.8.0.

> Ability to list /.reserved
> --
>
> Key: HDFS-7087
> URL: https://issues.apache.org/jira/browse/HDFS-7087
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 2.6.0
>Reporter: Andrew Wang
>
> We have two special paths within /.reserved now, /.reserved/.inodes and 
> /.reserved/raw. It seems like we should be able to list /.reserved to see 
> them.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7030) Add more unit tests for DatanodeStorageInfo

2015-06-29 Thread Vinod Kumar Vavilapalli (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7030?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinod Kumar Vavilapalli updated HDFS-7030:
--
Target Version/s: 2.8.0  (was: 2.6.0)

Moving features/enhancements out of previously closed releases into the next 
minor release 2.8.0.

> Add more unit tests for DatanodeStorageInfo
> ---
>
> Key: HDFS-7030
> URL: https://issues.apache.org/jira/browse/HDFS-7030
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Jing Zhao
>Priority: Minor
>
> Currently we already have unit tests covering DatanodeDescriptor. As pointed 
> out by [~ozawa] in HDFS-6943, we should add more unit tests for 
> DatanodeStorageInfo.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-6846) NetworkTopology#sortByDistance should give nodes higher priority, which cache the block.

2015-06-29 Thread Vinod Kumar Vavilapalli (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6846?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinod Kumar Vavilapalli updated HDFS-6846:
--
Target Version/s: 2.8.0  (was: 2.6.0)

Moving features/enhancements out of previously closed releases into the next 
minor release 2.8.0.

> NetworkTopology#sortByDistance should give nodes higher priority, which cache 
> the block.
> 
>
> Key: HDFS-6846
> URL: https://issues.apache.org/jira/browse/HDFS-6846
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Affects Versions: 2.6.0
>Reporter: Yi Liu
>Assignee: Yi Liu
>
> Currently there are 3 weights:
> * local
> * same rack
> * off rack
> But if some nodes cache the block, then it's faster if the client reads the 
> block from those nodes. So we should have some more weights, as follows:
> * local
> * cached & same rack
> * same rack
> * cached & off rack
> * off rack
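
The five-tier ordering above amounts to a small weight function. Here is a hedged sketch; the method and parameter names are illustrative, not the real NetworkTopology API.

```java
// Sketch of the proposed cache-aware weighting for sortByDistance.
import java.util.Arrays;
import java.util.Comparator;

public class CacheAwareWeight {
    static int weight(boolean isLocal, boolean sameRack, boolean cachesBlock) {
        if (isLocal) {
            return 0;                   // local
        }
        if (sameRack) {
            return cachesBlock ? 1 : 2; // cached & same rack / same rack
        }
        return cachesBlock ? 3 : 4;     // cached & off rack / off rack
    }

    public static void main(String[] args) {
        // Each node encoded as {isLocal, sameRack, cachesBlock}.
        boolean[][] nodes = {
            {false, false, false},  // off rack
            {false, true, true},    // cached & same rack
            {true, false, false},   // local
            {false, false, true},   // cached & off rack
            {false, true, false},   // same rack
        };
        Arrays.sort(nodes, Comparator.comparingInt(n -> weight(n[0], n[1], n[2])));
        for (boolean[] n : nodes) {
            System.out.println(weight(n[0], n[1], n[2])); // prints 0,1,2,3,4 on separate lines
        }
    }
}
```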



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-6782) Improve FS editlog logSync

2015-06-29 Thread Vinod Kumar Vavilapalli (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6782?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinod Kumar Vavilapalli updated HDFS-6782:
--
Target Version/s: 2.8.0  (was: 2.6.0)

Moving features/enhancements out of previously closed releases into the next 
minor release 2.8.0.

> Improve FS editlog logSync
> --
>
> Key: HDFS-6782
> URL: https://issues.apache.org/jira/browse/HDFS-6782
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Affects Versions: 2.4.1
>Reporter: Yi Liu
>Assignee: Yi Liu
>  Labels: BB2015-05-TBR
> Attachments: HDFS-6782.001.patch, HDFS-6782.002.patch
>
>
> The NN uses a double buffer (bufCurrent, bufReady) for log sync: 
> bufCurrent buffers incoming edit ops and bufReady is for flushing. 
> This is efficient. When a flush is ongoing and bufCurrent is full, the NN goes 
> into force log sync, and all new ops are blocked (since force log sync is 
> protected by the FSNamesystem write lock). After the flush finishes, the new ops 
> are still blocked, but at this point bufCurrent is actually free and ops could 
> go ahead and write to the buffer. The following diagram shows the detail. 
> This JIRA is for this improvement.  Thanks [~umamaheswararao] for confirming 
> this issue.
> {code}
> edit1(txid1) -- write to bufCurrent ---- logSync ---- (swap buffer) ---- flushing ----
> edit2(txid2) -- write to bufCurrent ---- logSync ---- waiting ----
> edit3(txid3) -- write to bufCurrent ---- logSync ---- waiting ----
> edit4(txid4) -- write to bufCurrent ---- logSync ---- waiting ----
> edit5(txid5) -- write to bufCurrent --full-- force sync ---- waiting ----
> edit6(txid6) -- blocked
> ...
> editn(txidn) -- blocked
> {code}
> After the flush, it becomes
> {code}
> edit1(txid1) -- write to bufCurrent ---- logSync ---- finished ----
> edit2(txid2) -- write to bufCurrent ---- logSync ---- flushing ----
> edit3(txid3) -- write to bufCurrent ---- logSync ---- waiting ----
> edit4(txid4) -- write to bufCurrent ---- logSync ---- waiting ----
> edit5(txid5) -- write to bufCurrent --full-- force sync ---- waiting ----
> edit6(txid6) -- blocked
> ...
> editn(txidn) -- blocked
> {code}
> After edit1 finishes, bufCurrent is free, and the thread which flushes txid2 
> will also flush txid3-5, so we should return from the force sync of edit5 
> and the FSNamesystem write lock will be freed. (Don't worry that the edit5 op 
> returns early: there will be a normal logSync after the force logSync, and it 
> will wait for the sync to finish.) This is the idea of this JIRA.
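
The double-buffer swap described above can be illustrated with a toy, single-threaded sketch. This is not the real FSEditLog (which coordinates writers and the flusher with locks); it only shows why new edits need not block while a flush drains the other buffer.

```java
// Toy double-buffer edit log: writers append to bufCurrent, logSync swaps the
// buffers and "flushes" the ready one, so new edits keep landing in bufCurrent.
import java.util.ArrayList;
import java.util.List;

public class DoubleBufferSketch {
    private List<String> bufCurrent = new ArrayList<>();
    private List<String> bufReady = new ArrayList<>();

    // New ops always land in the current buffer.
    void logEdit(String op) {
        bufCurrent.add(op);
    }

    // Swap buffers, flush the ready one; returns how many edits were flushed.
    int logSync() {
        List<String> tmp = bufReady;
        bufReady = bufCurrent;
        bufCurrent = tmp;       // current is now empty: writers are unblocked
        int flushed = bufReady.size();
        bufReady.clear();       // stands in for writing to stable storage
        return flushed;
    }

    public static void main(String[] args) {
        DoubleBufferSketch log = new DoubleBufferSketch();
        log.logEdit("edit1");
        log.logEdit("edit2");
        System.out.println(log.logSync()); // 2
        log.logEdit("edit3");
        System.out.println(log.logSync()); // 1
    }
}
```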



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7318) Rename some of the policies in default StoragePolicySuite

2015-06-29 Thread Vinod Kumar Vavilapalli (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7318?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinod Kumar Vavilapalli updated HDFS-7318:
--
Target Version/s: 2.8.0  (was: 2.6.0)

Moving features/enhancements out of previously closed releases into the next 
minor release 2.8.0.

> Rename some of the policies in default StoragePolicySuite
> -
>
> Key: HDFS-7318
> URL: https://issues.apache.org/jira/browse/HDFS-7318
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 2.6.0
>Reporter: Andrew Wang
>
> Right now we have default policies named based on temperature, e.g. HOT, 
> COLD, but also the storage type like ONESSD, ALLSSD, MEMORY. This seems 
> inconsistent, let's consider renaming.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7174) Support for more efficient large directories

2015-06-29 Thread Vinod Kumar Vavilapalli (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7174?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinod Kumar Vavilapalli updated HDFS-7174:
--
Target Version/s: 2.8.0  (was: 2.6.0)

Moving features/enhancements out of previously closed releases into the next 
minor release 2.8.0.

> Support for more efficient large directories
> 
>
> Key: HDFS-7174
> URL: https://issues.apache.org/jira/browse/HDFS-7174
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Kihwal Lee
>Assignee: Kihwal Lee
>Priority: Critical
>  Labels: BB2015-05-TBR
> Attachments: HDFS-7174.new.patch, HDFS-7174.patch, HDFS-7174.patch
>
>
> When the number of children under a directory grows very large, insertion 
> becomes very costly.  E.g. creating 1M entries takes tens of minutes.  This is 
> because the complexity of an insertion is O\(n\). As the size of the list 
> grows, the total overhead grows as O\(n^2\) (the integral of a linear 
> function).  It also causes allocations and copies of big arrays.
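
The arithmetic behind that claim: inserting the k-th child into a sorted array shifts O(k) elements, so n in-order insertions shift roughly n^2/2 elements in total.

```java
// Back-of-the-envelope for the insertion cost described above.
public class InsertCost {
    // Total element shifts for n in-order insertions: 0 + 1 + ... + (n-1).
    static long totalShifts(long n) {
        return n * (n - 1) / 2;
    }

    public static void main(String[] args) {
        System.out.println(totalShifts(1_000_000)); // ~5e11 shifts for 1M entries
    }
}
```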



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-6212) Deprecate the BackupNode and CheckpointNode from branch-2

2015-06-29 Thread Vinod Kumar Vavilapalli (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6212?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinod Kumar Vavilapalli updated HDFS-6212:
--
Target Version/s: 2.8.0  (was: 2.6.0)

Moving features/enhancements out of previously closed releases into the next 
minor release 2.8.0.

> Deprecate the BackupNode and CheckpointNode from branch-2
> -
>
> Key: HDFS-6212
> URL: https://issues.apache.org/jira/browse/HDFS-6212
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 2.3.0
>Reporter: Jing Zhao
>
> As per discussion in HDFS-4114, this jira tries to deprecate BackupNode from 
> branch-2 and change the hadoop start/stop scripts to print deprecation 
> warning.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-6122) Rebalance cached replicas between datanodes

2015-06-29 Thread Vinod Kumar Vavilapalli (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6122?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinod Kumar Vavilapalli updated HDFS-6122:
--
Target Version/s: 2.8.0  (was: 2.6.0)

Moving features/enhancements out of previously closed releases into the next 
minor release 2.8.0.

> Rebalance cached replicas between datanodes
> ---
>
> Key: HDFS-6122
> URL: https://issues.apache.org/jira/browse/HDFS-6122
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: caching
>Affects Versions: 2.3.0
>Reporter: Andrew Wang
>Assignee: Andrew Wang
>
> It'd be nice if the NameNode was able to rebalance cache usage among 
> datanodes. This would help avoid situations where the only three DNs with a 
> replica are full and there is still cache space on the rest of the cluster. 
> It'll also probably help for heterogeneous node sizes and when adding new 
> nodes to the cluster or doing a rolling restart.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8694) Expose the stats of IOErrors on each FsVolume through JMX

2015-06-29 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8694?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14606992#comment-14606992
 ] 

Hadoop QA commented on HDFS-8694:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:red}-1{color} | pre-patch |  22m 35s | Pre-patch trunk has 1 extant 
Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:green}+1{color} | tests included |   0m  0s | The patch appears to 
include 6 new or modified test files. |
| {color:red}-1{color} | javac |   9m  3s | The applied patch generated  2  
additional warning messages. |
| {color:green}+1{color} | javadoc |  11m 22s | There were no new javadoc 
warning messages. |
| {color:green}+1{color} | release audit |   0m 24s | The applied patch does 
not increase the total number of release audit warnings. |
| {color:red}-1{color} | checkstyle |   2m 27s | The applied patch generated  4 
new checkstyle issues (total was 445, now 441). |
| {color:green}+1{color} | whitespace |   0m  3s | The patch has no lines that 
end in whitespace. |
| {color:green}+1{color} | install |   1m 33s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 33s | The patch built with 
eclipse:eclipse. |
| {color:green}+1{color} | findbugs |   3m 12s | The patch does not introduce 
any new Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | native |   3m 16s | Pre-build of native portion |
| {color:red}-1{color} | hdfs tests | 177m 31s | Tests failed in hadoop-hdfs. |
| | | 232m  4s | |
\\
\\
|| Reason || Tests ||
| Failed unit tests | hadoop.hdfs.TestEncryptedTransfer |
|   | hadoop.hdfs.web.TestWebHdfsFileSystemContract |
|   | hadoop.hdfs.TestHDFSFileSystemContract |
|   | hadoop.hdfs.TestDataTransferKeepalive |
| Timed out tests | org.apache.hadoop.hdfs.server.namenode.TestNameEditsConfigs 
|
|   | org.apache.hadoop.hdfs.server.mover.TestMover |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12742648/HDFS-8694.000.patch |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / d3797f9 |
| Pre-patch Findbugs warnings | 
https://builds.apache.org/job/PreCommit-HDFS-Build/11538/artifact/patchprocess/trunkFindbugsWarningshadoop-hdfs.html
 |
| javac | 
https://builds.apache.org/job/PreCommit-HDFS-Build/11538/artifact/patchprocess/diffJavacWarnings.txt
 |
| checkstyle |  
https://builds.apache.org/job/PreCommit-HDFS-Build/11538/artifact/patchprocess/diffcheckstylehadoop-hdfs.txt
 |
| hadoop-hdfs test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/11538/artifact/patchprocess/testrun_hadoop-hdfs.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/11538/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf904.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/11538/console |


This message was automatically generated.

> Expose the stats of IOErrors on each FsVolume through JMX
> -
>
> Key: HDFS-8694
> URL: https://issues.apache.org/jira/browse/HDFS-8694
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode, HDFS
>Affects Versions: 2.7.0
>Reporter: Lei (Eddy) Xu
>Assignee: Lei (Eddy) Xu
> Attachments: HDFS-8694.000.patch
>
>
> Currently, once DataNode hits an {{IOError}} when writing / reading block 
> files, it starts a background {{DiskChecker.checkDirs()}} thread. But if this 
> thread successfully finishes, DN does not record this {{IOError}}. 
> We need one measurement to count all {{IOErrors}} for each volume.
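
One way to implement such a measurement is an atomic counter per volume that the read/write paths bump on an IOError, with a getter a JMX MBean could expose. The sketch below uses illustrative class and method names, not the actual FsVolume API.

```java
// Hedged sketch: per-volume IOError counters suitable for exposure via JMX.
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicLong;

public class VolumeIoErrorStats {
    private final Map<String, AtomicLong> errorsByVolume = new ConcurrentHashMap<>();

    // Called from read/write paths whenever an IOError is hit on this volume,
    // regardless of whether the subsequent DiskChecker.checkDirs() run passes.
    void recordIoError(String volume) {
        errorsByVolume.computeIfAbsent(volume, v -> new AtomicLong()).incrementAndGet();
    }

    // What a JMX attribute for this volume would return.
    long getIoErrorCount(String volume) {
        AtomicLong count = errorsByVolume.get(volume);
        return count == null ? 0 : count.get();
    }

    public static void main(String[] args) {
        VolumeIoErrorStats stats = new VolumeIoErrorStats();
        stats.recordIoError("/data1");
        stats.recordIoError("/data1");
        stats.recordIoError("/data2");
        System.out.println(stats.getIoErrorCount("/data1")); // 2
    }
}
```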



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8692) Fix test case failures o.a.h.h.TestHDFSFileSystemContract and TestWebHdfsFileSystemContract.testListStatus

2015-06-29 Thread Brahma Reddy Battula (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8692?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14606891#comment-14606891
 ] 

Brahma Reddy Battula commented on HDFS-8692:


Attached the patch. Kindly review.


> Fix test case failures o.a.h.h.TestHDFSFileSystemContract and 
> TestWebHdfsFileSystemContract.testListStatus
> --
>
> Key: HDFS-8692
> URL: https://issues.apache.org/jira/browse/HDFS-8692
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Brahma Reddy Battula
>Assignee: Brahma Reddy Battula
> Attachments: HDFS-8692-001
>
>
>  *Jenkins Report* 
> https://builds.apache.org/job/PreCommit-HDFS-Build/11529/testReport/
>  *Error Log* 
> {noformat}
> junit.framework.AssertionFailedError: null
>   at junit.framework.Assert.fail(Assert.java:55)
>   at junit.framework.Assert.assertTrue(Assert.java:22)
>   at junit.framework.Assert.assertTrue(Assert.java:31)
>   at junit.framework.TestCase.assertTrue(TestCase.java:201)
>   at 
> org.apache.hadoop.fs.FileSystemContractBaseTest.testListStatus(FileSystemContractBaseTest.java:232)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at junit.framework.TestCase.runTest(TestCase.java:176)
>   at junit.framework.TestCase.runBare(TestCase.java:141)
>   at junit.framework.TestResult$1.protect(TestResult.java:122)
>   at junit.framework.TestResult.runProtected(TestResult.java:142)
>   at junit.framework.TestResult.run(TestResult.java:125)
>   at junit.framework.TestCase.run(TestCase.java:129)
>   at junit.framework.TestSuite.runTest(TestSuite.java:255)
>   at junit.framework.TestSuite.run(TestSuite.java:250)
>   at 
> org.junit.internal.runners.JUnit38ClassRunner.run(JUnit38ClassRunner.java:84)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:264)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:153)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:124)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:200)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:153)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:103)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8692) Fix test case failures o.a.h.h.TestHDFSFileSystemContract and TestWebHdfsFileSystemContract.testListStatus

2015-06-29 Thread Brahma Reddy Battula (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8692?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brahma Reddy Battula updated HDFS-8692:
---
Status: Patch Available  (was: Open)

> Fix test case failures o.a.h.h.TestHDFSFileSystemContract and 
> TestWebHdfsFileSystemContract.testListStatus
> --
>
> Key: HDFS-8692
> URL: https://issues.apache.org/jira/browse/HDFS-8692
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Brahma Reddy Battula
>Assignee: Brahma Reddy Battula
> Attachments: HDFS-8692-001
>
>
>  *Jenkins Report* 
> https://builds.apache.org/job/PreCommit-HDFS-Build/11529/testReport/
>  *Error Log* 
> {noformat}
> junit.framework.AssertionFailedError: null
>   at junit.framework.Assert.fail(Assert.java:55)
>   at junit.framework.Assert.assertTrue(Assert.java:22)
>   at junit.framework.Assert.assertTrue(Assert.java:31)
>   at junit.framework.TestCase.assertTrue(TestCase.java:201)
>   at 
> org.apache.hadoop.fs.FileSystemContractBaseTest.testListStatus(FileSystemContractBaseTest.java:232)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at junit.framework.TestCase.runTest(TestCase.java:176)
>   at junit.framework.TestCase.runBare(TestCase.java:141)
>   at junit.framework.TestResult$1.protect(TestResult.java:122)
>   at junit.framework.TestResult.runProtected(TestResult.java:142)
>   at junit.framework.TestResult.run(TestResult.java:125)
>   at junit.framework.TestCase.run(TestCase.java:129)
>   at junit.framework.TestSuite.runTest(TestSuite.java:255)
>   at junit.framework.TestSuite.run(TestSuite.java:250)
>   at 
> org.junit.internal.runners.JUnit38ClassRunner.run(JUnit38ClassRunner.java:84)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:264)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:153)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:124)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:200)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:153)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:103)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8692) Fix test case failures o.a.h.h.TestHDFSFileSystemContract and TestWebHdfsFileSystemContract.testListStatus

2015-06-29 Thread Brahma Reddy Battula (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8692?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brahma Reddy Battula updated HDFS-8692:
---
Attachment: HDFS-8692-001

> Fix test case failures o.a.h.h.TestHDFSFileSystemContract and 
> TestWebHdfsFileSystemContract.testListStatus
> --
>
> Key: HDFS-8692
> URL: https://issues.apache.org/jira/browse/HDFS-8692
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Brahma Reddy Battula
>Assignee: Brahma Reddy Battula
> Attachments: HDFS-8692-001
>
>
>  *Jenkin Report* 
> https://builds.apache.org/job/PreCommit-HDFS-Build/11529/testReport/



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8673) HDFS reports file already exists if there is a file/dir name end with ._COPYING_

2015-06-29 Thread Chen He (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8673?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chen He updated HDFS-8673:
--
Attachment: HDFS-8673.002.patch

Patch updated. HDFS-8698 is created. Thank you for reviewing my patches, 
[~ste...@apache.org]. Please feel free to add your comments.

> HDFS reports file already exists if there is a file/dir name end with 
> ._COPYING_
> 
>
> Key: HDFS-8673
> URL: https://issues.apache.org/jira/browse/HDFS-8673
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 2.7.0
>Reporter: Chen He
> Attachments: HDFS-8673.000-WIP.patch, HDFS-8673.000.patch, 
> HDFS-8673.001.patch, HDFS-8673.002.patch
>
>
> Because the CLI uses CommandWithDestination.java, which appends "._COPYING_" to 
> the file name while it performs the copy, a problem arises if there is 
> already a file/dir named *._COPYING_ on HDFS.
> For file:
> -bash-4.1$ hadoop fs -put 5M /user/occ/
> -bash-4.1$ hadoop fs -mv /user/occ/5M /user/occ/5M._COPYING_
> -bash-4.1$ hadoop fs -ls /user/occ/
> Found 1 items
> -rw-r--r--   1 occ supergroup5242880 2015-06-26 05:16 
> /user/occ/5M._COPYING_
> -bash-4.1$ hadoop fs -put 128K /user/occ/5M
> -bash-4.1$ hadoop fs -ls /user/occ/
> Found 1 items
> -rw-r--r--   1 occ supergroup 131072 2015-06-26 05:19 /user/occ/5M
> For dir:
> -bash-4.1$ hadoop fs -mkdir /user/occ/5M._COPYING_
> -bash-4.1$ hadoop fs -ls /user/occ/
> Found 1 items
> drwxr-xr-x   - occ supergroup  0 2015-06-26 05:24 
> /user/occ/5M._COPYING_
> -bash-4.1$ hadoop fs -put 128K /user/occ/5M
> put: /user/occ/5M._COPYING_ already exists as a directory
> -bash-4.1$ hadoop fs -ls /user/occ/
> (/user/occ/5M._COPYING_ is gone)
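The collision above comes down to how the staging name is derived. A minimal sketch of that derivation (assumed behavior of the shell copy, not the actual CommandWithDestination code; paths are illustrative):

```shell
# Hedged sketch: the shell copy stages an upload under a name derived by
# appending "._COPYING_" to the destination, then renames it on success.
# Any pre-existing entry with that exact name collides with the staging file.
dst="/user/occ/5M"
staging="${dst}._COPYING_"
echo "$staging"   # the name the copy will claim during the transfer
```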



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HDFS-8698) Add "-direct" flag option for fs copy so that user can choose not to create "._COPYING_" file

2015-06-29 Thread Chen He (JIRA)
Chen He created HDFS-8698:
-

 Summary: Add "-direct" flag option for fs copy so that user can 
choose not to create "._COPYING_" file
 Key: HDFS-8698
 URL: https://issues.apache.org/jira/browse/HDFS-8698
 Project: Hadoop HDFS
  Issue Type: New Feature
  Components: fs
Affects Versions: 2.7.0
Reporter: Chen He


The CLI uses CommandWithDestination.java, which appends "._COPYING_" to the 
file name while it performs the copy. For blobstores like S3 and Swift, 
creating the "._COPYING_" file and renaming it is expensive. A "-direct" flag 
would allow the user to avoid the "._COPYING_" file.
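A hypothetical flow for the proposed flag (the "-direct" option does not exist yet; this only illustrates the proposed behavior against the staged default):

```shell
# Hypothetical helper: pick the name the copy writes to, depending on whether
# the (proposed, not yet implemented) -direct flag is set.
copy_target() {
  local dst="$1" direct="$2"
  if [ "$direct" = "true" ]; then
    echo "$dst"               # write straight to the final name, no rename step
  else
    echo "${dst}._COPYING_"   # stage first, rename after the copy completes
  fi
}

copy_target /user/occ/5M true    # direct: no staging file, no rename cost
copy_target /user/occ/5M false   # default: staged name, rename on completion
```

On a blobstore, skipping the rename avoids a full server-side copy of the object.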



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8687) Remove the duplicate usage message from Dfsck.java

2015-06-29 Thread Brahma Reddy Battula (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8687?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14606811#comment-14606811
 ] 

Brahma Reddy Battula commented on HDFS-8687:


[~arpitagarwal] Thanks a lot for review and commit.

> Remove the duplicate usage message from Dfsck.java
> --
>
> Key: HDFS-8687
> URL: https://issues.apache.org/jira/browse/HDFS-8687
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Brahma Reddy Battula
>Assignee: Brahma Reddy Battula
> Fix For: 2.8.0
>
> Attachments: HDFS-8687-002.patch, HDFS-8687.patch
>
>
> Toolrunner also give same usage message,, I think , we can remove
> {{printUsage(System.err);}}
> {code}
> if ((args.length == 0) || ("-files".equals(args[0]))) {
>   printUsage(System.err);
>   ToolRunner.printGenericCommandUsage(System.err);
> } 
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8598) Add and optimize for get LocatedFileStatus in DFSClient

2015-06-29 Thread Yong Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8598?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yong Zhang updated HDFS-8598:
-
Resolution: Invalid
Status: Resolved  (was: Patch Available)

> Add and optimize for get LocatedFileStatus  in DFSClient
> 
>
> Key: HDFS-8598
> URL: https://issues.apache.org/jira/browse/HDFS-8598
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Yong Zhang
>Assignee: Yong Zhang
> Attachments: HDFS-8598.001.patch, HDFS-8598.002.patch
>
>
> If we want to get the block locations of all files in one directory, we have 
> to call getFileBlockLocations for each file, which takes a long time because 
> of the many requests. 
> LocatedFileStatus has block locations, but DFSClient still calls 
> getFileBlockLocations for each file. This JIRA tries to optimize that down 
> to only one RPC. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8598) Add and optimize for get LocatedFileStatus in DFSClient

2015-06-29 Thread Yong Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8598?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14606798#comment-14606798
 ] 

Yong Zhang commented on HDFS-8598:
--

Hi [~andrew.wang], thanks for your comment; I will close it.

> Add and optimize for get LocatedFileStatus  in DFSClient
> 
>
> Key: HDFS-8598
> URL: https://issues.apache.org/jira/browse/HDFS-8598
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Yong Zhang
>Assignee: Yong Zhang
> Attachments: HDFS-8598.001.patch, HDFS-8598.002.patch
>
>
> If we want to get the block locations of all files in one directory, we have 
> to call getFileBlockLocations for each file, which takes a long time because 
> of the many requests. 
> LocatedFileStatus has block locations, but DFSClient still calls 
> getFileBlockLocations for each file. This JIRA tries to optimize that down 
> to only one RPC. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8697) Refactor DecommissionManager: more generic method names and misc cleanup

2015-06-29 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8697?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14606777#comment-14606777
 ] 

Andrew Wang commented on HDFS-8697:
---

Hi Zhe, thanks for working on this, a couple review comments:

* "stored block" already has a meaning elsewhere in the code base; we should 
use a different term. Not sure what though. Sufficiently redundant? Sufficiently 
durable?
* Many log messages and comments still talk about replication.
* Explaining somewhere what to expect when decommissioning a DN with EC blocks 
would be good too. Do we treat them like 1-repl blocks, or allow a block to dip 
below full-strength redundancy? It seems like the former (which I agree with), 
but we should write this down somewhere.

> Refactor DecommissionManager: more generic method names and misc cleanup
> 
>
> Key: HDFS-8697
> URL: https://issues.apache.org/jira/browse/HDFS-8697
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: namenode
>Affects Versions: 2.7.0
>Reporter: Zhe Zhang
>Assignee: Zhe Zhang
> Attachments: HDFS-8697.00.patch
>
>
> This JIRA merges the changes in {{DecommissionManager}} from the HDFS-7285 
> branch, including changing a few method names to be more generic 
> ({{replicated}} -> {{stored}}), and some cleanups.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8697) Refactor DecommissionManager: more generic method names and misc cleanup

2015-06-29 Thread Zhe Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8697?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhe Zhang updated HDFS-8697:

Attachment: HDFS-8697.00.patch

> Refactor DecommissionManager: more generic method names and misc cleanup
> 
>
> Key: HDFS-8697
> URL: https://issues.apache.org/jira/browse/HDFS-8697
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: namenode
>Affects Versions: 2.7.0
>Reporter: Zhe Zhang
>Assignee: Zhe Zhang
> Attachments: HDFS-8697.00.patch
>
>
> This JIRA merges the changes in {{DecommissionManager}} from the HDFS-7285 
> branch, including changing a few method names to be more generic 
> ({{replicated}} -> {{stored}}), and some cleanups.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HDFS-8697) Refactor DecommissionManager: more generic method names and misc cleanup

2015-06-29 Thread Zhe Zhang (JIRA)
Zhe Zhang created HDFS-8697:
---

 Summary: Refactor DecommissionManager: more generic method names 
and misc cleanup
 Key: HDFS-8697
 URL: https://issues.apache.org/jira/browse/HDFS-8697
 Project: Hadoop HDFS
  Issue Type: New Feature
  Components: namenode
Affects Versions: 2.7.0
Reporter: Zhe Zhang
Assignee: Zhe Zhang


This JIRA merges the changes in {{DecommissionManager}} from the HDFS-7285 
branch, including changing a few method names to be more generic 
({{replicated}} -> {{stored}}), and some cleanups.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8697) Refactor DecommissionManager: more generic method names and misc cleanup

2015-06-29 Thread Zhe Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8697?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhe Zhang updated HDFS-8697:

Status: Patch Available  (was: Open)

> Refactor DecommissionManager: more generic method names and misc cleanup
> 
>
> Key: HDFS-8697
> URL: https://issues.apache.org/jira/browse/HDFS-8697
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: namenode
>Affects Versions: 2.7.0
>Reporter: Zhe Zhang
>Assignee: Zhe Zhang
>
> This JIRA merges the changes in {{DecommissionManager}} from the HDFS-7285 
> branch, including changing a few method names to be more generic 
> ({{replicated}} -> {{stored}}), and some cleanups.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7285) Erasure Coding Support inside HDFS

2015-06-29 Thread Kai Zheng (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7285?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14606741#comment-14606741
 ] 

Kai Zheng commented on HDFS-7285:
-

Thanks [~zhz] for the great work! I will have some time to look at some parts 
I'm familiar with.

> Erasure Coding Support inside HDFS
> --
>
> Key: HDFS-7285
> URL: https://issues.apache.org/jira/browse/HDFS-7285
> Project: Hadoop HDFS
>  Issue Type: New Feature
>Reporter: Weihua Jiang
>Assignee: Zhe Zhang
> Attachments: ECAnalyzer.py, ECParser.py, HDFS-7285-initial-PoC.patch, 
> HDFS-EC-Merge-PoC-20150624.patch, HDFS-bistriped.patch, 
> HDFSErasureCodingDesign-20141028.pdf, HDFSErasureCodingDesign-20141217.pdf, 
> HDFSErasureCodingDesign-20150204.pdf, HDFSErasureCodingDesign-20150206.pdf, 
> HDFSErasureCodingPhaseITestPlan.pdf, fsimage-analysis-20150105.pdf
>
>
> Erasure Coding (EC) can greatly reduce storage overhead without sacrificing 
> data reliability, compared to the existing HDFS 3-replica approach. For 
> example, with a 10+4 Reed-Solomon coding we can tolerate the loss of 4 blocks 
> at a storage overhead of only 40%. This makes EC a quite attractive 
> alternative for big data storage, particularly for cold data. 
> Facebook had a related open source project called HDFS-RAID. It used to be 
> one of the contributed packages in HDFS but was removed in Hadoop 2.0 for 
> maintenance reasons. Its drawbacks are: 1) it sits on top of HDFS and depends 
> on MapReduce to run encoding and decoding tasks; 2) it can only be used for 
> cold files that are not intended to be appended anymore; 3) its pure Java EC 
> coding implementation is extremely slow in practical use. For these reasons, 
> it might not be a good idea to just bring HDFS-RAID back.
> We (Intel and Cloudera) are working on a design that builds EC into HDFS, 
> gets rid of external dependencies, and keeps the feature self-contained and 
> independently maintained. The design lays the EC feature on top of the 
> storage-type support and stays compatible with existing HDFS features such as 
> caching, snapshots, encryption, and high availability. It will also support 
> different EC coding schemes, implementations, and policies for different 
> deployment scenarios. By utilizing advanced libraries (e.g. the Intel ISA-L 
> library), an implementation can greatly improve EC encoding/decoding 
> performance and make the EC solution even more attractive. We will post the 
> design document soon. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HDFS-8696) Small reads are blocked by large long running reads

2015-06-29 Thread Xiaobing Zhou (JIRA)
Xiaobing Zhou created HDFS-8696:
---

 Summary: Small reads are blocked by large long running reads
 Key: HDFS-8696
 URL: https://issues.apache.org/jira/browse/HDFS-8696
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: webhdfs
Affects Versions: 2.6.0
Reporter: Xiaobing Zhou
Assignee: Xiaobing Zhou
Priority: Blocker


There is an issue that appears related to the webhdfs server. When making two 
concurrent requests, the DN will sometimes pause for extended periods (I've 
seen 1-300 seconds), killing performance and dropping connections. 

To reproduce: 
1. Set up an HDFS cluster.
2. Upload a large file (I was using 10GB). Perform 1-byte reads, writing
the times out to /tmp/times.txt:
{noformat}
i=1
while (true); do 
echo $i
let i++
/usr/bin/time -f %e -o /tmp/times.txt -a curl -s -L -o /dev/null 
"http://:50070/webhdfs/v1/tmp/bigfile?op=OPEN&user.name=root&length=1";
done
{noformat}

3. Watch for 1-byte requests that take more than one second:
{noformat}
tail -F /tmp/times.txt | grep -E "^[^0]"
{noformat}

4. After it has had a chance to warm up, start doing large transfers from
another shell:
{noformat}
i=1
while (true); do 
echo $i
let i++
(/usr/bin/time -f %e curl -s -L -o /dev/null 
"http://:50070/webhdfs/v1/tmp/bigfile?op=OPEN&user.name=root");
done
{noformat}

After a minute or two it's easy to see that small reads sometimes
pause for 1-300 seconds. In some extreme cases, it appears that the
transfers time out and the DN drops the connection.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8664) Allow wildcards in dfs.datanode.data.dir

2015-06-29 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8664?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14606726#comment-14606726
 ] 

Hadoop QA commented on HDFS-8664:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:red}-1{color} | pre-patch |  22m 24s | Pre-patch trunk has 1 extant 
Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:green}+1{color} | tests included |   0m  0s | The patch appears to 
include 7 new or modified test files. |
| {color:green}+1{color} | javac |   7m 29s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |   9m 34s | There were no new javadoc 
warning messages. |
| {color:green}+1{color} | release audit |   0m 23s | The applied patch does 
not increase the total number of release audit warnings. |
| {color:green}+1{color} | site |   3m  0s | Site still builds. |
| {color:red}-1{color} | checkstyle |   3m 20s | The applied patch generated  6 
new checkstyle issues (total was 140, now 145). |
| {color:green}+1{color} | whitespace |   0m  1s | The patch has no lines that 
end in whitespace. |
| {color:green}+1{color} | install |   1m 31s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 33s | The patch built with 
eclipse:eclipse. |
| {color:green}+1{color} | findbugs |   5m  6s | The patch does not introduce 
any new Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | common tests |  22m 14s | Tests passed in 
hadoop-common. |
| {color:red}-1{color} | hdfs tests | 165m  3s | Tests failed in hadoop-hdfs. |
| | | 240m 43s | |
\\
\\
|| Reason || Tests ||
| Failed unit tests | hadoop.hdfs.TestHDFSFileSystemContract |
|   | hadoop.hdfs.web.TestWebHdfsFileSystemContract |
|   | hadoop.hdfs.server.datanode.TestBPOfferService |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12742582/HDFS-8664.002.patch |
| Optional Tests | site javadoc javac unit findbugs checkstyle |
| git revision | trunk / fad291e |
| Pre-patch Findbugs warnings | 
https://builds.apache.org/job/PreCommit-HDFS-Build/11536/artifact/patchprocess/trunkFindbugsWarningshadoop-hdfs.html
 |
| checkstyle |  
https://builds.apache.org/job/PreCommit-HDFS-Build/11536/artifact/patchprocess/diffcheckstylehadoop-hdfs.txt
 |
| hadoop-common test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/11536/artifact/patchprocess/testrun_hadoop-common.txt
 |
| hadoop-hdfs test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/11536/artifact/patchprocess/testrun_hadoop-hdfs.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/11536/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf906.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/11536/console |


This message was automatically generated.

> Allow wildcards in dfs.datanode.data.dir
> 
>
> Key: HDFS-8664
> URL: https://issues.apache.org/jira/browse/HDFS-8664
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: datanode, HDFS
>Affects Versions: 3.0.0
>Reporter: Patrick White
>Assignee: Patrick White
> Attachments: HDFS-8664.001.patch, HDFS-8664.002.patch
>
>
> We have many disks per machine (12+) that don't always have the same 
> numbering when they come back from provisioning, but they're always in the 
> same tree following the same pattern.
> It would greatly reduce our config complexity to be able to specify a 
> wildcard for all the data directories.
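The intended effect is plain shell-style globbing: one pattern standing in for many per-disk directories. A sketch of the desired expansion (the directory layout below is fabricated for illustration; the patch's actual expansion mechanism is not shown here):

```shell
# Build a fake per-disk layout, then expand one wildcard pattern into the
# concrete data directories it matches.
root="$(mktemp -d)"
mkdir -p "$root"/data1/dfs "$root"/data2/dfs "$root"/data3/dfs

# One pattern in the config could replace three explicit entries.
expanded="$(echo "$root"/data*/dfs)"
echo "$expanded"   # three concrete directories from one pattern
```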



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8694) Expose the stats of IOErrors on each FsVolume through JMX

2015-06-29 Thread Lei (Eddy) Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8694?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei (Eddy) Xu updated HDFS-8694:

Status: Patch Available  (was: Open)

> Expose the stats of IOErrors on each FsVolume through JMX
> -
>
> Key: HDFS-8694
> URL: https://issues.apache.org/jira/browse/HDFS-8694
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode, HDFS
>Affects Versions: 2.7.0
>Reporter: Lei (Eddy) Xu
>Assignee: Lei (Eddy) Xu
> Attachments: HDFS-8694.000.patch
>
>
> Currently, once the DataNode hits an {{IOError}} when writing / reading block 
> files, it starts a background {{DiskChecker.checkDirs()}} thread. But even if 
> this thread finishes successfully, the DN does not record the {{IOError}}. 
> We need one metric that counts all {{IOErrors}} for each volume.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8694) Expose the stats of IOErrors on each FsVolume through JMX

2015-06-29 Thread Lei (Eddy) Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8694?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei (Eddy) Xu updated HDFS-8694:

Attachment: HDFS-8694.000.patch

The # of IOErrors is kept in {{FsVolumes}} and exposed as 
{{VolumeInfo}}:{{errors}} in {{JMX}}:

{code}
{
"name" : "Hadoop:service=DataNode,name=DataNodeInfo",
"modelerType" : "org.apache.hadoop.hdfs.server.datanode.DataNode",
"XceiverCount" : 1,
"DatanodeNetworkCounts" : [ ],
"Version" : "3.0.0-SNAPSHOT",
"RpcPort" : "50002",
"HttpPort" : null,
"NamenodeAddresses" : 
"{\"localhost\":\"BP-1143576736-127.0.0.1-1435617411301\"}",
"VolumeInfo" : 
"{\"/foo/hadoop/test/data/dn0/vol0/current\":{\"freeSpace\":100856463360,\"errors\":0,\"usedSpace\":8192,\"reservedSpaceForRBW\":0,\"reservedSpace\":0},\"/foo/hadoop/test/data/dn0/vol1/current\":{\"freeSpace\":100856463360,\"errors\":0,\"usedSpace\":8192,\"reservedSpaceForRBW\":0,\"reservedSpace\":0}}",
"ClusterId" : "CID-022d733d-9061-4f14-a990-8a93316ca57d"
  },
{code}

> Expose the stats of IOErrors on each FsVolume through JMX
> -
>
> Key: HDFS-8694
> URL: https://issues.apache.org/jira/browse/HDFS-8694
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode, HDFS
>Affects Versions: 2.7.0
>Reporter: Lei (Eddy) Xu
>Assignee: Lei (Eddy) Xu
> Attachments: HDFS-8694.000.patch
>
>
> Currently, once the DataNode hits an {{IOError}} when writing / reading block 
> files, it starts a background {{DiskChecker.checkDirs()}} thread. But even if 
> this thread finishes successfully, the DN does not record the {{IOError}}. 
> We need one metric that counts all {{IOErrors}} for each volume.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

