[jira] [Assigned] (HDFS-12494) libhdfs SIGSEGV in setTLSExceptionStrings

2017-10-04 Thread John Zhuge (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12494?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

John Zhuge reassigned HDFS-12494:
-

Assignee: John Zhuge

> libhdfs SIGSEGV in setTLSExceptionStrings
> -
>
> Key: HDFS-12494
> URL: https://issues.apache.org/jira/browse/HDFS-12494
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: libhdfs
>Affects Versions: 3.0.0-alpha4
>Reporter: John Zhuge
>Assignee: John Zhuge
>
> A libhdfs application crashes when CLASSPATH is set but not set up properly; in 
> this case it contains wildcard entries.
> {noformat}
> $ export CLASSPATH=$(hadoop classpath)
> $ pwd
> /Users/jzhuge/hadoop2/hadoop-hdfs-project/hadoop-hdfs/target/native
> $ ./test_libhdfs_ops
> #
> # A fatal error has been detected by the Java Runtime Environment:
> #
> #  SIGSEGV (0xb) at pc=0x0001052968f7, pid=14147, tid=775
> #
> # JRE version: Java(TM) SE Runtime Environment (7.0_79-b15) (build 
> 1.7.0_79-b15)
> # Java VM: Java HotSpot(TM) 64-Bit Server VM (24.79-b02 mixed mode bsd-amd64 
> compressed oops)
> # Problematic frame:
> # C  [libhdfs.0.0.0.dylib+0x38f7]  setTLSExceptionStrings+0x47
> #
> # Core dump written. Default location: /cores/core or core.14147
> #
> # An error report file with more information is saved as:
> # 
> /Users/jzhuge/hadoop2/hadoop-hdfs-project/hadoop-hdfs/target/native/hs_err_pid14147.log
> #
> #
> # If you would like to submit a bug report, please visit:
> #   http://bugreport.java.com/bugreport/crash.jsp
> # The crash happened outside the Java Virtual Machine in native code.
> # See problematic frame for where to report the bug.
> #
> Abort trap: 6 (core dumped)
> [jzhuge@jzhuge-MBP native]((be32925fff5...) *+)$ lldb -c /cores/core.14147
> (lldb) target create --core "/cores/core.14147"
> warning: (x86_64) /cores/core.14147 load command 549 LC_SEGMENT_64 has a 
> fileoff + filesize (0x14627f000) that extends beyond the end of the file 
> (0x14627e000), the segment will be truncated to match
> warning: (x86_64) /cores/core.14147 load command 550 LC_SEGMENT_64 has a 
> fileoff (0x14627f000) that extends beyond the end of the file (0x14627e000), 
> ignoring this section
> Core file '/cores/core.14147' (x86_64) was loaded.
> (lldb) bt
> * thread #1, stop reason = signal SIGSTOP
>   * frame #0: 0x7fffcf89ad42 libsystem_kernel.dylib`__pthread_kill + 10
> frame #1: 0x7fffcf988457 libsystem_pthread.dylib`pthread_kill + 90
> frame #2: 0x7fffcf800420 libsystem_c.dylib`abort + 129
> frame #3: 0x0001056cd5fb libjvm.dylib`os::abort(bool) + 25
> frame #4: 0x0001057d98fc libjvm.dylib`VMError::report_and_die() + 2308
> frame #5: 0x0001056cefb5 libjvm.dylib`JVM_handle_bsd_signal + 1083
> frame #6: 0x7fffcf97bb3a libsystem_platform.dylib`_sigtramp + 26
> frame #7: 0x0001052968f8 
> libhdfs.0.0.0.dylib`setTLSExceptionStrings(rootCause=0x, 
> stackTrace=0x) at jni_helper.c:589 [opt]
> frame #8: 0x0001052954f0 
> libhdfs.0.0.0.dylib`printExceptionAndFreeV(env=0x7ffaff0019e8, 
> exc=0x7ffafec04140, noPrintFlags=, fmt="loadFileSystems", 
> ap=) at exception.c:183 [opt]
> frame #9: 0x0001052956bb 
> libhdfs.0.0.0.dylib`printExceptionAndFree(env=, 
> exc=, noPrintFlags=, fmt=) at 
> exception.c:213 [opt]
> frame #10: 0x0001052967f4 libhdfs.0.0.0.dylib`getJNIEnv [inlined] 
> getGlobalJNIEnv at jni_helper.c:463 [opt]
> frame #11: 0x00010529664f libhdfs.0.0.0.dylib`getJNIEnv at 
> jni_helper.c:528 [opt]
> frame #12: 0x0001052975eb 
> libhdfs.0.0.0.dylib`hdfsBuilderConnect(bld=0x7ffafed0) at hdfs.c:693 
> [opt]
> frame #13: 0x00010528be30 test_libhdfs_ops`main(argc=, 
> argv=) at test_libhdfs_ops.c:91 [opt]
> frame #14: 0x7fffcf76c235 libdyld.dylib`start + 1
> (lldb) f 10
> libhdfs.0.0.0.dylib was compiled with optimization - stepping may behave 
> oddly; variables may not be available.
> frame #10: 0x0001052967f4 libhdfs.0.0.0.dylib`getJNIEnv [inlined] 
> getGlobalJNIEnv at jni_helper.c:463 [opt]
>460              "org/apache/hadoop/fs/FileSystem",
>461              "loadFileSystems", "()V");
>462          if (jthr) {
> -> 463           printExceptionAndFree(env, jthr, PRINT_EXC_ALL,
>                      "loadFileSystems");
>464          }
>465      }
>466      else {
> (lldb) f 7
> frame #7: 0x0001052968f8 
> libhdfs.0.0.0.dylib`setTLSExceptionStrings(rootCause=0x, 
> stackTrace=0x) at jni_helper.c:589 [opt]
>586          mutexUnlock();
>587      }
>588
> -> 589    free(state->lastExceptionRootCause);
>590      free(state->lastExceptionStackTrace);
>591      state->lastExceptionRootCause 
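> 
> The crash aside, note that a JVM created through JNI does not expand wildcard 
> CLASSPATH entries the way the {{hadoop}} launcher does, so a fully expanded 
> classpath sidesteps the misconfiguration. One possible workaround sketch 
> (assuming the {{--glob}} option of the {{hadoop classpath}} command):
> {noformat}
> $ export CLASSPATH=$(hadoop classpath --glob)
> $ ./test_libhdfs_ops
> {noformat}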

[jira] [Commented] (HDFS-12544) SnapshotDiff - support diff generation on any snapshot root descendant directory

2017-10-04 Thread Yongjun Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12544?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16192441#comment-16192441
 ] 

Yongjun Zhang commented on HDFS-12544:
--

Hi [~manojg],

Thanks for working on this issue. I did a review of rev2; here are my comments:

1. It seems to make sense to include a new field snapshotDiffScopeDir in the 
SnapshotDiffInfo class, and initialize it in the constructor, so that the info 
in this class is self-contained.

2. Suggest moving the check
{code}
 if (!this.snapshotDiffAllowSnapRootDescendant) {
{code}
from {{SnapshotManager#getSnapshottableAncestorDir}} to its caller, and calling 
{{SnapshotManager#getSnapshottableAncestorDir}} or 
{{SnapshotManager#getSnapshottableRoot}} based on the value of this config (see 
the sketch after these comments).

This way {{SnapshotManager#getSnapshottableAncestorDir}} can still return the 
snapshottable ancestor even if the config is set to false. We might need this 
behavior elsewhere.

3. Suggest removing the method 
{{SnapshotManager#setSnapshotDiffAllowSnapRootDescendant}}, and using the config 
property to pass the value on to the cluster. This means we need to split the 
tests in TestSnapshotDiffReport into two parts: one with the config set to 
true, the other with it set to false.

Coding-wise, we can put the common code in an abstract base class and introduce 
two new test classes, one for testing with descendant diffs allowed, and the 
other with them disallowed.

4. Nit. In SnapshotManager.java, change "directories" to "directory" in the 
following text.

 * operation can be run for any descendant directories under a snapshot root
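
A minimal sketch of the caller-side dispatch suggested in comment 2 (the method 
names come from the patch under review; the surrounding code is illustrative 
only):
{code}
// The caller picks the lookup based on the config, so
// getSnapshottableAncestorDir keeps its unconditional behavior.
final INodeDirectory snapshotRootDir = snapshotDiffAllowSnapRootDescendant
    ? snapshotManager.getSnapshottableAncestorDir(iip)
    : snapshotManager.getSnapshottableRoot(iip);
{code}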



> SnapshotDiff - support diff generation on any snapshot root descendant 
> directory
> 
>
> Key: HDFS-12544
> URL: https://issues.apache.org/jira/browse/HDFS-12544
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs
>Affects Versions: 3.0.0-beta1
>Reporter: Manoj Govindassamy
>Assignee: Manoj Govindassamy
> Attachments: HDFS-12544.01.patch, HDFS-12544.02.patch
>
>
> {noformat}
> # hdfs snapshotDiff   
> 
> {noformat}
> Using the snapshot diff command, we can generate a diff report between any two 
> given snapshots under a snapshot root directory. Today the command only 
> accepts a path that is a snapshot root. There are many deployments where the 
> snapshot root is configured at a higher-level directory but the diff report 
> is needed only for a specific directory under the snapshot root. In these 
> cases, the diff report can be filtered for changes pertaining to the 
> directory we are interested in. But when the snapshot root directory is very 
> large, generating the snapshot diff report can take minutes even if we are 
> interested only in the changes in a small directory. So it would be much more 
> performant if the diff report calculation could be limited to the 
> sub-directory of interest instead of the whole snapshot root.
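> For example (hypothetical paths), with {{/snaproot}} configured as the 
> snapshot root, the goal is to support:
> {noformat}
> # hdfs snapshotDiff /snaproot/dir1/dir2 s1 s2
> {noformat}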



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11902) [READ] Merge BlockFormatProvider and FileRegionProvider.

2017-10-04 Thread Virajith Jalaparti (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11902?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Virajith Jalaparti updated HDFS-11902:
--
Status: Open  (was: Patch Available)

> [READ] Merge BlockFormatProvider and FileRegionProvider.
> 
>
> Key: HDFS-11902
> URL: https://issues.apache.org/jira/browse/HDFS-11902
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Virajith Jalaparti
>Assignee: Virajith Jalaparti
> Attachments: HDFS-11902-HDFS-9806.001.patch, 
> HDFS-11902-HDFS-9806.002.patch, HDFS-11902-HDFS-9806.003.patch, 
> HDFS-11902-HDFS-9806.004.patch, HDFS-11902-HDFS-9806.005.patch, 
> HDFS-11902-HDFS-9806.006.patch, HDFS-11902-HDFS-9806.007.patch, 
> HDFS-11902-HDFS-9806.008.patch
>
>
> Currently {{BlockFormatProvider}} and {{TextFileRegionProvider}} perform 
> almost the same function on the Namenode and Datanode respectively. This JIRA 
> is to merge them into one.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11902) [READ] Merge BlockFormatProvider and FileRegionProvider.

2017-10-04 Thread Virajith Jalaparti (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11902?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Virajith Jalaparti updated HDFS-11902:
--
Status: Patch Available  (was: Open)

> [READ] Merge BlockFormatProvider and FileRegionProvider.
> 
>
> Key: HDFS-11902
> URL: https://issues.apache.org/jira/browse/HDFS-11902
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Virajith Jalaparti
>Assignee: Virajith Jalaparti
> Attachments: HDFS-11902-HDFS-9806.001.patch, 
> HDFS-11902-HDFS-9806.002.patch, HDFS-11902-HDFS-9806.003.patch, 
> HDFS-11902-HDFS-9806.004.patch, HDFS-11902-HDFS-9806.005.patch, 
> HDFS-11902-HDFS-9806.006.patch, HDFS-11902-HDFS-9806.007.patch, 
> HDFS-11902-HDFS-9806.008.patch
>
>
> Currently {{BlockFormatProvider}} and {{TextFileRegionProvider}} perform 
> almost the same function on the Namenode and Datanode respectively. This JIRA 
> is to merge them into one.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11902) [READ] Merge BlockFormatProvider and FileRegionProvider.

2017-10-04 Thread Virajith Jalaparti (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11902?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Virajith Jalaparti updated HDFS-11902:
--
Attachment: HDFS-11902-HDFS-9806.008.patch

Fixing checkstyle issues from earlier jenkins run.

> [READ] Merge BlockFormatProvider and FileRegionProvider.
> 
>
> Key: HDFS-11902
> URL: https://issues.apache.org/jira/browse/HDFS-11902
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Virajith Jalaparti
>Assignee: Virajith Jalaparti
> Attachments: HDFS-11902-HDFS-9806.001.patch, 
> HDFS-11902-HDFS-9806.002.patch, HDFS-11902-HDFS-9806.003.patch, 
> HDFS-11902-HDFS-9806.004.patch, HDFS-11902-HDFS-9806.005.patch, 
> HDFS-11902-HDFS-9806.006.patch, HDFS-11902-HDFS-9806.007.patch, 
> HDFS-11902-HDFS-9806.008.patch
>
>
> Currently {{BlockFormatProvider}} and {{TextFileRegionProvider}} perform 
> almost the same function on the Namenode and Datanode respectively. This JIRA 
> is to merge them into one.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12567) BlockPlacementPolicyRackFaultTolerant fails with racks with very few nodes

2017-10-04 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12567?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16192408#comment-16192408
 ] 

Hadoop QA commented on HDFS-12567:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
19s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 
48s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
1s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
42s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
6s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 42s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
44s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
42s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
46s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 33s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch 
generated 6 new + 9 unchanged - 0 fixed = 15 total (was 9) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 10s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
42s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}114m 47s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
23s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}161m 36s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.TestEncryptionZones |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:71bbb86 |
| JIRA Issue | HDFS-12567 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12890458/HDFS-12567.002.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 0ec94a228efd 3.13.0-123-generic #172-Ubuntu SMP Mon Jun 26 
18:04:35 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / cae1c73 |
| Default Java | 1.8.0_144 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDFS-Build/21527/artifact/patchprocess/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/21527/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/21527/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 

[jira] [Commented] (HDFS-11902) [READ] Merge BlockFormatProvider and FileRegionProvider.

2017-10-04 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11902?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16192404#comment-16192404
 ] 

Hadoop QA commented on HDFS-11902:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
16s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 5 new or modified test 
files. {color} |
|| || || || {color:brown} HDFS-9806 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  2m 
13s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 
33s{color} | {color:green} HDFS-9806 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 16m 
19s{color} | {color:green} HDFS-9806 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
24s{color} | {color:green} HDFS-9806 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
46s{color} | {color:green} HDFS-9806 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 28s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
34s{color} | {color:red} hadoop-tools/hadoop-fs2img in HDFS-9806 has 1 extant 
Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
16s{color} | {color:green} HDFS-9806 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
18s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 13m  
0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 13m  
0s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
2m 16s{color} | {color:orange} root: The patch generated 5 new + 454 unchanged 
- 5 fixed = 459 total (was 459) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
2s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
9m 14s{color} | {color:green} patch has no errors when building and testing our 
client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m  
5s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
11s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 93m 
14s{color} | {color:green} hadoop-hdfs in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  1m 39s{color} 
| {color:red} hadoop-fs2img in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
32s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}180m  7s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hdfs.server.namenode.TestNameNodeProvidedImplementation |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:71bbb86 |
| JIRA Issue | HDFS-11902 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12890456/HDFS-11902-HDFS-9806.007.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  xml  |
| uname | Linux 15c6e9b08ed3 3.13.0-123-generic #172-Ubuntu SMP Mon Jun 26 
18:04:35 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| 

[jira] [Commented] (HDFS-12567) BlockPlacementPolicyRackFaultTolerant fails with racks with very few nodes

2017-10-04 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12567?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16192372#comment-16192372
 ] 

Hadoop QA commented on HDFS-12567:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
20s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 
45s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
41s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
10s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 31s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
51s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
46s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 8s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m  
1s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 41s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch 
generated 6 new + 9 unchanged - 0 fixed = 15 total (was 9) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m  5s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
41s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 97m 54s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
22s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}144m 18s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting |
|   | hadoop.hdfs.server.datanode.TestDataNodeErasureCodingMetrics |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:71bbb86 |
| JIRA Issue | HDFS-12567 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12890454/HDFS-12567.001.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux aae505eafa14 3.13.0-123-generic #172-Ubuntu SMP Mon Jun 26 
18:04:35 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / cae1c73 |
| Default Java | 1.8.0_144 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDFS-Build/21525/artifact/patchprocess/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/21525/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 

[jira] [Updated] (HDFS-12567) BlockPlacementPolicyRackFaultTolerant fails with racks with very few nodes

2017-10-04 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12567?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HDFS-12567:
---
Attachment: HDFS-12567.002.patch

Good point, thanks for reviewing, Eddy! I had an idea earlier about 
parameterizing this test with different numbers of nodes/racks, but found that 
this single test was able to reproduce the issue.

> BlockPlacementPolicyRackFaultTolerant fails with racks with very few nodes
> --
>
> Key: HDFS-12567
> URL: https://issues.apache.org/jira/browse/HDFS-12567
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: erasure-coding
>Affects Versions: 3.0.0-alpha1
>Reporter: Andrew Wang
>Assignee: Andrew Wang
>  Labels: hdfs-ec-3.0-must-do
> Attachments: HDFS-12567.001.patch, HDFS-12567.002.patch, 
> HDFS-12567.repro.patch
>
>
> Found this while doing some testing on an internal cluster with an unusual 
> setup. We have a rack with ~20 nodes, then a few more with just a few nodes. 
> It would fail to get (# data blocks) datanodes even though there were plenty 
> of DNs on the rack with 20 DNs.
> I managed to reproduce this same issue in a unit test, stack trace like this:
> {noformat}
> java.io.IOException: File /testfile0 could only be written to 5 of the 6 
> required nodes for RS-6-3-1024k. There are 9 datanode(s) running and no 
> node(s) are excluded in this operation.
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget4NewBlock(BlockManager.java:2083)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSDirWriteFileOp.chooseTargetForNewBlock(FSDirWriteFileOp.java:286)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:2609)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:863)
>   at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:548)
>   at 
> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:523)
>   at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:991)
>   at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:869)
>   at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:815)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1962)
>   at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2675)
> {noformat}
> This isn't a very critical bug since it's an unusual rack configuration, but 
> it can easily happen during testing.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12567) BlockPlacementPolicyRackFaultTolerant fails with racks with very few nodes

2017-10-04 Thread Lei (Eddy) Xu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12567?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16192229#comment-16192229
 ] 

Lei (Eddy) Xu commented on HDFS-12567:
--

Thanks for the patch, [~andrew.wang]

One nit: why not mark {{setup()}} / {{teardown()}} in the test case with 
{{@Before}} / {{@After}}? Also, shouldn't {{cluster.waitActive()}} be placed 
before {{setErasureCodingPolicy}}?

+1 pending the change and jenkins.
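
A rough JUnit sketch of the suggested ordering (the field names and the policy 
variable are hypothetical):
{code}
@Before
public void setup() throws IOException {
  cluster = new MiniDFSCluster.Builder(conf)
      .numDataNodes(numDataNodes).racks(racks).build();
  // Wait until all DNs have registered before touching the EC policy.
  cluster.waitActive();
  cluster.getFileSystem().setErasureCodingPolicy(
      new Path("/"), ecPolicy.getName());
}

@After
public void teardown() {
  if (cluster != null) {
    cluster.shutdown();
  }
}
{code}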

> BlockPlacementPolicyRackFaultTolerant fails with racks with very few nodes
> --
>
> Key: HDFS-12567
> URL: https://issues.apache.org/jira/browse/HDFS-12567
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: erasure-coding
>Affects Versions: 3.0.0-alpha1
>Reporter: Andrew Wang
>Assignee: Andrew Wang
>  Labels: hdfs-ec-3.0-must-do
> Attachments: HDFS-12567.001.patch, HDFS-12567.repro.patch
>
>
> Found this while doing some testing on an internal cluster with an unusual 
> setup. We have a rack with ~20 nodes, then a few more with just a few nodes. 
> It would fail to get (# data blocks) datanodes even though there were plenty 
> of DNs on the rack with 20 DNs.
> I managed to reproduce this same issue in a unit test, stack trace like this:
> {noformat}
> java.io.IOException: File /testfile0 could only be written to 5 of the 6 
> required nodes for RS-6-3-1024k. There are 9 datanode(s) running and no 
> node(s) are excluded in this operation.
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget4NewBlock(BlockManager.java:2083)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSDirWriteFileOp.chooseTargetForNewBlock(FSDirWriteFileOp.java:286)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:2609)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:863)
>   at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:548)
>   at 
> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:523)
>   at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:991)
>   at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:869)
>   at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:815)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1962)
>   at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2675)
> {noformat}
> This isn't a very critical bug since it's an unusual rack configuration, but 
> it can easily happen during testing.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12543) Ozone : allow create key without specifying size

2017-10-04 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12543?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16192214#comment-16192214
 ] 

Hadoop QA commented on HDFS-12543:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
30s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 4 new or modified test 
files. {color} |
|| || || || {color:brown} HDFS-7240 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
37s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 
17s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
39s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
49s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
48s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 59s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
54s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
57s{color} | {color:green} HDFS-7240 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
9s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green}  1m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
47s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 47s{color} | {color:orange} hadoop-hdfs-project: The patch generated 1 new + 
2 unchanged - 0 fixed = 3 total (was 2) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 57s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m  
7s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
48s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
32s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}152m 18s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
27s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}218m 51s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.ozone.ozShell.TestOzoneShell |
|   | hadoop.ozone.web.TestOzoneRestWithMiniCluster |
|   | hadoop.ozone.container.common.impl.TestContainerPersistence |
|   | hadoop.ozone.web.client.TestKeysRatis |
|   | hadoop.ozone.web.client.TestKeys |
|   | hadoop.ozone.TestOzoneConfigurationFields |
|   | hadoop.cblock.TestCBlockReadWrite |
|   | hadoop.ozone.TestStorageContainerManager |
| Timed out junit tests | org.apache.hadoop.ozone.ksm.TestKSMSQLCli |
|   | 

[jira] [Comment Edited] (HDFS-11902) [READ] Merge BlockFormatProvider and FileRegionProvider.

2017-10-04 Thread Virajith Jalaparti (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11902?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16192211#comment-16192211
 ] 

Virajith Jalaparti edited comment on HDFS-11902 at 10/4/17 11:50 PM:
-

Posting a new patch that, in addition to the earlier changes, contains the 
following:
(a) Merges BlockFormat and BlockProvider, since BlockFormat already contains 
most of the functionality.
(b) Renames BlockFormat to BlockAliasMap; the implementations of BlockFormat 
and the associated configuration parameters are renamed accordingly.

These changes are based on the following feedback from [~ehiggs] and 
[~KasperJanssens]:
bq. "What is the objective of the BlockFormat, BlockProvider and 
BlockFormatProvider classes. BlockFormatProvider implements BlockProvider and 
composes a BlockFormat. A lot of the calls on BlockFormatProvider are just 
dispatched through to the BlockFormat, which makes BlockProvider and 
BlockFormat have very very similar interfaces. It looks like they can all be 
replaced with one interface, largely. They give rise to lots of extra classes, 
like an implementation of BlockFormatProvider for the csv file and an 
implementation of BlockFormat for the csv file, which dispatch to each other 
and look like they can all be done in one interface implementation which will 
stay a lot smaller in scope than most of the other classes and will make it far 
easier to see what’s going on."
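
A hedged sketch of the shape of the consolidated abstraction described above 
(the exact signatures in the patch may differ):
{code}
// One abstraction instead of BlockFormatProvider + BlockFormat: the
// Namenode obtains a Reader of block aliases, tooling obtains a Writer.
public abstract class BlockAliasMap<T extends BlockAlias> {
  public abstract Reader<T> getReader(Reader.Options opts) throws IOException;
  public abstract Writer<T> getWriter(Writer.Options opts) throws IOException;
}
{code}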


was (Author: virajith):
Posting a new patch that, in addition to the earlier changes, contains the 
following:
(a) Merges BlockFormat and BlockProvider, since BlockFormat already contains 
most of the functionality.
(b) Renames BlockFormat to BlockAliasMap; the implementations of BlockFormat 
and the associated configuration parameters are renamed accordingly.

> [READ] Merge BlockFormatProvider and FileRegionProvider.
> 
>
> Key: HDFS-11902
> URL: https://issues.apache.org/jira/browse/HDFS-11902
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Virajith Jalaparti
>Assignee: Virajith Jalaparti
> Attachments: HDFS-11902-HDFS-9806.001.patch, 
> HDFS-11902-HDFS-9806.002.patch, HDFS-11902-HDFS-9806.003.patch, 
> HDFS-11902-HDFS-9806.004.patch, HDFS-11902-HDFS-9806.005.patch, 
> HDFS-11902-HDFS-9806.006.patch, HDFS-11902-HDFS-9806.007.patch
>
>
> Currently {{BlockFormatProvider}} and {{TextFileRegionProvider}} perform 
> almost the same function on the Namenode and Datanode respectively. This JIRA 
> is to merge them into one.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11902) [READ] Merge BlockFormatProvider and FileRegionProvider.

2017-10-04 Thread Virajith Jalaparti (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11902?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Virajith Jalaparti updated HDFS-11902:
--
Attachment: HDFS-11902-HDFS-9806.007.patch

Posting a new patch that, in addition to the earlier changes, contains the 
following:
(a) Merges BlockFormat and BlockProvider, since BlockFormat already contains 
most of the functionality.
(b) Renames BlockFormat to BlockAliasMap; the implementations of BlockFormat 
and the associated configuration parameters are renamed accordingly.

> [READ] Merge BlockFormatProvider and FileRegionProvider.
> 
>
> Key: HDFS-11902
> URL: https://issues.apache.org/jira/browse/HDFS-11902
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Virajith Jalaparti
>Assignee: Virajith Jalaparti
> Attachments: HDFS-11902-HDFS-9806.001.patch, 
> HDFS-11902-HDFS-9806.002.patch, HDFS-11902-HDFS-9806.003.patch, 
> HDFS-11902-HDFS-9806.004.patch, HDFS-11902-HDFS-9806.005.patch, 
> HDFS-11902-HDFS-9806.006.patch, HDFS-11902-HDFS-9806.007.patch
>
>
> Currently {{BlockFormatProvider}} and {{TextFileRegionProvider}} perform 
> almost the same function on the Namenode and Datanode respectively. This JIRA 
> is to merge them into one.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11902) [READ] Merge BlockFormatProvider and FileRegionProvider.

2017-10-04 Thread Virajith Jalaparti (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11902?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Virajith Jalaparti updated HDFS-11902:
--
Status: Open  (was: Patch Available)

> [READ] Merge BlockFormatProvider and FileRegionProvider.
> 
>
> Key: HDFS-11902
> URL: https://issues.apache.org/jira/browse/HDFS-11902
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Virajith Jalaparti
>Assignee: Virajith Jalaparti
> Attachments: HDFS-11902-HDFS-9806.001.patch, 
> HDFS-11902-HDFS-9806.002.patch, HDFS-11902-HDFS-9806.003.patch, 
> HDFS-11902-HDFS-9806.004.patch, HDFS-11902-HDFS-9806.005.patch, 
> HDFS-11902-HDFS-9806.006.patch, HDFS-11902-HDFS-9806.007.patch
>
>
> Currently {{BlockFormatProvider}} and {{TextFileRegionProvider}} perform 
> almost the same function on the Namenode and Datanode respectively. This JIRA 
> is to merge them into one.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11902) [READ] Merge BlockFormatProvider and FileRegionProvider.

2017-10-04 Thread Virajith Jalaparti (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11902?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Virajith Jalaparti updated HDFS-11902:
--
Status: Patch Available  (was: Open)

> [READ] Merge BlockFormatProvider and FileRegionProvider.
> 
>
> Key: HDFS-11902
> URL: https://issues.apache.org/jira/browse/HDFS-11902
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Virajith Jalaparti
>Assignee: Virajith Jalaparti
> Attachments: HDFS-11902-HDFS-9806.001.patch, 
> HDFS-11902-HDFS-9806.002.patch, HDFS-11902-HDFS-9806.003.patch, 
> HDFS-11902-HDFS-9806.004.patch, HDFS-11902-HDFS-9806.005.patch, 
> HDFS-11902-HDFS-9806.006.patch, HDFS-11902-HDFS-9806.007.patch
>
>
> Currently {{BlockFormatProvider}} and {{TextFileRegionProvider}} perform 
> almost the same function on the Namenode and Datanode respectively. This JIRA 
> is to merge them into one.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12567) BlockPlacementPolicyRackFaultTolerant fails with racks with very few nodes

2017-10-04 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12567?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HDFS-12567:
---
Status: Patch Available  (was: Open)

> BlockPlacementPolicyRackFaultTolerant fails with racks with very few nodes
> --
>
> Key: HDFS-12567
> URL: https://issues.apache.org/jira/browse/HDFS-12567
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: erasure-coding
>Affects Versions: 3.0.0-alpha1
>Reporter: Andrew Wang
>Assignee: Andrew Wang
>  Labels: hdfs-ec-3.0-must-do
> Attachments: HDFS-12567.001.patch, HDFS-12567.repro.patch
>
>
> Found this while doing some testing on an internal cluster with an unusual 
> setup. We have a rack with ~20 nodes, then a few more with just a few nodes. 
> It would fail to get (# data blocks) datanodes even though there were plenty 
> of DNs on the rack with 20 DNs.
> I managed to reproduce this same issue in a unit test, stack trace like this:
> {noformat}
> java.io.IOException: File /testfile0 could only be written to 5 of the 6 
> required nodes for RS-6-3-1024k. There are 9 datanode(s) running and no 
> node(s) are excluded in this operation.
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget4NewBlock(BlockManager.java:2083)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSDirWriteFileOp.chooseTargetForNewBlock(FSDirWriteFileOp.java:286)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:2609)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:863)
>   at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:548)
>   at 
> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:523)
>   at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:991)
>   at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:869)
>   at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:815)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1962)
>   at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2675)
> {noformat}
> This isn't a very critical bug since it's an unusual rack configuration, but 
> it can easily happen during testing.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12567) BlockPlacementPolicyRackFaultTolerant fails with racks with very few nodes

2017-10-04 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12567?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HDFS-12567:
---
Attachment: HDFS-12567.001.patch

Patch attached.

This basically wraps the current logic with a fallback that removes the 
maxNodesPerRack limit when we fail to place enough racks. I used the earlier 
repro patch as the unit test, which now passes. 
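
A hedged sketch of the fallback shape (the helper method name is hypothetical, 
not the actual patch):
{code}
// First attempt honors the per-rack cap; if placement fails because some
// racks are too small to satisfy it, retry placement without the cap.
try {
  chooseTargets(numOfReplicas, maxNodesPerRack, results);
} catch (NotEnoughReplicasException e) {
  chooseTargets(numOfReplicas, Integer.MAX_VALUE, results);
}
{code}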

> BlockPlacementPolicyRackFaultTolerant fails with racks with very few nodes
> --
>
> Key: HDFS-12567
> URL: https://issues.apache.org/jira/browse/HDFS-12567
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: erasure-coding
>Affects Versions: 3.0.0-alpha1
>Reporter: Andrew Wang
>Assignee: Andrew Wang
>  Labels: hdfs-ec-3.0-must-do
> Attachments: HDFS-12567.001.patch, HDFS-12567.repro.patch
>
>
> Found this while doing some testing on an internal cluster with an unusual 
> setup. We have a rack with ~20 nodes, then a few more with just a few nodes. 
> It would fail to get (# data blocks) datanodes even though there were plenty 
> of DNs on the rack with 20 DNs.
> I managed to reproduce this same issue in a unit test, stack trace like this:
> {noformat}
> java.io.IOException: File /testfile0 could only be written to 5 of the 6 
> required nodes for RS-6-3-1024k. There are 9 datanode(s) running and no 
> node(s) are excluded in this operation.
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget4NewBlock(BlockManager.java:2083)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSDirWriteFileOp.chooseTargetForNewBlock(FSDirWriteFileOp.java:286)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:2609)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:863)
>   at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:548)
>   at 
> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:523)
>   at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:991)
>   at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:869)
>   at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:815)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1962)
>   at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2675)
> {noformat}
> This isn't a very critical bug since it's an unusual rack configuration, but 
> it can easily happen during testing.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDFS-12567) BlockPlacementPolicyRackFaultTolerant fails with racks with very few nodes

2017-10-04 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12567?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang reassigned HDFS-12567:
--

Assignee: Andrew Wang

> BlockPlacementPolicyRackFaultTolerant fails with racks with very few nodes
> --
>
> Key: HDFS-12567
> URL: https://issues.apache.org/jira/browse/HDFS-12567
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: erasure-coding
>Affects Versions: 3.0.0-alpha1
>Reporter: Andrew Wang
>Assignee: Andrew Wang
>  Labels: hdfs-ec-3.0-must-do
> Attachments: HDFS-12567.repro.patch
>
>
> Found this while doing some testing on an internal cluster with an unusual 
> setup. We have a rack with ~20 nodes, then a few more with just a few nodes. 
> It would fail to get (# data blocks) datanodes even though there were plenty 
> of DNs on the rack with 20 DNs.
> I managed to reproduce this same issue in a unit test, stack trace like this:
> {noformat}
> java.io.IOException: File /testfile0 could only be written to 5 of the 6 
> required nodes for RS-6-3-1024k. There are 9 datanode(s) running and no 
> node(s) are excluded in this operation.
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget4NewBlock(BlockManager.java:2083)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSDirWriteFileOp.chooseTargetForNewBlock(FSDirWriteFileOp.java:286)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:2609)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:863)
>   at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:548)
>   at 
> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:523)
>   at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:991)
>   at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:869)
>   at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:815)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1962)
>   at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2675)
> {noformat}
> This isn't a very critical bug since it's an unusual rack configuration, but 
> it can easily happen during testing.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDFS-12452) TestDataNodeVolumeFailureReporting fails in trunk Jenkins runs

2017-10-04 Thread Hanisha Koneru (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12452?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hanisha Koneru reassigned HDFS-12452:
-

Assignee: (was: Hanisha Koneru)

> TestDataNodeVolumeFailureReporting fails in trunk Jenkins runs
> --
>
> Key: HDFS-12452
> URL: https://issues.apache.org/jira/browse/HDFS-12452
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Reporter: Arpit Agarwal
>Priority: Critical
>  Labels: flaky-test
>
> TestDataNodeVolumeFailureReporting#testSuccessiveVolumeFailures fails 
> frequently in Jenkins runs but it passes locally on my dev machine.
> e.g. 
> https://builds.apache.org/job/PreCommit-HDFS-Build/21134/testReport/org.apache.hadoop.hdfs.server.datanode/TestDataNodeVolumeFailureReporting/testSuccessiveVolumeFailures/
> {code}
> Error Message
> test timed out after 12 milliseconds
> Stacktrace
> java.lang.Exception: test timed out after 12 milliseconds
>   at java.lang.Thread.sleep(Native Method)
>   at 
> org.apache.hadoop.hdfs.DFSTestUtil.waitReplication(DFSTestUtil.java:761)
>   at 
> org.apache.hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting.testSuccessiveVolumeFailures(TestDataNodeVolumeFailureReporting.java:189)
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12543) Ozone : allow create key without specifying size

2017-10-04 Thread Xiaoyu Yao (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12543?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16192147#comment-16192147
 ] 

Xiaoyu Yao commented on HDFS-12543:
---

Thanks [~vagarychen] for the update. patch v10 looks pretty good to me. +1 
pending Jenkins.


> Ozone : allow create key without specifying size
> 
>
> Key: HDFS-12543
> URL: https://issues.apache.org/jira/browse/HDFS-12543
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Chen Liang
>Assignee: Chen Liang
>  Labels: ozoneMerge
> Attachments: HDFS-12543-HDFS-7240.001.patch, 
> HDFS-12543-HDFS-7240.002.patch, HDFS-12543-HDFS-7240.003.patch, 
> HDFS-12543-HDFS-7240.004.patch, HDFS-12543-HDFS-7240.005.patch, 
> HDFS-12543-HDFS-7240.006.patch, HDFS-12543-HDFS-7240.007.patch, 
> HDFS-12543-HDFS-7240.008.patch, HDFS-12543-HDFS-7240.009.patch, 
> HDFS-12543-HDFS-7240.010.patch
>
>
> Currently, when creating a key, it is required to specify the total size of 
> the key. This makes it inconvenient for the case where a key is created and 
> data keeps arriving and being appended. This JIRA is to remove the 
> requirement of specifying the size on key creation and to allow appending to 
> the key indefinitely.






[jira] [Created] (HDFS-12592) JournalNodeSyncer does not discover newly added JournalNodes

2017-10-04 Thread Hanisha Koneru (JIRA)
Hanisha Koneru created HDFS-12592:
-

 Summary: JournalNodeSyncer does not discover newly added 
JournalNodes
 Key: HDFS-12592
 URL: https://issues.apache.org/jira/browse/HDFS-12592
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Hanisha Koneru
Assignee: Hanisha Koneru


The JournalNodeSyncer, during initialization, gets the list of other journal 
nodes from the config, and all subsequent sync calls are made to those journal 
nodes. If another journal node is added to the cluster, or a journal node is 
restarted at a different address, the running JournalNodeSyncers would not be 
aware of the new/modified journal nodes. We would need to update the 
shared.edits property on all the journal nodes and restart them for the 
syncers to work correctly.
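
For context, a simplified illustration of why the peer list is static (the 
config key is real; the surrounding code is a sketch, not the actual 
JournalNodeSyncer source):

{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hdfs.DFSConfigKeys;

// Sketch: the peer JN list comes from one config value, read once.
Configuration conf = new Configuration();
String sharedEdits =
    conf.get(DFSConfigKeys.DFS_NAMENODE_SHARED_EDITS_DIR_KEY);
// e.g. qjournal://jn1:8485;jn2:8485;jn3:8485/mycluster. Parsed only at
// startup, so a JN added later, or restarted on a new address, is never
// picked up until the value is updated and the JNs are restarted.
{code}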






[jira] [Commented] (HDFS-12579) JournalNodeSyncer should use fromUrl field of EditLogManifestResponse to construct servlet Url

2017-10-04 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12579?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16192075#comment-16192075
 ] 

Hadoop QA commented on HDFS-12579:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
38s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 
42s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
49s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
37s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
54s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m  6s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
42s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
41s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
33s{color} | {color:green} hadoop-hdfs-project/hadoop-hdfs: The patch generated 
0 new + 3 unchanged - 1 fixed = 3 total (was 4) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
9m 41s{color} | {color:green} patch has no errors when building and testing our 
client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
38s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 89m 33s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
20s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}133m 48s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.namenode.ha.TestFailureToReadEdits |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:71bbb86 |
| JIRA Issue | HDFS-12579 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12890251/HDFS-12579.001.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux b21d412bfd86 3.13.0-129-generic #178-Ubuntu SMP Fri Aug 11 
12:48:20 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 20e9ce3 |
| Default Java | 1.8.0_144 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/21522/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/21522/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/21522/console |

[jira] [Commented] (HDFS-12582) Replace HdfsFileStatus constructor with a builder pattern.

2017-10-04 Thread Chen Liang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12582?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16192060#comment-16192060
 ] 

Chen Liang commented on HDFS-12582:
---

Thanks [~bharatviswa] for working on this! Could you please take a look and 
verify that the test failures are unrelated?

> Replace HdfsFileStatus constructor with a builder pattern.
> --
>
> Key: HDFS-12582
> URL: https://issues.apache.org/jira/browse/HDFS-12582
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
> Attachments: HDFS-12582.01.patch
>
>







[jira] [Commented] (HDFS-5823) Document async audit logging

2017-10-04 Thread Yongjun Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5823?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16192043#comment-16192043
 ] 

Yongjun Zhang commented on HDFS-5823:
-

Hi [~daryn],

I have the same question as [~benoyantony], would you please comment on the 
usage and stability of this feature?

Thanks a lot.


> Document async audit logging
> 
>
> Key: HDFS-5823
> URL: https://issues.apache.org/jira/browse/HDFS-5823
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Affects Versions: 2.0.0-alpha, 3.0.0-alpha1
>Reporter: Daryn Sharp
>Assignee: Daryn Sharp
>
> HDFS-5241 added an option for async log4j audit logging.  The option is 
> considered semi-experimental and should be documented in hdfs-defaults.xml 
> after it's stability under stress is proven.






[jira] [Commented] (HDFS-12591) [READ] Implement LevelDBFileRegionFormat

2017-10-04 Thread Jonathan Eagles (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12591?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16192007#comment-16192007
 ] 

Jonathan Eagles commented on HDFS-12591:


I can't comment regarding the whole of the JIRA, but it looks like there is a 
memory leak introduced by obtaining a {{DBIterator}} without calling close on 
it, similar to the work done in YARN-5368.
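
For illustration, a minimal leak-free pattern with the iq80 leveldb API 
({{DBIterator}} is {{Closeable}}, so try-with-resources works); this is a 
sketch, not code from the patch:

{code:java}
import java.io.IOException;
import java.util.Map;
import org.iq80.leveldb.DB;
import org.iq80.leveldb.DBIterator;

// Sketch only: iterate over a LevelDB without leaking the iterator.
void scan(DB db) throws IOException {
  try (DBIterator iter = db.iterator()) {
    for (iter.seekToFirst(); iter.hasNext();) {
      Map.Entry<byte[], byte[]> entry = iter.next();
      // decode entry.getKey() / entry.getValue() here
    }
  } // iterator closed here even if decoding throws
}
{code}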

> [READ] Implement LevelDBFileRegionFormat
> 
>
> Key: HDFS-12591
> URL: https://issues.apache.org/jira/browse/HDFS-12591
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs
>Reporter: Ewan Higgs
>Priority: Minor
> Attachments: HDFS-12591-HDFS-9806.001.patch
>
>
> The existing work for HDFS-9806 uses an implementation of the {{FileRegion}} 
> read from a csv file. This is good for testability and diagnostic purposes, 
> but it is not very efficient for larger systems.
> There should be a version that is similar to the {{TextFileRegionFormat}} 
> that instead uses LevelDB.






[jira] [Commented] (HDFS-12591) [READ] Implement LevelDBFileRegionFormat

2017-10-04 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12591?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16191995#comment-16191995
 ] 

Hadoop QA commented on HDFS-12591:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m  5s{color} 
| {color:red} HDFS-12591 does not apply to HDFS-9806. Rebase required? Wrong 
Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | HDFS-12591 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12890432/HDFS-12591-HDFS-9806.001.patch
 |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/21524/console |
| Powered by | Apache Yetus 0.6.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> [READ] Implement LevelDBFileRegionFormat
> 
>
> Key: HDFS-12591
> URL: https://issues.apache.org/jira/browse/HDFS-12591
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs
>Reporter: Ewan Higgs
>Priority: Minor
> Attachments: HDFS-12591-HDFS-9806.001.patch
>
>
> The existing work for HDFS-9806 uses an implementation of the {{FileRegion}} 
> read from a csv file. This is good for testability and diagnostic purposes, 
> but it is not very efficient for larger systems.
> There should be a version that is similar to the {{TextFileRegionFormat}} 
> that instead uses LevelDB.






[jira] [Updated] (HDFS-12591) [READ] Implement LevelDBFileRegionFormat

2017-10-04 Thread Ewan Higgs (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12591?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ewan Higgs updated HDFS-12591:
--
Attachment: HDFS-12591-HDFS-9806.001.patch

Attaching work I've previously done for this. It needs to be rebased onto the 
HDFS-9806 branch.

> [READ] Implement LevelDBFileRegionFormat
> 
>
> Key: HDFS-12591
> URL: https://issues.apache.org/jira/browse/HDFS-12591
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs
>Reporter: Ewan Higgs
>Priority: Minor
> Attachments: HDFS-12591-HDFS-9806.001.patch
>
>
> The existing work for HDFS-9806 uses an implementation of the {{FileRegion}} 
> read from a csv file. This is good for testability and diagnostic purposes, 
> but it is not very efficient for larger systems.
> There should be a version that is similar to the {{TextFileRegionFormat}} 
> that instead uses LevelDB.






[jira] [Updated] (HDFS-12591) [READ] Implement LevelDBFileRegionFormat

2017-10-04 Thread Ewan Higgs (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12591?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ewan Higgs updated HDFS-12591:
--
Status: Patch Available  (was: Open)

> [READ] Implement LevelDBFileRegionFormat
> 
>
> Key: HDFS-12591
> URL: https://issues.apache.org/jira/browse/HDFS-12591
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs
>Reporter: Ewan Higgs
>Priority: Minor
> Attachments: HDFS-12591-HDFS-9806.001.patch
>
>
> The existing work for HDFS-9806 uses an implementation of the {{FileRegion}} 
> read from a csv file. This is good for testability and diagnostic purposes, 
> but it is not very efficient for larger systems.
> There should be a version that is similar to the {{TextFileRegionFormat}} 
> that instead uses LevelDB.






[jira] [Created] (HDFS-12591) [READ] Implement LevelDBFileRegionFormat

2017-10-04 Thread Ewan Higgs (JIRA)
Ewan Higgs created HDFS-12591:
-

 Summary: [READ] Implement LevelDBFileRegionFormat
 Key: HDFS-12591
 URL: https://issues.apache.org/jira/browse/HDFS-12591
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: hdfs
Reporter: Ewan Higgs
Priority: Minor


The existing work for HDFS-9806 uses an implementation of the {{FileRegion}} 
read from a csv file. This is good for testability and diagnostic purposes, but 
it is not very efficient for larger systems.

There should be a version that is similar to the {{TextFileRegionFormat}} that 
instead uses LevelDB.






[jira] [Updated] (HDFS-12543) Ozone : allow create key without specifying size

2017-10-04 Thread Chen Liang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12543?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chen Liang updated HDFS-12543:
--
Attachment: HDFS-12543-HDFS-7240.010.patch

Thanks [~nandakumar131] for the review and the comments! Posted the v010 patch 
to rebase; it also addresses [~nandakumar131]'s comments, and all the other 
comments are addressed as well.

bq. I understand the optimization done with key size, but if we are going to 
remove it later why depend on it now?

In the general case, a client opens a key and then starts writing to blocks. 
So in the original design, when a key is opened, a single "pre-allocated 
block" is also allocated to it, such that the client does not need to issue 
another allocate-block call after opening the key.

But it turns out the tricky part is that it is not clear how many such 
"pre-allocated" blocks should actually be allocated. It could be 0, when the 
client tries to write an empty data array (as in some test cases). It could 
be some X > 1 blocks, if the client already knows more than one block will be 
written. So the size is used as a "hint" from the client to tell how many 
such "pre-allocated" blocks should be allocated. If the client is about to 
write zero-length data, or does not know how much will be written, it sets 
the hint to 0 and no pre-allocation happens at open-key time. This makes 
"size" purely an optional optimization.

I would consider "pre-allocated" blocks a potentially very helpful 
optimization, so I'm not entirely sure whether the key size here is 
completely redundant and should be removed. That's why I wanted to resolve 
this in another JIRA, and follow up when we have a better idea of how useful 
this turns out to be, or when we have a better approach to do it.
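
As a rough illustration of the hint behavior described above (the method name 
and the rounding are mine, not the actual KSM code):

{code:java}
// Hypothetical sketch of size-as-hint pre-allocation; not the KSM source.
long blocksToPreAllocate(long sizeHint, long blockSize) {
  if (sizeHint <= 0) {
    return 0; // no hint: allocate blocks lazily as data arrives
  }
  // round up so a partial trailing block is still pre-allocated
  return (sizeHint + blockSize - 1) / blockSize;
}
{code}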


> Ozone : allow create key without specifying size
> 
>
> Key: HDFS-12543
> URL: https://issues.apache.org/jira/browse/HDFS-12543
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Chen Liang
>Assignee: Chen Liang
>  Labels: ozoneMerge
> Attachments: HDFS-12543-HDFS-7240.001.patch, 
> HDFS-12543-HDFS-7240.002.patch, HDFS-12543-HDFS-7240.003.patch, 
> HDFS-12543-HDFS-7240.004.patch, HDFS-12543-HDFS-7240.005.patch, 
> HDFS-12543-HDFS-7240.006.patch, HDFS-12543-HDFS-7240.007.patch, 
> HDFS-12543-HDFS-7240.008.patch, HDFS-12543-HDFS-7240.009.patch, 
> HDFS-12543-HDFS-7240.010.patch
>
>
> Currently when creating a key, it is required to specify the total size of 
> the key. This makes it inconvenient for the case where a key is created and 
> data keeps coming and being appended. This JIRA is to remove the requirement 
> of specifying the size on key creation, and to allow appending to the key 
> indefinitely.






[jira] [Updated] (HDFS-12579) JournalNodeSyncer should use fromUrl field of EditLogManifestResponse to construct servlet Url

2017-10-04 Thread Hanisha Koneru (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12579?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hanisha Koneru updated HDFS-12579:
--
Status: Patch Available  (was: Open)

> JournalNodeSyncer should use fromUrl field of EditLogManifestResponse to 
> construct servlet Url
> --
>
> Key: HDFS-12579
> URL: https://issues.apache.org/jira/browse/HDFS-12579
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
> Attachments: HDFS-12579.001.patch
>
>
> Currently in JournalNodeSyncer, we construct the remote JN http server url 
> using the JN host address and the http port that we get from the 
> {{GetEditLogManifestResponseProto}}.
> {code}
>   if (remoteJNproxy.httpServerUrl == null) {
> remoteJNproxy.httpServerUrl = getHttpServerURI("http",
> remoteJNproxy.jnAddr.getHostName(), response.getHttpPort());
>   }
> {code}
> The correct way would be to get the http server url of the remote JN from the 
> {{fromUrl}} field of the {{GetEditLogManifestResponseProto}}. This would take 
> care of the http policy set on the remote JN as well.
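
A hedged sketch of the direction such a fix might take (the {{getFromURL()}} 
accessor and the URL type are assumptions based on the proto field name, not 
taken from the patch):

{code:java}
// Hedged sketch; accessor and types are assumed, not from the patch.
if (remoteJNproxy.httpServerUrl == null) {
  // Prefer the URL the remote JN reports about itself, which already
  // reflects its configured http/https policy, over gluing together
  // hostname + port on the caller side.
  remoteJNproxy.httpServerUrl = new URL(response.getFromURL());
}
{code}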






[jira] [Commented] (HDFS-12543) Ozone : allow create key without specifying size

2017-10-04 Thread Chen Liang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12543?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16191806#comment-16191806
 ] 

Chen Liang commented on HDFS-12543:
---

[~xyao], working on it, will submit a patch later today. Thanks!

> Ozone : allow create key without specifying size
> 
>
> Key: HDFS-12543
> URL: https://issues.apache.org/jira/browse/HDFS-12543
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Chen Liang
>Assignee: Chen Liang
>  Labels: ozoneMerge
> Attachments: HDFS-12543-HDFS-7240.001.patch, 
> HDFS-12543-HDFS-7240.002.patch, HDFS-12543-HDFS-7240.003.patch, 
> HDFS-12543-HDFS-7240.004.patch, HDFS-12543-HDFS-7240.005.patch, 
> HDFS-12543-HDFS-7240.006.patch, HDFS-12543-HDFS-7240.007.patch, 
> HDFS-12543-HDFS-7240.008.patch, HDFS-12543-HDFS-7240.009.patch
>
>
> Currently when creating a key, it is required to specify the total size of 
> the key. This makes it inconvenient for the case where a key is created and 
> data keeps coming and being appended. This JIRA is to remove the requirement 
> of specifying the size on key creation, and to allow appending to the key 
> indefinitely.






[jira] [Commented] (HDFS-12543) Ozone : allow create key without specifying size

2017-10-04 Thread Xiaoyu Yao (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12543?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16191799#comment-16191799
 ] 

Xiaoyu Yao commented on HDFS-12543:
---

[~vagarychen], can you rebase the patch to the latest HDFS-7240 and address 
[~nandakumar131]'s comments?

> Ozone : allow create key without specifying size
> 
>
> Key: HDFS-12543
> URL: https://issues.apache.org/jira/browse/HDFS-12543
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Chen Liang
>Assignee: Chen Liang
>  Labels: ozoneMerge
> Attachments: HDFS-12543-HDFS-7240.001.patch, 
> HDFS-12543-HDFS-7240.002.patch, HDFS-12543-HDFS-7240.003.patch, 
> HDFS-12543-HDFS-7240.004.patch, HDFS-12543-HDFS-7240.005.patch, 
> HDFS-12543-HDFS-7240.006.patch, HDFS-12543-HDFS-7240.007.patch, 
> HDFS-12543-HDFS-7240.008.patch, HDFS-12543-HDFS-7240.009.patch
>
>
> Currently when creating a key, it is required to specify the total size of 
> the key. This makes it inconvenient for the case where a key is created and 
> data keeps coming and being appended. This JIRA is to remove the requirement 
> of specifying the size on key creation, and to allow appending to the key 
> indefinitely.






[jira] [Updated] (HDFS-12557) Ozone: Improve the formatting of the RPC stats on web UI

2017-10-04 Thread Anu Engineer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12557?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDFS-12557:

   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: HDFS-7240
   Status: Resolved  (was: Patch Available)

[~cheersyang] Thanks for flagging this issue. [~elek] Thanks for the fix. I 
have committed this to the feature branch.

> Ozone: Improve the formatting of the RPC stats on web UI
> 
>
> Key: HDFS-12557
> URL: https://issues.apache.org/jira/browse/HDFS-12557
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: HDFS-7240
>Reporter: Elek, Marton
>Assignee: Elek, Marton
> Fix For: HDFS-7240
>
> Attachments: after.png, before.png, HDFS-12557-HDFS-7240.patch
>
>
> During HDFS-12477 [~cheersyang] suggested to improve the formatting of the 
> rpcmetrics in the KSM/SMC web ui:
> https://issues.apache.org/jira/browse/HDFS-12477?focusedCommentId=16177816=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16177816
> {quote}
> One more thing, it seems we have too much accuracy here
> Metric name   Number of ops   Average time
> RpcQueueTime  300 0.167019333
> RpcProcessingTime 300 6.5403023
> maybe 0.167 and 6.540 is enough? And what is the unit of the average time, 
> can we add the unit in the header column? {quote}
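
For reference, the rounding being suggested in the quote is a one-liner (a 
sketch; the committed patch may format differently):

{code:java}
// Sketch: three decimal places plus an explicit unit in the column.
String formatted = String.format("%.3f ms", 0.167019333); // "0.167 ms"
{code}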






[jira] [Commented] (HDFS-12557) Ozone: Improve the formatting of the RPC stats on web UI

2017-10-04 Thread Anu Engineer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12557?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16191709#comment-16191709
 ] 

Anu Engineer commented on HDFS-12557:
-

+1, I will commit this shortly.


> Ozone: Improve the formatting of the RPC stats on web UI
> 
>
> Key: HDFS-12557
> URL: https://issues.apache.org/jira/browse/HDFS-12557
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: HDFS-7240
>Reporter: Elek, Marton
>Assignee: Elek, Marton
> Attachments: after.png, before.png, HDFS-12557-HDFS-7240.patch
>
>
> During HDFS-12477 [~cheersyang] suggested to improve the formatting of the 
> rpcmetrics in the KSM/SMC web ui:
> https://issues.apache.org/jira/browse/HDFS-12477?focusedCommentId=16177816=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16177816
> {quote}
> One more thing, it seems we have too much accuracy here
> Metric name   Number of ops   Average time
> RpcQueueTime  300 0.167019333
> RpcProcessingTime 300 6.5403023
> maybe 0.167 and 6.540 is enough? And what is the unit of the average time, 
> can we add the unit in the header column? {quote}






[jira] [Commented] (HDFS-12583) Ozone: Fix swallow exceptions which makes hard to debug failures

2017-10-04 Thread Anu Engineer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12583?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16191693#comment-16191693
 ] 

Anu Engineer commented on HDFS-12583:
-

[~vagarychen], [~cheersyang], [~linyiqun] Thanks for a very interesting 
discussion.

I am of the opinion that even if we carry the full exception to the client, 
it still has to be encapsulated inside the error model that we support. Here 
is the issue with exceptions: they don't work well across process boundaries. 
Sending them over the wire assumes that clients are always written in Java, 
or at least that they can understand what these exceptions mean. If we start 
leaking these server-side abstractions over RPC, then writing clients in any 
language other than Java becomes harder. So in the Ozone world, it is 
imperative that we keep the current model, so we can write Ozone clients in 
other languages like C++ or Python without hard-coding Java language features.

Given that, if you want to send an additional string which other languages 
can ignore and which has special meaning for Java clients, I am OK with that 
(though I would like to avoid it), since in my mind the RPC protocol is a 
boundary where server abstractions don't leak over but are always clearly 
defined via protobuf. In a perfect world (and I am not asking us to do this), 
the way to send a Java exception on the wire would be to create a protobuf 
struct for a Java exception, send that stack over, and let the various 
clients decode it. In practice, however, no clients (not even Java clients) 
decode the stack; instead, they just print it to a log or the screen. So it 
might be something you can add as an inner exception and send across as a 
string, if that helps.
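
A minimal sketch of the string-carrying idea in plain Java (how the string is 
attached to the wire message is left out, since those field names would be 
assumptions):

{code:java}
import java.io.PrintWriter;
import java.io.StringWriter;

// Sketch: render the server-side cause as an opaque string that
// non-Java clients are free to ignore.
static String stackAsString(Throwable cause) {
  StringWriter sw = new StringWriter();
  cause.printStackTrace(new PrintWriter(sw, true));
  return sw.toString(); // ship in an optional string field
}
{code}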


> Ozone: Fix swallow exceptions which makes hard to debug failures
> 
>
> Key: HDFS-12583
> URL: https://issues.apache.org/jira/browse/HDFS-12583
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Yiqun Lin
>Assignee: Yiqun Lin
> Attachments: HDFS-12583-HDFS-7240.001.patch, 
> HDFS-12583-HDFS-7240.002.patch, HDFS-12583-HDFS-7240.003.patch
>
>
> There are some places that swallow exceptions, which makes it hard for the 
> client to debug the failure. For example, if getting an xceiver client from 
> the xceiver client manager fails, the client only gets error info like this:
> {noformat}
> org.apache.hadoop.ozone.web.exceptions.OzoneException: Exception getting 
> XceiverClient.
>   at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>   at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>   at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
>   at 
> com.fasterxml.jackson.databind.introspect.AnnotatedConstructor.call(AnnotatedConstructor.java:119)
>   at 
> com.fasterxml.jackson.databind.deser.std.StdValueInstantiator.createUsingDefault(StdValueInstantiator.java:243)
> {noformat}
> The original exception stack is missing. We should print the error log as well.






[jira] [Updated] (HDFS-12100) Ozone: KSM: Allocate key should honour volume quota if quota is set on the volume

2017-10-04 Thread Mukul Kumar Singh (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12100?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mukul Kumar Singh updated HDFS-12100:
-
Status: In Progress  (was: Patch Available)

> Ozone: KSM: Allocate key should honour volume quota if quota is set on the 
> volume
> -
>
> Key: HDFS-12100
> URL: https://issues.apache.org/jira/browse/HDFS-12100
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Mukul Kumar Singh
>Assignee: Lokesh Jain
>  Labels: OzonePostMerge
> Fix For: HDFS-7240
>
> Attachments: HDFS-12100-HDFS-7240.001.patch, 
> HDFS-12100-HDFS-7240.002.patch, HDFS-12100-HDFS-7240.003.patch
>
>
> KeyManagerImpl#allocateKey currently does not check the volume quota before 
> allocating a key, which can cause the volume quota to be overrun.
> The volume quota needs to be checked before allocating the key in the SCM.
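
As a rough sketch of the missing check (the getters, the exception, and 
treating quota <= 0 as "no quota set" are all assumptions, not the actual 
KeyManagerImpl code):

{code:java}
// Hypothetical sketch of the quota check; names are placeholders.
long used = volumeInfo.getUsedBytes();
long quota = volumeInfo.getQuotaInBytes();
if (quota > 0 && used + requestedKeySize > quota) {
  throw new IOException("Volume quota exceeded for volume "
      + volumeInfo.getVolumeName());
}
{code}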






[jira] [Updated] (HDFS-12038) Ozone: Non-admin user is unable to run InfoVolume to the volume owned by itself

2017-10-04 Thread Mukul Kumar Singh (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12038?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mukul Kumar Singh updated HDFS-12038:
-
Status: In Progress  (was: Patch Available)

> Ozone: Non-admin user is unable to run InfoVolume to the volume owned by 
> itself
> ---
>
> Key: HDFS-12038
> URL: https://issues.apache.org/jira/browse/HDFS-12038
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Reporter: Weiwei Yang
>Assignee: Lokesh Jain
>  Labels: OzonePostMerge
> Attachments: HDFS-12038-HDFS-7240.001.patch
>
>
> Reproduce steps
> 1. Create a volume with a non-admin user
> {code}
> hdfs oz -createVolume http://ozone1.fyre.ibm.com:9864/volume-wwei-0 -user 
> wwei -root -quota 2TB
> {code}
> 2. Run infoVolume command to get this volume info
> {noformat}
> hdfs oz -infoVolume http://ozone1.fyre.ibm.com:9864/volume-wwei-0 -user wwei
> Command Failed : 
> {"httpCode":400,"shortMessage":"badAuthorization","resource":null,"message":"Missing
>  authorization or authorization has to be 
> unique.","requestID":"221efb47-72b9-498d-ac19-907257428573","hostName":"ozone1.fyre.ibm.com"}
> {noformat}
> Adding {{-root}} to run as the admin user can bypass this issue:
> {noformat}
> hdfs oz -infoVolume http://ozone1.fyre.ibm.com:9864/volume-wwei-0 -user wwei 
> -root
> {
>   "owner" : {
> "name" : "wwei"
>   },
>   "quota" : {
> "unit" : "TB",
> "size" : 2
>   },
>   "volumeName" : "volume-wwei-0",
>   "createdOn" : null,
>   "createdBy" : "hdfs"
> }
> {noformat}
> Expected: both the volume owner and the admin should be able to run the 
> infoVolume command.






[jira] [Updated] (HDFS-12387) Ozone: Support Ratis as a first class replication mechanism

2017-10-04 Thread Anu Engineer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12387?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDFS-12387:

   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: HDFS-7240
   Status: Resolved  (was: Patch Available)

[~vagarychen], [~nandakumar131], [~xyao] Thanks for the comments and reviews. I 
have committed this to the feature branch.

> Ozone: Support Ratis as a first class replication mechanism
> ---
>
> Key: HDFS-12387
> URL: https://issues.apache.org/jira/browse/HDFS-12387
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Anu Engineer
>Assignee: Anu Engineer
>Priority: Critical
>  Labels: ozoneMerge
> Fix For: HDFS-7240
>
> Attachments: HDFS-12387-HDFS-7240.001.patch, 
> HDFS-12387-HDFS-7240.002.patch, HDFS-12387-HDFS-7240.003.patch, 
> HDFS-12387-HDFS-7240.004.patch, HDFS-12387-HDFS-7240.005.patch, 
> HDFS-12387-HDFS-7240.006.patch, HDFS-12387-HDFS-7240.007.patch
>
>
> Ozone container layer supports pluggable replication policies. This JIRA 
> brings Apache Ratis based replication to Ozone.  Apache Ratis is a java 
> implementation of Raft protocol.






[jira] [Commented] (HDFS-12387) Ozone: Support Ratis as a first class replication mechanism

2017-10-04 Thread Xiaoyu Yao (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12387?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16191603#comment-16191603
 ] 

Xiaoyu Yao commented on HDFS-12387:
---

Thanks for the update. +1 given the test issue is understood.

> Ozone: Support Ratis as a first class replication mechanism
> ---
>
> Key: HDFS-12387
> URL: https://issues.apache.org/jira/browse/HDFS-12387
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Anu Engineer
>Assignee: Anu Engineer
>Priority: Critical
>  Labels: ozoneMerge
> Attachments: HDFS-12387-HDFS-7240.001.patch, 
> HDFS-12387-HDFS-7240.002.patch, HDFS-12387-HDFS-7240.003.patch, 
> HDFS-12387-HDFS-7240.004.patch, HDFS-12387-HDFS-7240.005.patch, 
> HDFS-12387-HDFS-7240.006.patch, HDFS-12387-HDFS-7240.007.patch
>
>
> Ozone container layer supports pluggable replication policies. This JIRA 
> brings Apache Ratis based replication to Ozone.  Apache Ratis is a java 
> implementation of Raft protocol.






[jira] [Commented] (HDFS-12583) Ozone: Fix swallow exceptions which makes hard to debug failures

2017-10-04 Thread Chen Liang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12583?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16191600#comment-16191600
 ] 

Chen Liang commented on HDFS-12583:
---

Thanks [~cheersyang], [~linyiqun] for the follow-up! I'm under the impression 
that, if we are to pass the full exception stack back to the client, there is 
probably no need to even have the error table and error codes. The idea of 
having error codes, as it appears to me, is to expose only a few categories 
of exceptions that the client side understands; one reason for this might be, 
as Yiqun mentioned, to simplify the JSON parsing. Not saying this is the best 
way though; either encapsulating the original exception or just an error 
message with an error code looks good to me. But I feel like if we want to 
carry the original exception back to the client, we might need some major 
refactoring of the error handling.
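
For example, a hedged sketch of the "don't swallow" pattern under discussion 
({{acquireClient}} and the {{OzoneException}} constructor below are 
placeholders, not the real signatures):

{code:java}
// Hedged sketch; the calls below are placeholders for the real APIs.
try {
  client = clientManager.acquireClient(pipeline);
} catch (Exception e) {
  // Log the full stack server-side so the failure stays debuggable...
  LOG.error("Exception getting XceiverClient.", e);
  // ...and keep the original as the cause (or an opaque string) when
  // mapping to the client-facing error model.
  throw new OzoneException("Exception getting XceiverClient.", e);
}
{code}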

> Ozone: Fix swallow exceptions which makes hard to debug failures
> 
>
> Key: HDFS-12583
> URL: https://issues.apache.org/jira/browse/HDFS-12583
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Yiqun Lin
>Assignee: Yiqun Lin
> Attachments: HDFS-12583-HDFS-7240.001.patch, 
> HDFS-12583-HDFS-7240.002.patch, HDFS-12583-HDFS-7240.003.patch
>
>
> There are some places that swallow exceptions, which makes it hard for the 
> client to debug the failure. For example, if getting an xceiver client from 
> the xceiver client manager fails, the client only gets error info like this:
> {noformat}
> org.apache.hadoop.ozone.web.exceptions.OzoneException: Exception getting 
> XceiverClient.
>   at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>   at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>   at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
>   at 
> com.fasterxml.jackson.databind.introspect.AnnotatedConstructor.call(AnnotatedConstructor.java:119)
>   at 
> com.fasterxml.jackson.databind.deser.std.StdValueInstantiator.createUsingDefault(StdValueInstantiator.java:243)
> {noformat}
> The original exception stack is missing. We should print the error log as well.






[jira] [Updated] (HDFS-12590) datanode process running in dead state for over 24 hours

2017-10-04 Thread Dhaval Patel (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12590?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dhaval Patel updated HDFS-12590:

Affects Version/s: 2.7.3
  Environment: RHEL7
  Description: 
{code:java}
2017-10-02 14:04:44,862 INFO  datanode.DataNode (BPServiceActor.java:run(733)) 
- Block pool  (Datanode Uuid unassigned) service to 
master5.xx.local/10.10.10.10:8020 starting to offer service
2017-10-02 14:04:44,867 INFO  ipc.Server (Server.java:run(1045)) - IPC Server 
Responder: starting
2017-10-02 14:04:44,867 INFO  ipc.Server (Server.java:run(881)) - IPC Server 
listener on 8010: starting
2017-10-02 14:04:45,066 INFO  common.Storage 
(DataStorage.java:getParallelVolumeLoadThreadsNum(384)) - Using 2 threads to 
upgrade data directories (dfs.datanode.parallel.volumes.load.threads.num=2, 
dataDirs=2)
2017-10-03 14:06:10,525 ERROR common.Storage (Storage.java:tryLock(783)) - 
Failed to acquire lock on /data1/hadoop/hdfs/data/in_use.lock. If this storage 
directory is mounted via NFS, ensure that the appropriate nfs lock services 
are running.
java.io.IOException: Resource temporarily unavailable
at java.io.RandomAccessFile.writeBytes(Native Method)
at java.io.RandomAccessFile.write(RandomAccessFile.java:512)
at 
org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.tryLock(Storage.java:773)
at 
org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.lock(Storage.java:736)
at 
org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.analyzeStorage(Storage.java:549)
at 
org.apache.hadoop.hdfs.server.datanode.DataStorage.loadStorageDirectory(DataStorage.java:299)
at 
org.apache.hadoop.hdfs.server.datanode.DataStorage.loadDataStorage(DataStorage.java:438)
at 
org.apache.hadoop.hdfs.server.datanode.DataStorage.addStorageLocations(DataStorage.java:417)
at 
org.apache.hadoop.hdfs.server.datanode.DataStorage.recoverTransitionRead(DataStorage.java:595)
at 
org.apache.hadoop.hdfs.server.datanode.DataNode.initStorage(DataNode.java:1483)
at 
org.apache.hadoop.hdfs.server.datanode.DataNode.initBlockPool(DataNode.java:1448)
at 
org.apache.hadoop.hdfs.server.datanode.BPOfferService.verifyAndSetNamespaceInfo(BPOfferService.java:319)
at 
org.apache.hadoop.hdfs.server.datanode.BPServiceActor.connectToNNAndHandshake(BPServiceActor.java:267)
at 
org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:740)
at java.lang.Thread.run(Thread.java:745)
2017-10-03 14:06:10,542 WARN  common.Storage 
(DataStorage.java:loadDataStorage(449)) - Failed to add storage directory 
[DISK]file:/data1/hadoop/hdfs/data/
java.io.IOException: Resource temporarily unavailable
at java.io.RandomAccessFile.writeBytes(Native Method)
at java.io.RandomAccessFile.write(RandomAccessFile.java:512)
at 
org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.tryLock(Storage.java:773)
at 
org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.lock(Storage.java:736)
at 
org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.analyzeStorage(Storage.java:549)
at 
org.apache.hadoop.hdfs.server.datanode.DataStorage.loadStorageDirectory(DataStorage.java:299)
at 
org.apache.hadoop.hdfs.server.datanode.DataStorage.loadDataStorage(DataStorage.java:438)
at 
org.apache.hadoop.hdfs.server.datanode.DataStorage.addStorageLocations(DataStorage.java:417)
at 
org.apache.hadoop.hdfs.server.datanode.DataStorage.recoverTransitionRead(DataStorage.java:595)
at 
org.apache.hadoop.hdfs.server.datanode.DataNode.initStorage(DataNode.java:1483)
at 
org.apache.hadoop.hdfs.server.datanode.DataNode.initBlockPool(DataNode.java:1448)
at 
org.apache.hadoop.hdfs.server.datanode.BPOfferService.verifyAndSetNamespaceInfo(BPOfferService.java:319)
at 
org.apache.hadoop.hdfs.server.datanode.BPServiceActor.connectToNNAndHandshake(BPServiceActor.java:267)
at 
org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:740)
at java.lang.Thread.run(Thread.java:745)
2017-10-03 18:03:16,928 ERROR datanode.DataNode (LogAdapter.java:error(71)) - 
RECEIVED SIGNAL 15: SIGTERM
2017-10-03 18:03:16,934 INFO  datanode.DataNode (LogAdapter.java:info(47)) - 
SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down DataNode at dc-slave29.xx.local/10.10.10.10
************************************************************/
2017-10-03 18:03:23,093 INFO  datanode.DataNode (LogAdapter.java:info(47)) - 
STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting DataNode
STARTUP_MSG:   user = hdfs
STARTUP_MSG:   host = xx-slave29.xx.local/10.10.10.10
STARTUP_MSG:   args = []
STARTUP_MSG:   version = 2.7.3.2.5.3.0-37
 
{code}
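
For context, a simplified illustration of the lock handshake the stack trace 
above goes through ({{Storage.tryLock}} writes the JVM name into 
{{in_use.lock}}); this is a sketch, not the actual Hadoop source:

{code:java}
import java.io.File;
import java.io.RandomAccessFile;
import java.nio.channels.FileLock;

// Simplified illustration of the in_use.lock handshake.
FileLock tryLockDir(File storageDir) throws Exception {
  File lockFile = new File(storageDir, "in_use.lock");
  RandomAccessFile file = new RandomAccessFile(lockFile, "rws");
  FileLock lock = file.getChannel().tryLock();
  if (lock == null) {
    file.close();
    return null; // already locked by another process
  }
  // On an NFS mount without nfslock services, the write below can fail
  // with "Resource temporarily unavailable", matching the log above.
  file.write("jvmName".getBytes("UTF-8"));
  return lock;
}
{code}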


  

[jira] [Created] (HDFS-12590) datanode process running in dead state for over 24 hours

2017-10-04 Thread Dhaval Patel (JIRA)
Dhaval Patel created HDFS-12590:
---

 Summary: datanode process running in dead state for over 24 hours
 Key: HDFS-12590
 URL: https://issues.apache.org/jira/browse/HDFS-12590
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Dhaval Patel



{code:java}
2017-10-02 14:04:44,862 INFO  datanode.DataNode (BPServiceActor.java:run(733)) 
- Block pool  (Datanode Uuid unassigned) service to 
master5.xx.local/10.10.10.10:8020 starting to offer service
2017-10-02 14:04:44,867 INFO  ipc.Server (Server.java:run(1045)) - IPC Server 
Responder: starting
2017-10-02 14:04:44,867 INFO  ipc.Server (Server.java:run(881)) - IPC Server 
listener on 8010: starting
2017-10-02 14:04:45,066 INFO  common.Storage 
(DataStorage.java:getParallelVolumeLoadThreadsNum(384)) - Using 2 threads to 
upgrade data directories (dfs.datanode.parallel.volumes.load.threads.num=2, 
dataDirs=2)
2017-10-03 14:06:10,525 ERROR common.Storage (Storage.java:tryLock(783)) - 
Failed to acquire lock on /data1/hadoop/hdfs/data/in_use.lock. If this storage 
directory is mounted via NFS, ensure that the appropriate nfs lock services 
are running.
java.io.IOException: Resource temporarily unavailable
at java.io.RandomAccessFile.writeBytes(Native Method)
at java.io.RandomAccessFile.write(RandomAccessFile.java:512)
at 
org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.tryLock(Storage.java:773)
at 
org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.lock(Storage.java:736)
at 
org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.analyzeStorage(Storage.java:549)
at 
org.apache.hadoop.hdfs.server.datanode.DataStorage.loadStorageDirectory(DataStorage.java:299)
at 
org.apache.hadoop.hdfs.server.datanode.DataStorage.loadDataStorage(DataStorage.java:438)
at 
org.apache.hadoop.hdfs.server.datanode.DataStorage.addStorageLocations(DataStorage.java:417)
at 
org.apache.hadoop.hdfs.server.datanode.DataStorage.recoverTransitionRead(DataStorage.java:595)
at 
org.apache.hadoop.hdfs.server.datanode.DataNode.initStorage(DataNode.java:1483)
at 
org.apache.hadoop.hdfs.server.datanode.DataNode.initBlockPool(DataNode.java:1448)
at 
org.apache.hadoop.hdfs.server.datanode.BPOfferService.verifyAndSetNamespaceInfo(BPOfferService.java:319)
at 
org.apache.hadoop.hdfs.server.datanode.BPServiceActor.connectToNNAndHandshake(BPServiceActor.java:267)
at 
org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:740)
at java.lang.Thread.run(Thread.java:745)
2017-10-03 14:06:10,542 WARN  common.Storage 
(DataStorage.java:loadDataStorage(449)) - Failed to add storage directory 
[DISK]file:/data1/hadoop/hdfs/data/
java.io.IOException: Resource temporarily unavailable
at java.io.RandomAccessFile.writeBytes(Native Method)
at java.io.RandomAccessFile.write(RandomAccessFile.java:512)
at 
org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.tryLock(Storage.java:773)
at 
org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.lock(Storage.java:736)
at 
org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.analyzeStorage(Storage.java:549)
at 
org.apache.hadoop.hdfs.server.datanode.DataStorage.loadStorageDirectory(DataStorage.java:299)
at 
org.apache.hadoop.hdfs.server.datanode.DataStorage.loadDataStorage(DataStorage.java:438)
at 
org.apache.hadoop.hdfs.server.datanode.DataStorage.addStorageLocations(DataStorage.java:417)
at 
org.apache.hadoop.hdfs.server.datanode.DataStorage.recoverTransitionRead(DataStorage.java:595)
at 
org.apache.hadoop.hdfs.server.datanode.DataNode.initStorage(DataNode.java:1483)
at 
org.apache.hadoop.hdfs.server.datanode.DataNode.initBlockPool(DataNode.java:1448)
at 
org.apache.hadoop.hdfs.server.datanode.BPOfferService.verifyAndSetNamespaceInfo(BPOfferService.java:319)
at 
org.apache.hadoop.hdfs.server.datanode.BPServiceActor.connectToNNAndHandshake(BPServiceActor.java:267)
at 
org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:740)
at java.lang.Thread.run(Thread.java:745)
2017-10-03 18:03:16,928 ERROR datanode.DataNode (LogAdapter.java:error(71)) - 
RECEIVED SIGNAL 15: SIGTERM
2017-10-03 18:03:16,934 INFO  datanode.DataNode (LogAdapter.java:info(47)) - 
SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down DataNode at dc-slave29.xx.local/10.10.10.10
************************************************************/
2017-10-03 18:03:23,093 INFO  datanode.DataNode (LogAdapter.java:info(47)) - 
STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting DataNode
STARTUP_MSG:   user = hdfs
STARTUP_MSG:   host = xx-slave29.xx.local/10.10.10.10
STARTUP_MSG:   args = []
STARTUP_MSG:   version = 2.7.3.2.5.3.0-37
{code}

[jira] [Commented] (HDFS-12537) Ozone: Reduce key creation overhead in Corona

2017-10-04 Thread Nandakumar (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12537?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16191549#comment-16191549
 ] 

Nandakumar commented on HDFS-12537:
---

Thanks [~ljain] for updating the patch.

In *Corona.java*

If {{threadThroughput}} is just for holding each thread's throughput, you can 
use a simple list and make it thread safe with 
{{Collections.synchronizedList()}} instead of using a {{BlockingQueue}}.
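
For instance, a minimal sketch of that suggestion:

{code:java}
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

// Sketch: enough when threads only append their own throughput numbers.
List<Long> threadThroughput =
    Collections.synchronizedList(new ArrayList<Long>());
{code}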

Line 325:
This 
{code}
jsonDir = cmdLine.hasOption(JSON_WRITE_DIRECTORY) ?
cmdLine.getOptionValue(JSON_WRITE_DIRECTORY) : null;
{code}
can be replaced with
{code}
jsonDir = cmdLine.getOptionValue(JSON_WRITE_DIRECTORY);
{code}
{{CommandLine#getOptionValue}} returns the value of the argument if the 
option is set and has an argument, otherwise null.

Line 412: There is no need to assign {{keyValue}} to the local variable 
{{value}}; {{keyValue}} can be used directly in {{os.write(keyValue)}}.

Line 428: An incomplete value is added for validation, which will cause the 
validation of writes to fail. Since the value can be huge, we could use a 
checksum to optimize data validation; that can be done in a follow-up JIRA. 
For now, please add the complete value for validation.
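
As a rough sketch of the checksum idea for that follow-up 
({{java.util.zip.CRC32}} is used purely as an example; the project may prefer 
a different digest):

{code:java}
import java.util.zip.CRC32;

// Sketch: compare checksums instead of keeping the whole value around.
static long checksumOf(byte[] keyValue) {
  CRC32 crc = new CRC32();
  crc.update(keyValue, 0, keyValue.length);
  return crc.getValue();
}
{code}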



> Ozone: Reduce key creation overhead in Corona
> -
>
> Key: HDFS-12537
> URL: https://issues.apache.org/jira/browse/HDFS-12537
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Lokesh Jain
>Assignee: Lokesh Jain
> Attachments: HDFS-12537-HDFS-7240.001.patch, 
> HDFS-12537-HDFS-7240.002.patch, HDFS-12537-HDFS-7240.003.patch
>
>
> Currently Corona creates random key values for each key. This creates a lot 
> of overhead. An option should be provided to use a single key value.






[jira] [Comment Edited] (HDFS-12387) Ozone: Support Ratis as a first class replication mechanism

2017-10-04 Thread Anu Engineer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12387?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16191448#comment-16191448
 ] 

Anu Engineer edited comment on HDFS-12387 at 10/4/17 4:38 PM:
--

{{hadoop.ozone.TestOzoneConfigurationFields}} has been failing for a while. 
It is missing two keys, fs.ozfs.impl and ozone.client.cache. [~msingh] is 
fixing it via another patch.

In {{hadoop.ozone.ksm.TestMultipleContainerReadWrite}} there is a count 
mismatch in the assertion; I am not sure it is related to this patch, but I 
will investigate.

Also in {{hadoop.ozone.ksm.TestMultipleContainerReadWrite}}, the storage 
counters have a mismatch, but the data is written, read back, and verified to 
be correct. So I can file a JIRA to track that issue.


was (Author: anu):
{{hadoop.ozone.TestOzoneConfigurationFields}} Has been failing for a while. It 
is missing two keys, fs.ozfs.impl and ozone.client.cache. [~msingh] is fixing 
it via another patch.

{{hadoop.ozone.ksm.TestMultipleContainerReadWrite}} there is a count mismatch 
in the assertion, I am not sure it is related to this patch, but I will 
investigate.

> Ozone: Support Ratis as a first class replication mechanism
> ---
>
> Key: HDFS-12387
> URL: https://issues.apache.org/jira/browse/HDFS-12387
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Anu Engineer
>Assignee: Anu Engineer
>Priority: Critical
>  Labels: ozoneMerge
> Attachments: HDFS-12387-HDFS-7240.001.patch, 
> HDFS-12387-HDFS-7240.002.patch, HDFS-12387-HDFS-7240.003.patch, 
> HDFS-12387-HDFS-7240.004.patch, HDFS-12387-HDFS-7240.005.patch, 
> HDFS-12387-HDFS-7240.006.patch, HDFS-12387-HDFS-7240.007.patch
>
>
> Ozone container layer supports pluggable replication policies. This JIRA 
> brings Apache Ratis based replication to Ozone.  Apache Ratis is a java 
> implementation of Raft protocol.






[jira] [Updated] (HDFS-12578) TestDeadDatanode#testNonDFSUsedONDeadNodeReReg failing in branch-2.7

2017-10-04 Thread Ajay Kumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12578?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajay Kumar updated HDFS-12578:
--
Status: Open  (was: Patch Available)

> TestDeadDatanode#testNonDFSUsedONDeadNodeReReg failing in branch-2.7
> 
>
> Key: HDFS-12578
> URL: https://issues.apache.org/jira/browse/HDFS-12578
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Reporter: Xiao Chen
>Assignee: Ajay Kumar
>Priority: Blocker
> Attachments: HDFS-12578-branch-2.7.001.patch
>
>
> It appears {{TestDeadDatanode#testNonDFSUsedONDeadNodeReReg}} is consistently 
> failing in branch-2.7. We should investigate and fix it.






[jira] [Updated] (HDFS-12578) TestDeadDatanode#testNonDFSUsedONDeadNodeReReg failing in branch-2.7

2017-10-04 Thread Ajay Kumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12578?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajay Kumar updated HDFS-12578:
--
Status: Patch Available  (was: Open)

> TestDeadDatanode#testNonDFSUsedONDeadNodeReReg failing in branch-2.7
> 
>
> Key: HDFS-12578
> URL: https://issues.apache.org/jira/browse/HDFS-12578
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Reporter: Xiao Chen
>Assignee: Ajay Kumar
>Priority: Blocker
> Attachments: HDFS-12578-branch-2.7.001.patch
>
>
> It appears {{TestDeadDatanode#testNonDFSUsedONDeadNodeReReg}} is consistently 
> failing in branch-2.7. We should investigate and fix it.






[jira] [Commented] (HDFS-12387) Ozone: Support Ratis as a first class replication mechanism

2017-10-04 Thread Anu Engineer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12387?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16191448#comment-16191448
 ] 

Anu Engineer commented on HDFS-12387:
-

{{hadoop.ozone.TestOzoneConfigurationFields}} has been failing for a while. 
It is missing two keys, fs.ozfs.impl and ozone.client.cache. [~msingh] is 
fixing it via another patch.

In {{hadoop.ozone.ksm.TestMultipleContainerReadWrite}} there is a count 
mismatch in the assertion; I am not sure it is related to this patch, but I 
will investigate.

> Ozone: Support Ratis as a first class replication mechanism
> ---
>
> Key: HDFS-12387
> URL: https://issues.apache.org/jira/browse/HDFS-12387
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Anu Engineer
>Assignee: Anu Engineer
>Priority: Critical
>  Labels: ozoneMerge
> Attachments: HDFS-12387-HDFS-7240.001.patch, 
> HDFS-12387-HDFS-7240.002.patch, HDFS-12387-HDFS-7240.003.patch, 
> HDFS-12387-HDFS-7240.004.patch, HDFS-12387-HDFS-7240.005.patch, 
> HDFS-12387-HDFS-7240.006.patch, HDFS-12387-HDFS-7240.007.patch
>
>
> Ozone container layer supports pluggable replication policies. This JIRA 
> brings Apache Ratis based replication to Ozone.  Apache Ratis is a java 
> implementation of Raft protocol.






[jira] [Commented] (HDFS-12387) Ozone: Support Ratis as a first class replication mechanism

2017-10-04 Thread Xiaoyu Yao (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12387?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16191436#comment-16191436
 ] 

Xiaoyu Yao commented on HDFS-12387:
---

[~anu], thanks for the update. These two unit test failures are related and 
reproduced locally on my machine. Can you take a look?

{code}
hadoop.ozone.ksm.TestMultipleContainerReadWrite
hadoop.ozone.TestOzoneConfigurationFields
{code}

> Ozone: Support Ratis as a first class replication mechanism
> ---
>
> Key: HDFS-12387
> URL: https://issues.apache.org/jira/browse/HDFS-12387
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Anu Engineer
>Assignee: Anu Engineer
>Priority: Critical
>  Labels: ozoneMerge
> Attachments: HDFS-12387-HDFS-7240.001.patch, 
> HDFS-12387-HDFS-7240.002.patch, HDFS-12387-HDFS-7240.003.patch, 
> HDFS-12387-HDFS-7240.004.patch, HDFS-12387-HDFS-7240.005.patch, 
> HDFS-12387-HDFS-7240.006.patch, HDFS-12387-HDFS-7240.007.patch
>
>
> Ozone container layer supports pluggable replication policies. This JIRA 
> brings Apache Ratis based replication to Ozone.  Apache Ratis is a java 
> implementation of Raft protocol.






[jira] [Commented] (HDFS-12557) Ozone: Improve the formatting of the RPC stats on web UI

2017-10-04 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12557?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16191404#comment-16191404
 ] 

Hadoop QA commented on HDFS-12557:
--

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
51s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} HDFS-7240 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 33m 
40s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
52m 19s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
21m 10s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  1m 
10s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 76m 20s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:71bbb86 |
| JIRA Issue | HDFS-12557 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12890360/HDFS-12557-HDFS-7240.patch
 |
| Optional Tests |  asflicense  shadedclient  |
| uname | Linux 1a4c7e933819 3.13.0-129-generic #178-Ubuntu SMP Fri Aug 11 
12:48:20 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | HDFS-7240 / c0387ab |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/21521/console |
| Powered by | Apache Yetus 0.6.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Ozone: Improve the formatting of the RPC stats on web UI
> 
>
> Key: HDFS-12557
> URL: https://issues.apache.org/jira/browse/HDFS-12557
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: HDFS-7240
>Reporter: Elek, Marton
>Assignee: Elek, Marton
> Attachments: after.png, before.png, HDFS-12557-HDFS-7240.patch
>
>
> During HDFS-12477 [~cheersyang] suggested improving the formatting of the 
> rpcmetrics in the KSM/SCM web ui:
> https://issues.apache.org/jira/browse/HDFS-12477?focusedCommentId=16177816&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16177816
> {quote}
> One more thing, it seems we have too much accuracy here
> Metric name   Number of ops   Average time
> RpcQueueTime  300 0.167019333
> RpcProcessingTime 300 6.5403023
> maybe 0.167 and 6.540 is enough? And what is the unit of the average time, 
> can we add the unit in the header column? {quote}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12588) Use GenericOptionsParser for scm and ksm daemon

2017-10-04 Thread Nandakumar (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12588?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16191400#comment-16191400
 ] 

Nandakumar commented on HDFS-12588:
---

Thanks [~elek] for reporting this and working on it.
The patch does not apply against the latest HDFS-7240 branch. Could you please 
rebase and update the patch?

> Use GenericOptionsParser for scm and ksm daemon
> ---
>
> Key: HDFS-12588
> URL: https://issues.apache.org/jira/browse/HDFS-12588
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: HDFS-7240
>Reporter: Elek, Marton
>Assignee: Elek, Marton
> Attachments: HDFS-12588-HDFS-7240.001.patch
>
>
> Most of the Hadoop commands use GenericOptionsParser to support some common 
> CLI arguments (such as -conf, -D, or -libjars to define the configuration, 
> modify the configuration, or modify the classpath).
> I suggest supporting the same common options in the scm and ksm daemons as 
> well, because:
> 1. It allows the existing cluster management tools/scripts to be used, as the 
> daemons can be configured in the same way as the namenode and datanode.
> 2. It follows the conventions of Hadoop common.
> 3. It is easier to develop from the IDE (I start the 
> ksm/scm/datanode/namenode from IntelliJ but need to add the configuration to 
> the classpath; with -conf I would be able to use an external configuration).
> I found one problem during the implementation. Until now we used the `hdfs 
> scm` command both for the daemon and for the scm command line client: with no 
> parameters the daemon is started, with parameters the CLI is started. The 
> help listed only the daemon.
> The -conf option (GenericOptionsParser) can be used only if we separate the 
> scm and scmcli commands. In any case, it is cleaner and more visible to have 
> separate `hdfs scm` and `hdfs scmcli` commands.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12543) Ozone : allow create key without specifying size

2017-10-04 Thread Nandakumar (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12543?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16191348#comment-16191348
 ] 

Nandakumar commented on HDFS-12543:
---

Thanks [~vagarychen] for working on this; the patch looks pretty good to me.

A few comments:

bq. It might be better to remove size completely from {{KsmKeyArgs}}. I would 
like to leave it as a separate JIRA after this is done.

If the plan is to remove size from {{KsmKeyArgs}}, can we also remove the 
dependency on size in {{KeyManagerImpl#openKey}}? I understand the 
optimization done with the key size, but if we are going to remove it later, 
why depend on it now? I don't have a cleaner approach to suggest for 
optimizing block allocation for big keys.

*KeyManagerImpl.java*

{{commitKey}}: we have to update the actual size of the key during commit.

Line 201: Even for a 0-length key we initially set the size to 
{{scmBlockSize}}, which is not necessary; we can set the size to 0.

Line 137: The log message should be changed; we are not actually committing 
the key here.

*KeySpaceManager.java*

Line 497: {{metrics.incNumBlockAllocates()}} doesn't give us the actual number 
of blocks allocated, since we also allocate blocks as part of {{openKey}}. Can 
we change this metric's name from {{numBlockAllocate}} to 
{{numAllocateBlockCalls}}?
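
To make the two size-related points concrete, here is a minimal, 
self-contained Java sketch of the suggested behaviour. The classes and method 
signatures below are illustrative stand-ins, not the real KSM classes from the 
patch.

{code}
// Sketch only: "no scmBlockSize pre-allocation for 0-length keys" and
// "record the actual size at commit time". Types are simplified stand-ins.
class KeyInfoSketch {
  private long dataSize;
  void setDataSize(long size) { this.dataSize = size; }
  long getDataSize() { return dataSize; }
}

class KeyManagerSketch {
  // openKey: for a 0-length key, do not pre-set the size to scmBlockSize.
  KeyInfoSketch openKey(long requestedSize, long scmBlockSize) {
    KeyInfoSketch info = new KeyInfoSketch();
    info.setDataSize(requestedSize == 0 ? 0 : scmBlockSize);
    return info;
  }

  // commitKey: update the key with the size the client actually wrote.
  void commitKey(KeyInfoSketch info, long actualBytesWritten) {
    info.setDataSize(actualBytesWritten);
  }
}
{code}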


> Ozone : allow create key without specifying size
> 
>
> Key: HDFS-12543
> URL: https://issues.apache.org/jira/browse/HDFS-12543
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Chen Liang
>Assignee: Chen Liang
>  Labels: ozoneMerge
> Attachments: HDFS-12543-HDFS-7240.001.patch, 
> HDFS-12543-HDFS-7240.002.patch, HDFS-12543-HDFS-7240.003.patch, 
> HDFS-12543-HDFS-7240.004.patch, HDFS-12543-HDFS-7240.005.patch, 
> HDFS-12543-HDFS-7240.006.patch, HDFS-12543-HDFS-7240.007.patch, 
> HDFS-12543-HDFS-7240.008.patch, HDFS-12543-HDFS-7240.009.patch
>
>
> Currently when creating a key, it is required to specify the total size of 
> the key. This makes it inconvenient for the case where a key is created and 
> data keeps coming and being appended. This JIRA removes the requirement of 
> specifying the size at key creation and allows appending to the key 
> indefinitely.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12557) Ozone: Improve the formatting of the RPC stats on web UI

2017-10-04 Thread Elek, Marton (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12557?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elek, Marton updated HDFS-12557:

Status: Patch Available  (was: Open)

> Ozone: Improve the formatting of the RPC stats on web UI
> 
>
> Key: HDFS-12557
> URL: https://issues.apache.org/jira/browse/HDFS-12557
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: HDFS-7240
>Reporter: Elek, Marton
>Assignee: Elek, Marton
> Attachments: after.png, before.png, HDFS-12557-HDFS-7240.patch
>
>
> During HDFS-12477 [~cheersyang] suggested improving the formatting of the 
> rpcmetrics in the KSM/SCM web ui:
> https://issues.apache.org/jira/browse/HDFS-12477?focusedCommentId=16177816&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16177816
> {quote}
> One more thing, it seems we have too much accuracy here
> Metric name   Number of ops   Average time
> RpcQueueTime  300 0.167019333
> RpcProcessingTime 300 6.5403023
> maybe 0.167 and 6.540 is enough? And what is the unit of the average time, 
> can we add the unit in the header column? {quote}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12557) Ozone: Improve the formatting of the RPC stats on web UI

2017-10-04 Thread Elek, Marton (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12557?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elek, Marton updated HDFS-12557:

Attachment: HDFS-12557-HDFS-7240.patch
before.png
after.png

Patch has been attached:

1. Only two fraction digits are displayed (5.43 instead of 5.23423).
2. "ms" is added to the table header (as the average time is measured in ms).
3. The "number of ops" value is displayed with a thousands separator 
(according to the default locale).

To test: do a full build, start an ozone cluster and check the scm or ksm UI 
(the RPC metrics logic is shared).

I have also attached screenshots of the changed area.
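
For reference, a small Java sketch of the three formatting rules (the actual 
UI code is JavaScript; this snippet only illustrates the intended output and 
is not part of the patch):

{code}
import java.text.NumberFormat;

// Illustration of the formatting rules; not the patch itself.
public class RpcStatsFormatSketch {
  public static void main(String[] args) {
    double avgTimeMs = 6.5403023;
    long numOps = 1234567;
    // 1. Two fraction digits, 2. labelled with the ms unit.
    System.out.println(String.format("%.2f ms", avgTimeMs)); // "6.54 ms"
    // 3. Thousands separator according to the default locale.
    System.out.println(NumberFormat.getIntegerInstance().format(numOps));
  }
}
{code}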

> Ozone: Improve the formatting of the RPC stats on web UI
> 
>
> Key: HDFS-12557
> URL: https://issues.apache.org/jira/browse/HDFS-12557
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: HDFS-7240
>Reporter: Elek, Marton
>Assignee: Elek, Marton
> Attachments: after.png, before.png, HDFS-12557-HDFS-7240.patch
>
>
> During HDFS-12477 [~cheersyang] suggested improving the formatting of the 
> rpcmetrics in the KSM/SCM web ui:
> https://issues.apache.org/jira/browse/HDFS-12477?focusedCommentId=16177816&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16177816
> {quote}
> One more thing, it seems we have too much accuracy here
> Metric name   Number of ops   Average time
> RpcQueueTime  300 0.167019333
> RpcProcessingTime 300 6.5403023
> maybe 0.167 and 6.540 is enough? And what is the unit of the average time, 
> can we add the unit in the header column? {quote}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12469) Ozone: Create docker-compose definition to easily test real clusters

2017-10-04 Thread Elek, Marton (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12469?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16191198#comment-16191198
 ] 

Elek, Marton commented on HDFS-12469:
-

[~anu] Let me know what the problem with scaling was, and I will check it. I 
tried it out on the latest ozone with `docker-compose scale datanode=3` and it 
worked well. All the new datanodes appeared in the namenode UI.

> Ozone: Create docker-compose definition to easily test real clusters
> 
>
> Key: HDFS-12469
> URL: https://issues.apache.org/jira/browse/HDFS-12469
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>  Labels: OzonePostMerge
> Fix For: HDFS-7240
>
> Attachments: HDFS-12469-HDFS-7240.001.patch, 
> HDFS-12469-HDFS-7240.002.patch, HDFS-12469-HDFS-7240.WIP1.patch, 
> HDFS-12469-HDFS-7240.WIP2.patch
>
>
> The goal here is to create a docker-compose definition for an ozone 
> pseudo-cluster with docker (one component per container). 
> Ideally, after a full build, the ozone cluster could be started easily with 
> a simple docker-compose up command.  



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12587) Use Parameterized tests in TestBlockInfoStriped and TestLowRedundancyBlockQueues to apply multiple EC policies

2017-10-04 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12587?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16191189#comment-16191189
 ] 

Hadoop QA commented on HDFS-12587:
--

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
22s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 
49s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
52s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
36s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
1s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 48s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
58s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
43s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
37s{color} | {color:green} hadoop-hdfs-project/hadoop-hdfs: The patch generated 
0 new + 6 unchanged - 2 fixed = 6 total (was 8) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m  2s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
41s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}112m 
16s{color} | {color:green} hadoop-hdfs in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
22s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}158m 27s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:71bbb86 |
| JIRA Issue | HDFS-12587 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12890327/HDFS-12587.1.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 0ad5c5a39bbe 3.13.0-123-generic #172-Ubuntu SMP Mon Jun 26 
18:04:35 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / acf5b88 |
| Default Java | 1.8.0_144 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/21519/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/21519/console |
| Powered by | Apache Yetus 0.6.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Use Parameterized tests in TestBlockInfoStriped and 
> TestLowRedundancyBlockQueues to apply multiple EC policies
> 

[jira] [Commented] (HDFS-12588) Use GenericOptionsParser for scm and ksm daemon

2017-10-04 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12588?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16191185#comment-16191185
 ] 

Hadoop QA commented on HDFS-12588:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
16s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:blue}0{color} | {color:blue} shelldocs {color} | {color:blue}  0m  
6s{color} | {color:blue} Shelldocs was not available. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} HDFS-7240 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
37s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
58s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
43s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
4s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 26s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
1s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
0s{color} | {color:green} HDFS-7240 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 39s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch 
generated 3 new + 1 unchanged - 0 fixed = 4 total (was 1) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} shellcheck {color} | {color:green}  0m 
23s{color} | {color:green} There were no new shellcheck issues. {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 39s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
6s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
56s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 91m  4s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
24s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}143m 16s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting |
|   | hadoop.ozone.TestOzoneConfigurationFields |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:71bbb86 |
| JIRA Issue | HDFS-12588 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12890329/HDFS-12588-HDFS-7240.001.patch
 |
| Optional Tests |  asflicense  mvnsite  unit  shellcheck  shelldocs  compile  
javac  javadoc  mvninstall  shadedclient  findbugs  checkstyle  |
| uname | Linux a4a089964fee 3.13.0-119-generic #166-Ubuntu SMP Wed May 3 
12:18:55 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | HDFS-7240 / c0387ab |
| 

[jira] [Updated] (HDFS-12588) Use GenericOptionsParser for scm and ksm daemon

2017-10-04 Thread Elek, Marton (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12588?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elek, Marton updated HDFS-12588:

Attachment: HDFS-12588-HDFS-7240.001.patch

Test scenario:

1. Do a full dist build.
2. Create ozone-site.xml and try to start the ozone cluster as before. It 
should work.
3. Move ozone-site.xml from /etc/hadoop to /tmp and try to start the daemon 
with an external configuration: `hdfs scm -conf /tmp/ozone-site.xml`
4. Try to run `hdfs scm -help` or `hdfs ksm -help`. The generic options should 
be displayed.

> Use GenericOptionsParser for scm and ksm daemon
> ---
>
> Key: HDFS-12588
> URL: https://issues.apache.org/jira/browse/HDFS-12588
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: HDFS-7240
>Reporter: Elek, Marton
>Assignee: Elek, Marton
> Attachments: HDFS-12588-HDFS-7240.001.patch
>
>
> Most of the Hadoop commands use GenericOptionsParser to support some common 
> CLI arguments (such as -conf, -D, or -libjars to define the configuration, 
> modify the configuration, or modify the classpath).
> I suggest supporting the same common options in the scm and ksm daemons as 
> well, because:
> 1. It allows the existing cluster management tools/scripts to be used, as the 
> daemons can be configured in the same way as the namenode and datanode.
> 2. It follows the conventions of Hadoop common.
> 3. It is easier to develop from the IDE (I start the 
> ksm/scm/datanode/namenode from IntelliJ but need to add the configuration to 
> the classpath; with -conf I would be able to use an external configuration).
> I found one problem during the implementation. Until now we used the `hdfs 
> scm` command both for the daemon and for the scm command line client: with no 
> parameters the daemon is started, with parameters the CLI is started. The 
> help listed only the daemon.
> The -conf option (GenericOptionsParser) can be used only if we separate the 
> scm and scmcli commands. In any case, it is cleaner and more visible to have 
> separate `hdfs scm` and `hdfs scmcli` commands.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12588) Use GenericOptionsParser for scm and ksm daemon

2017-10-04 Thread Elek, Marton (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12588?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elek, Marton updated HDFS-12588:

Status: Patch Available  (was: Open)

> Use GenericOptionsParser for scm and ksm daemon
> ---
>
> Key: HDFS-12588
> URL: https://issues.apache.org/jira/browse/HDFS-12588
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: HDFS-7240
>Reporter: Elek, Marton
>Assignee: Elek, Marton
> Attachments: HDFS-12588-HDFS-7240.001.patch
>
>
> Most of the Hadoop commands use GenericOptionsParser to support some common 
> CLI arguments (such as -conf, -D, or -libjars to define the configuration, 
> modify the configuration, or modify the classpath).
> I suggest supporting the same common options in the scm and ksm daemons as 
> well, because:
> 1. It allows the existing cluster management tools/scripts to be used, as the 
> daemons can be configured in the same way as the namenode and datanode.
> 2. It follows the conventions of Hadoop common.
> 3. It is easier to develop from the IDE (I start the 
> ksm/scm/datanode/namenode from IntelliJ but need to add the configuration to 
> the classpath; with -conf I would be able to use an external configuration).
> I found one problem during the implementation. Until now we used the `hdfs 
> scm` command both for the daemon and for the scm command line client: with no 
> parameters the daemon is started, with parameters the CLI is started. The 
> help listed only the daemon.
> The -conf option (GenericOptionsParser) can be used only if we separate the 
> scm and scmcli commands. In any case, it is cleaner and more visible to have 
> separate `hdfs scm` and `hdfs scmcli` commands.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12587) Use Parameterized tests in TestBlockInfoStriped and TestLowRedundancyBlockQueues to apply multiple EC policies

2017-10-04 Thread Takanobu Asanuma (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12587?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Takanobu Asanuma updated HDFS-12587:

Attachment: HDFS-12587.1.patch

Uploaded the 1st patch.

> Use Parameterized tests in TestBlockInfoStriped and 
> TestLowRedundancyBlockQueues to apply multiple EC policies
> --
>
> Key: HDFS-12587
> URL: https://issues.apache.org/jira/browse/HDFS-12587
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: erasure-coding, test
>Reporter: Takanobu Asanuma
>Assignee: Takanobu Asanuma
>  Labels: hdfs-ec-3.0-nice-to-have
> Attachments: HDFS-12587.1.patch
>
>
> This is a subtask of HDFS-9962. Since {{TestBlockInfoStriped}} and 
> {{TestLowRedundancyBlockQueues}} don't use a minicluster, testing all EC 
> policies with Parameterized tests each time does not have a big impact.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12587) Use Parameterized tests in TestBlockInfoStriped and TestLowRedundancyBlockQueues to apply multiple EC policies

2017-10-04 Thread Takanobu Asanuma (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12587?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Takanobu Asanuma updated HDFS-12587:

Status: Patch Available  (was: Open)

> Use Parameterized tests in TestBlockInfoStriped and 
> TestLowRedundancyBlockQueues to apply multiple EC policies
> --
>
> Key: HDFS-12587
> URL: https://issues.apache.org/jira/browse/HDFS-12587
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: erasure-coding, test
>Reporter: Takanobu Asanuma
>Assignee: Takanobu Asanuma
>  Labels: hdfs-ec-3.0-nice-to-have
> Attachments: HDFS-12587.1.patch
>
>
> This is a subtask of HDFS-9962. Since {{TestBlockInfoStriped}} and 
> {{TestLowRedundancyBlockQueues}} don't use a minicluster, testing all EC 
> policies with Parameterized tests each time does not have a big impact.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12589) [DISCUSS] Provided Storage BlockAlias Refactoring

2017-10-04 Thread Ewan Higgs (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12589?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16191054#comment-16191054
 ] 

Ewan Higgs commented on HDFS-12589:
---

Some discussion happened off JIRA, but we'd much prefer these discussions to 
be in the open and tracked:

From [~ehiggs]:
{quote}
Regarding the BlockAlias, we suggested that we get rid of it, since the 
interface is insufficient to work with and it's not clear how it should be 
used to, e.g., dispatch writes to the correct ProvidedVolumeImpl. We proposed 
replacing this by having new styles of retrieving data use their own URI 
scheme (e.g. myformat://). Also, if there are other requirements, an extra 
byte[] in the FileRegion could potentially hold extra information that a 
custom ProvidedVolumeImpl could use.
{quote}

From [~chris.douglas]:
{quote}
There’s a long tradition of stuffing metadata into URIs, so I won’t argue that 
this restricts possible implementations. As we discussed during the call, if 
there are a set of possible providers, an alias doesn’t contain enough 
information to dispatch among them. Since Hadoop already has a mechanism, 
kludgy as it may be, for looking up different FileSystems based on Path/URIs, 
we could use the existing scheme/authority/principal cache instead of layering 
another layer of indirection on top of it.
 
I’ll outline my reservations. The existing object store “FileSystem” 
implementations already manage some impedance mismatches, translating 
hierarchical operations into those stores. Moreover, the layers people are 
adding to HDFS in HBase, Hive/LLAP, etc. are working around the namesystem, 
mostly treating HDFS as if it were a (not particularly good) object store. If 
we make everything into a FileRegion, we’re baking in the FileSystem coupling 
between the HDFS block layer and the provided store. We’re baking in its 
versatility- which is likely sufficient- but also its disadvantages.
 
For example, there are no good batch APIs to FileSystem. There are no 
reasonable async APIs, and the ones being built have no consistency guarantees. 
We’ve been trying to introduce an API providing the most basic consistency 
guarantee, and that’s taken a year of negotiation and prototyping.
 
Thomas/Ewan, you guys are more familiar with the limitations of S3Guard than I 
am. If those won’t materially affect the implementation of future provided 
stores (or those invariants are useful to their implementation) then I won’t 
insist on an abstraction that only gets in the way of implementation. -C
{quote}

> [DISCUSS] Provided Storage BlockAlias Refactoring
> -
>
> Key: HDFS-12589
> URL: https://issues.apache.org/jira/browse/HDFS-12589
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs
>Reporter: Ewan Higgs
>Priority: Minor
>
> A BlockAlias is an interface used by the Datanode to determine where to 
> retrieve data from. It currently has a single implementation, {{FileRegion}}, 
> which contains the Block, BlockPoolID, and Provided URL for the FileRegion 
> (i.e. block), plus the length and offset of the FileRegion in the remote 
> storage.
> The BlockAlias currently has a single method, {{getBlock}}. This is not 
> particularly useful, since we can't ask it meaningful questions like "how do 
> we retrieve the data from the external storage system?" or "is the version 
> of the block in the external storage system up to date?". Either we can do 
> away with the BlockAlias altogether and work with FileRegion, or the 
> BlockAlias needs to be made more robust.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-12589) [DISCUSS] Provided Storage BlockAlias Refactoring

2017-10-04 Thread Ewan Higgs (JIRA)
Ewan Higgs created HDFS-12589:
-

 Summary: [DISCUSS] Provided Storage BlockAlias Refactoring
 Key: HDFS-12589
 URL: https://issues.apache.org/jira/browse/HDFS-12589
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: hdfs
Reporter: Ewan Higgs
Priority: Minor


A BlockAlias is an interface used by the Datanode to determine where to 
retrieve data from. It currently has a single implementation, {{FileRegion}}, 
which contains the Block, BlockPoolID, and Provided URL for the FileRegion 
(i.e. block), plus the length and offset of the FileRegion in the remote 
storage.

The BlockAlias currently has a single method, {{getBlock}}. This is not 
particularly useful, since we can't ask it meaningful questions like "how do 
we retrieve the data from the external storage system?" or "is the version of 
the block in the external storage system up to date?". Either we can do away 
with the BlockAlias altogether and work with FileRegion, or the BlockAlias 
needs to be made more robust.
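
For discussion, one possible shape a "more robust" alias could take, purely as 
a sketch; none of these methods are proposed in any patch on this thread, and 
all of the names below are illustrative only:

{code}
import java.io.IOException;
import java.io.InputStream;
import java.net.URI;

// Sketch of a richer BlockAlias; method names are hypothetical.
interface BlockAliasSketch {
  long getBlockId();
  URI getProvidedStorageUri();               // e.g. myformat://bucket/object
  long getOffset();                          // offset within the remote store
  long getLength();                          // length of the region
  InputStream openData() throws IOException; // how to retrieve the data
  boolean isUpToDate() throws IOException;   // is the remote copy current?
}
{code}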



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-12588) Use GenericOptionsParser for scm and ksm daemon

2017-10-04 Thread Elek, Marton (JIRA)
Elek, Marton created HDFS-12588:
---

 Summary: Use GenericOptionsParser for scm and ksm daemon
 Key: HDFS-12588
 URL: https://issues.apache.org/jira/browse/HDFS-12588
 Project: Hadoop HDFS
  Issue Type: Sub-task
Affects Versions: HDFS-7240
Reporter: Elek, Marton
Assignee: Elek, Marton


Most of the Hadoop commands use GenericOptionsParser to support some common 
CLI arguments (such as -conf, -D, or -libjars to define the configuration, 
modify the configuration, or modify the classpath).

I suggest supporting the same common options in the scm and ksm daemons as 
well, because:

1. It allows the existing cluster management tools/scripts to be used, as the 
daemons can be configured in the same way as the namenode and datanode.
2. It follows the conventions of Hadoop common.
3. It is easier to develop from the IDE (I start the ksm/scm/datanode/namenode 
from IntelliJ but need to add the configuration to the classpath; with -conf I 
would be able to use an external configuration).


I found one problem during the implementation. Until now we used the `hdfs 
scm` command both for the daemon and for the scm command line client: with no 
parameters the daemon is started, with parameters the CLI is started. The help 
listed only the daemon.

The -conf option (GenericOptionsParser) can be used only if we separate the 
scm and scmcli commands. In any case, it is cleaner and more visible to have 
separate `hdfs scm` and `hdfs scmcli` commands.
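
A minimal sketch of how a daemon entry point could wire in 
GenericOptionsParser; the startScmDaemon call below is a placeholder for the 
real startup code, which is not shown here:

{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.util.GenericOptionsParser;

// Sketch: honour -conf/-D/-libjars in a daemon main(). Only the parser
// wiring is the point; the daemon startup is a placeholder.
public class ScmStarterSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    // Applies the generic options (e.g. -conf /tmp/ozone-site.xml) to conf
    // and strips them from the argument list.
    GenericOptionsParser parser = new GenericOptionsParser(conf, args);
    String[] remaining = parser.getRemainingArgs();
    startScmDaemon(conf, remaining); // placeholder
  }

  private static void startScmDaemon(Configuration conf, String[] args) {
    // real daemon startup would go here
  }
}
{code}

With this wiring, `hdfs scm -conf /tmp/ozone-site.xml` would load the external 
configuration before the daemon starts, as described in the test scenario 
above.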




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12587) Use Parameterized tests in TestBlockInfoStriped and TestLowRedundancyBlockQueues to apply multiple EC policies

2017-10-04 Thread Takanobu Asanuma (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12587?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Takanobu Asanuma updated HDFS-12587:

Description: This is a subtask of HDFS-9962. Since {{TestBlockInfoStriped}} 
and {{TestLowRedundancyBlockQueues}} don't use minicluster, testing all ec 
policies with Parameterized tests in each time is not a big impact.  (was: This 
is a subtask of HDFS-9962. Since {{TestBlockInfoStriped}} and 
{{TestLowRedundancyBlockQueues}} don't use minicluster, testing all ec policies 
in each time is not a big impact.)

> Use Parameterized tests in TestBlockInfoStriped and 
> TestLowRedundancyBlockQueues to apply multiple EC policies
> --
>
> Key: HDFS-12587
> URL: https://issues.apache.org/jira/browse/HDFS-12587
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: erasure-coding, test
>Reporter: Takanobu Asanuma
>Assignee: Takanobu Asanuma
>  Labels: hdfs-ec-3.0-nice-to-have
>
> This is a subtask of HDFS-9962. Since {{TestBlockInfoStriped}} and 
> {{TestLowRedundancyBlockQueues}} don't use a minicluster, testing all EC 
> policies with Parameterized tests each time does not have a big impact.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12587) Use Parameterized tests in TestBlockInfoStriped and TestLowRedundancyBlockQueues to apply multiple EC policies

2017-10-04 Thread Takanobu Asanuma (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12587?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Takanobu Asanuma updated HDFS-12587:

Summary: Use Parameterized tests in TestBlockInfoStriped and 
TestLowRedundancyBlockQueues to apply multiple EC policies  (was: Use 
parameterized tests in TestBlockInfoStriped and TestLowRedundancyBlockQueues to 
apply multiple EC policies)

> Use Parameterized tests in TestBlockInfoStriped and 
> TestLowRedundancyBlockQueues to apply multiple EC policies
> --
>
> Key: HDFS-12587
> URL: https://issues.apache.org/jira/browse/HDFS-12587
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: erasure-coding, test
>Reporter: Takanobu Asanuma
>Assignee: Takanobu Asanuma
>  Labels: hdfs-ec-3.0-nice-to-have
>
> This is a subtask of HDFS-9962. Since {{TestBlockInfoStriped}} and 
> {{TestLowRedundancyBlockQueues}} don't use a minicluster, testing all EC 
> policies each time does not have a big impact.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-12587) Use parameterized tests in TestBlockInfoStriped and TestLowRedundancyBlockQueues to apply multiple EC policies

2017-10-04 Thread Takanobu Asanuma (JIRA)
Takanobu Asanuma created HDFS-12587:
---

 Summary: Use parameterized tests in TestBlockInfoStriped and 
TestLowRedundancyBlockQueues to apply multiple EC policies
 Key: HDFS-12587
 URL: https://issues.apache.org/jira/browse/HDFS-12587
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: erasure-coding, test
Reporter: Takanobu Asanuma
Assignee: Takanobu Asanuma


This is a subtask of HDFS-9962. Since {{TestBlockInfoStriped}} and 
{{TestLowRedundancyBlockQueues}} don't use a minicluster, testing all EC 
policies each time does not have a big impact.
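
As a sketch of the Parameterized pattern (with the EC policy simplified to a 
String here rather than the real ErasureCodingPolicy objects, and the test 
body reduced to a trivial assertion):

{code}
import java.util.Arrays;
import java.util.Collection;

import org.junit.Assert;
import org.junit.Test;
import org.junit.runner.RunWith;
import org.junit.runners.Parameterized;
import org.junit.runners.Parameterized.Parameters;

// Sketch: the same test body runs once per EC policy, no minicluster needed.
@RunWith(Parameterized.class)
public class TestEcPolicySketch {
  @Parameters(name = "{0}")
  public static Collection<Object[]> policies() {
    return Arrays.asList(new Object[][] {
        {"RS-6-3-1024k"}, {"RS-3-2-1024k"}, {"XOR-2-1-1024k"}});
  }

  private final String ecPolicy;

  public TestEcPolicySketch(String ecPolicy) {
    this.ecPolicy = ecPolicy;
  }

  @Test
  public void testPolicyIsSet() {
    Assert.assertNotNull(ecPolicy);
  }
}
{code}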



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12038) Ozone: Non-admin user is unable to run InfoVolume to the volume owned by itself

2017-10-04 Thread Nandakumar (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12038?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16190992#comment-16190992
 ] 

Nandakumar commented on HDFS-12038:
---

Thanks [~anu] for the ping, and thanks [~ljain] for taking this up and working 
on it.

In KSM's current state we don't have any authorization mechanism in place, 
i.e. we don't do authorization on any client calls. Authorization of 
createVolume calls is done in OzoneHandler's {{VolumeHandler}} (the datanode 
REST server); this is not an ideal place to do it, as RPC clients will bypass 
it. We have to authorize all the calls made to KSM in {{KeySpaceManager}}, 
which can be done in another jira.

For this issue we should properly set {{client.setUserAuth(userName)}}, which 
is not happening in the first place; if {{-root}} is not specified we set 
UserAuth to null, and the HTTP {{Authorization}} header is not set in the 
HttpGet request, which is causing the issue.
As pointed out by [~cheersyang], we have to remove line 89:
{code}
client.setUserAuth(rootName);
{code}
Additionally, we can add logic in 
{{VolumeProcessTemplate#getVolumeInfoResponse}} to check whether the user is 
an admin or the owner of the volume; with this we can make sure that an 
unauthorized user doesn't have access to InfoVolume calls. Still, with the RPC 
client anyone can make any call.
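
A minimal sketch of the suggested owner-or-admin check; the class, method, and 
parameter names below are illustrative, not the real 
{{VolumeProcessTemplate}} API:

{code}
// Sketch of the check suggested for getVolumeInfoResponse; names are
// hypothetical and do not match the real classes.
public class VolumeAccessCheckSketch {
  static void checkInfoVolumeAccess(String requestingUser, String volumeOwner,
      boolean isAdmin) {
    if (!isAdmin && !requestingUser.equals(volumeOwner)) {
      throw new SecurityException("User " + requestingUser
          + " is neither an admin nor the owner of the volume");
    }
  }
}
{code}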



> Ozone: Non-admin user is unable to run InfoVolume to the volume owned by 
> itself
> ---
>
> Key: HDFS-12038
> URL: https://issues.apache.org/jira/browse/HDFS-12038
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Reporter: Weiwei Yang
>Assignee: Lokesh Jain
>  Labels: OzonePostMerge
> Attachments: HDFS-12038-HDFS-7240.001.patch
>
>
> Reproduce steps
> 1. Create a volume with a non-admin user
> {code}
> hdfs oz -createVolume http://ozone1.fyre.ibm.com:9864/volume-wwei-0 -user 
> wwei -root -quota 2TB
> {code}
> 2. Run infoVolume command to get this volume info
> {noformat}
> hdfs oz -infoVolume http://ozone1.fyre.ibm.com:9864/volume-wwei-0 -user wwei
> Command Failed : 
> {"httpCode":400,"shortMessage":"badAuthorization","resource":null,"message":"Missing
>  authorization or authorization has to be 
> unique.","requestID":"221efb47-72b9-498d-ac19-907257428573","hostName":"ozone1.fyre.ibm.com"}
> {noformat}
> adding {{-root}} to run as the admin user can bypass this issue 
> {noformat}
> hdfs oz -infoVolume http://ozone1.fyre.ibm.com:9864/volume-wwei-0 -user wwei 
> -root
> {
>   "owner" : {
> "name" : "wwei"
>   },
>   "quota" : {
> "unit" : "TB",
> "size" : 2
>   },
>   "volumeName" : "volume-wwei-0",
>   "createdOn" : null,
>   "createdBy" : "hdfs"
> }
> {noformat}
> Expected: both the volume owner and the admin should be able to run the 
> infoVolume command.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12387) Ozone: Support Ratis as a first class replication mechanism

2017-10-04 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12387?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16190971#comment-16190971
 ] 

Hadoop QA commented on HDFS-12387:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
17s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 10 new or modified test 
files. {color} |
|| || || || {color:brown} HDFS-7240 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
11s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 
20s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
39s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
46s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
43s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 58s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
51s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
47s{color} | {color:green} HDFS-7240 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
8s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m  
7s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green}  2m  
7s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  2m  
7s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 50s{color} | {color:orange} hadoop-hdfs-project: The patch generated 4 new + 
3 unchanged - 1 fixed = 7 total (was 4) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m  
1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
2s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 53s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  5m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
56s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
46s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}101m  6s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
22s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}169m 45s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.datanode.TestFsDatasetCache |
|   | hadoop.hdfs.TestReadStripedFileWithMissingBlocks |
|   | hadoop.ozone.ksm.TestMultipleContainerReadWrite |
|   | hadoop.ozone.TestOzoneConfigurationFields |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:71bbb86 |
| JIRA Issue | HDFS-12387 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12890289/HDFS-12387-HDFS-7240.007.patch
 |
| Optional Tests |  asflicense  compile  

[jira] [Commented] (HDFS-12583) Ozone: Fix swallow exceptions which makes hard to debug failures

2017-10-04 Thread Yiqun Lin (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12583?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16190968#comment-16190968
 ] 

Yiqun Lin commented on HDFS-12583:
--

Hi [~cheersyang],
bq. would that resolve this problem (client will get full stack trace about 
the original exception)? 
Additionally passing the exception instance to the {{OzoneException}} 
constructor won't let the client get the full stack trace; other changes are 
still needed. {{OzoneException}} is serialized as a JSON string and re-parsed 
from the response content on the client side. If we want to keep the full 
stack trace, we should add a new field for stack info in {{OzoneException}}. 
Logging the full stack info on the server side is also okay for us, I think.
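
A sketch of what a new stack-trace field could look like; the class, field, 
and accessors below are hypothetical illustrations, not the existing 
{{OzoneException}} API:

{code}
import java.io.PrintWriter;
import java.io.StringWriter;

// Sketch: carry the server-side stack trace in a JSON-serializable field.
public class OzoneExceptionStackSketch {
  private String stackTraceField; // would be added to the JSON response

  public void captureStackTrace(Throwable cause) {
    StringWriter sw = new StringWriter();
    cause.printStackTrace(new PrintWriter(sw));
    this.stackTraceField = sw.toString();
  }

  public String getStackTraceField() {
    return stackTraceField;
  }
}
{code}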

> Ozone: Fix swallow exceptions which makes hard to debug failures
> 
>
> Key: HDFS-12583
> URL: https://issues.apache.org/jira/browse/HDFS-12583
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Yiqun Lin
>Assignee: Yiqun Lin
> Attachments: HDFS-12583-HDFS-7240.001.patch, 
> HDFS-12583-HDFS-7240.002.patch, HDFS-12583-HDFS-7240.003.patch
>
>
> There are some places that swallow exceptions, which makes it hard for the 
> client to debug failures. For example, if getting an xceiver client from the 
> xceiver client manager fails, the client only gets error info like this:
> {noformat}
> org.apache.hadoop.ozone.web.exceptions.OzoneException: Exception getting 
> XceiverClient.
>   at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>   at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>   at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
>   at 
> com.fasterxml.jackson.databind.introspect.AnnotatedConstructor.call(AnnotatedConstructor.java:119)
>   at 
> com.fasterxml.jackson.databind.deser.std.StdValueInstantiator.createUsingDefault(StdValueInstantiator.java:243)
> {noformat}
> The exception stack trace is missing. We should print it in the error log as 
> well.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org