[jira] [Commented] (HDFS-9046) Any Error during BPOfferService run can lead to Missing DN.

2015-09-22 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9046?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14902857#comment-14902857
 ] 

Chris Nauroth commented on HDFS-9046:
-

[~vinayrpet], thanks for the notification.  This one is near the front of my 
review queue, and I'm aiming to look at it later this week.

> Any Error during BPOfferService run can lead to Missing DN.
> 
>
> Key: HDFS-9046
> URL: https://issues.apache.org/jira/browse/HDFS-9046
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: nijel
>Assignee: nijel
> Attachments: HDFS-9046_1.patch, HDFS-9046_2.patch, HDFS-9046_3.patch
>
>
> The cluster is in HA mode, and each DN has only one block pool.
> The issue is that after a failover, one DN is missing from the current active NN.
> Upon analysis I found that there is one exception in BPOfferService.run():
> {noformat}
> 2015-08-21 09:02:11,190 | WARN  | DataNode: [[[DISK]file:/srv/BigData/hadoop/data5/dn/ [DISK]file:/srv/BigData/hadoop/data4/dn/]]  heartbeating to 160-149-0-114/160.149.0.114:25000 | Unexpected exception in block pool Block pool BP-284203724-160.149.0.114-1438774011693 (Datanode Uuid 15ce1dd7-227f-4fd2-9682-091aa6bc2b89) service to 160-149-0-114/160.149.0.114:25000 | BPServiceActor.java:830
> java.lang.OutOfMemoryError: unable to create new native thread
> at java.lang.Thread.start0(Native Method)
> at java.lang.Thread.start(Thread.java:714)
> at java.util.concurrent.ThreadPoolExecutor.addWorker(ThreadPoolExecutor.java:950)
> at java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:1357)
> at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService.execute(FsDatasetAsyncDiskService.java:172)
> at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService.deleteAsync(FsDatasetAsyncDiskService.java:221)
> at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.invalidate(FsDatasetImpl.java:1887)
> at org.apache.hadoop.hdfs.server.datanode.BPOfferService.processCommandFromActive(BPOfferService.java:669)
> at org.apache.hadoop.hdfs.server.datanode.BPOfferService.processCommandFromActor(BPOfferService.java:616)
> at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.processCommand(BPServiceActor.java:856)
> at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.offerService(BPServiceActor.java:671)
> at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:822)
> at java.lang.Thread.run(Thread.java:745)
> {noformat}
> After this, that particular BPOfferService stays down for the rest of the DN's 
> runtime, and the corresponding NN will not have the details of this DN.
> Similar issues are discussed in the following JIRAs:
> https://issues.apache.org/jira/browse/HDFS-2882
> https://issues.apache.org/jira/browse/HDFS-7714
> Can we retry in this case as well, with a larger interval, instead of shutting 
> down this BPOfferService?
> I think that since these exceptions can occur randomly in a DN, it is not good 
> to keep the DN running while some NN does not have its info!
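>
> A minimal sketch of the retry idea, assuming the actor's existing
> {{sleepAndLogInterrupts}} helper and a hypothetical {{LONG_RETRY_INTERVAL_MS}}
> constant; the actual patch may be structured differently:
> {code}
> while (shouldRun()) {
>   try {
>     offerService();
>   } catch (Throwable t) {
>     // Back off and retry instead of tearing down the whole BPOfferService.
>     LOG.warn("Unexpected exception in block pool " + this, t);
>     sleepAndLogInterrupts(LONG_RETRY_INTERVAL_MS, "retrying BPOfferService");
>   }
> }
> {code}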



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9103) Retry reads on DN failure

2015-09-22 Thread Bob Hansen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9103?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bob Hansen updated HDFS-9103:
-
Attachment: HDFS-9103.HDFS-8707.5.patch

> Retry reads on DN failure
> -
>
> Key: HDFS-9103
> URL: https://issues.apache.org/jira/browse/HDFS-9103
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: Bob Hansen
>Assignee: Bob Hansen
> Fix For: HDFS-8707
>
> Attachments: HDFS-9103.1.patch, HDFS-9103.2.patch, 
> HDFS-9103.HDFS-8707.3.patch, HDFS-9103.HDFS-8707.4.patch, 
> HDFS-9103.HDFS-8707.5.patch
>
>
> When AsyncPreadSome fails, add the failed DataNode to the excluded list and 
> try again.
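>
> A rough Java-style sketch of that retry shape (the actual change is in the
> libhdfs++ client on the HDFS-8707 branch; chooseDataNode and readFrom are
> hypothetical helpers):
> {code}
> Set<DatanodeInfo> excluded = new HashSet<>();
> while (true) {
>   DatanodeInfo dn = chooseDataNode(block, excluded);
>   if (dn == null) {
>     throw new IOException("No live datanodes left for " + block);
>   }
>   try {
>     return readFrom(dn, block, offset, length);
>   } catch (IOException e) {
>     excluded.add(dn);  // never retry the replica that just failed
>   }
> }
> {code}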



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9107) Prevent NN's unrecoverable death spiral after full GC

2015-09-22 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9107?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14903077#comment-14903077
 ] 

Colin Patrick McCabe commented on HDFS-9107:


Also (although I don't feel strongly about this), I don't think we need to 
optimize by checking at the end of the entire scan whether to skip the next 
scan.  Long GCs are rare enough that we don't need to optimize the code path... 
just keep it simple.

> Prevent NN's unrecoverable death spiral after full GC
> -
>
> Key: HDFS-9107
> URL: https://issues.apache.org/jira/browse/HDFS-9107
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.0.0-alpha
>Reporter: Daryn Sharp
>Assignee: Daryn Sharp
>Priority: Critical
> Attachments: HDFS-9107.patch, HDFS-9107.patch
>
>
> A full GC pause in the NN that exceeds the dead node interval can lead to an 
> infinite cycle of full GCs.  The most common situation that precipitates an 
> unrecoverable state is a network issue that temporarily cuts off multiple 
> racks.
> The NN wakes up and falsely starts marking nodes dead. This bloats the 
> replication queues which increases memory pressure. The replications create a 
> flurry of incremental block reports and a glut of over-replicated blocks.
> The "dead" nodes heartbeat within seconds. The NN forces a re-registration 
> which requires a full block report - more memory pressure. The NN now has to 
> invalidate all the over-replicated blocks. The extra blocks are added to 
> invalidation queues, tracked in an excess blocks map, etc - much more memory 
> pressure.
> All the memory pressure can push the NN into another full GC which repeats 
> the entire cycle.
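>
> One possible shape of a mitigation, sketched with hypothetical names (the
> attached patch may differ): the heartbeat monitor measures its own sleep
> overshoot on a monotonic clock and skips dead-node marking after a long pause.
> {code}
> try {
>   long start = Time.monotonicNow();
>   Thread.sleep(recheckIntervalMs);
>   long overshootMs = Time.monotonicNow() - start - recheckIntervalMs;
>   if (overshootMs > deadNodeIntervalMs) {
>     // The NN itself was paused (e.g. a full GC); heartbeats could not be
>     // processed, so do not mark nodes dead on this pass.
>     skipNextDeadNodeCheck = true;
>   }
> } catch (InterruptedException ie) {
>   Thread.currentThread().interrupt();
> }
> {code}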



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Work started] (HDFS-9062) Add a parameter to MiniDFSCluster to turn off security checks on the domain socket path

2015-09-22 Thread James Clampffer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9062?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HDFS-9062 started by James Clampffer.
-
> Add a parameter to MiniDFSCluster to turn off security checks on the domain 
> socket path
> ---
>
> Key: HDFS-9062
> URL: https://issues.apache.org/jira/browse/HDFS-9062
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: James Clampffer
>Assignee: James Clampffer
>Priority: Blocker
>
> I'd like to add a command line parameter that allows the permission checks on 
> dfs.domain.socket.path to be turned off.
> Right now a blocker, or at least major inconvenience, for short circuit 
> reader development is getting the domain socket path set up with the correct 
> permissions.  I'm working on shared test machines where messing with things 
> in /var/lib is discouraged.
> This should also make it easier to write tests for short circuit reads once 
> completed.
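>
> For reference, a minimal sketch of how tests in the Java tree sidestep these
> checks today, assuming the existing
> {{org.apache.hadoop.net.unix.DomainSocket}} API:
> {code}
> // Call once before starting the MiniDFSCluster; skips the permission
> // checks on the domain socket path.
> DomainSocket.disableBindPathValidation();
> {code}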



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7529) Consolidate encryption zone related implementation into a single class

2015-09-22 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7529?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14903282#comment-14903282
 ] 

Hadoop QA commented on HDFS-7529:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | pre-patch |  18m  7s | Pre-patch trunk compilation is 
healthy. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:red}-1{color} | tests included |   0m  0s | The patch doesn't appear 
to include any new or modified tests.  Please justify why no new tests are 
needed for this patch. Also please list what manual steps were performed to 
verify this patch. |
| {color:green}+1{color} | javac |   7m 55s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |  10m  8s | There were no new javadoc 
warning messages. |
| {color:green}+1{color} | release audit |   0m 24s | The applied patch does 
not increase the total number of release audit warnings. |
| {color:red}-1{color} | checkstyle |   1m 20s | The applied patch generated  2 
new checkstyle issues (total was 354, now 355). |
| {color:green}+1{color} | whitespace |   0m  1s | The patch has no lines that 
end in whitespace. |
| {color:green}+1{color} | install |   1m 28s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 34s | The patch built with 
eclipse:eclipse. |
| {color:green}+1{color} | findbugs |   2m 25s | The patch does not introduce 
any new Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | native |   3m 14s | Pre-build of native portion |
| {color:red}-1{color} | hdfs tests | 162m 43s | Tests failed in hadoop-hdfs. |
| | | 208m 23s | |
\\
\\
|| Reason || Tests ||
| Failed unit tests | hadoop.hdfs.TestRollingUpgrade |
|   | hadoop.hdfs.web.TestWebHDFSOAuth2 |
|   | hadoop.hdfs.server.namenode.snapshot.TestSnapshotReplication |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12761668/HDFS-7529-003.patch |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / 57003fa |
| checkstyle |  
https://builds.apache.org/job/PreCommit-HDFS-Build/12598/artifact/patchprocess/diffcheckstylehadoop-hdfs.txt
 |
| hadoop-hdfs test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12598/artifact/patchprocess/testrun_hadoop-hdfs.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12598/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf900.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12598/console |


This message was automatically generated.

> Consolidate encryption zone related implementation into a single class
> --
>
> Key: HDFS-7529
> URL: https://issues.apache.org/jira/browse/HDFS-7529
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Haohui Mai
>Assignee: Haohui Mai
>  Labels: BB2015-05-TBR
> Attachments: HDFS-7529-002.patch, HDFS-7529-003.patch, 
> HDFS-7529.000.patch, HDFS-7529.001.patch
>
>
> This jira proposes to consolidate encryption zone related implementation to a 
> single class.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8873) throttle directoryScanner

2015-09-22 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8873?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14903220#comment-14903220
 ] 

Colin Patrick McCabe commented on HDFS-8873:


Thanks, [~templedf].

{code}
if ((throttle > 1000) || (throttle <= 0)) {
  if (throttle > 1000) {
    LOG.error(
        DFSConfigKeys.DFS_DATANODE_DIRECTORYSCAN_THROTTLE_LIMIT_MS_PER_SEC_KEY
        + " set to value above 1000 ms/sec. Assuming default value of "
        + DFSConfigKeys.DFS_DATANODE_DIRECTORYSCAN_THROTTLE_LIMIT_MS_PER_SEC_DEFAULT);
{code}
Can we have a constant here for {{MS_PER_SEC}}?  I think I commented on this 
earlier.

{code}
if (throttleLimitMsPerSec < 1000) {
  logMsg = String.format("Periodic Directory Tree Verification scan"
      + " starting at %dms with interval %dms and run limit %dms/s",
      firstScanTime, scanPeriodMsecs, throttleLimitMsPerSec);
{code}
Maybe say "throttle" instead of "run limit"?

{code}
// Variable for tracking time spent running and waiting for testing
// purposes
private Long markMs;
{code}
Does this need to be an object, or can it be a primitive?  I don't see any case 
where we need it to be null.

{code}
while (nowMs % 1000L > throttleLimitMsPerSec) {
  try {
    Thread.sleep(1000L - (nowMs % 1000L));
  } catch (InterruptedException ex) {
    // Try sleeping again and mark the thread as interrupted
    Thread.currentThread().interrupt();
  }
{code}
This logic seems flawed.  If we sleep for a whole second, we'll be back in the 
case where nowMs % 1000 is what it was before, and make no progress.  We should 
gracefully handle the case where the sleep is longer than a second by not 
re-throttling.
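
For example, something like this (a sketch only, reusing the local names above 
and assuming {{Time.monotonicNow()}} for illustration):
{code}
long startOfSecondMs = nowMs - (nowMs % 1000L);
while (nowMs % 1000L > throttleLimitMsPerSec
    && nowMs < startOfSecondMs + 1000L) {  // already past this second: done
  try {
    Thread.sleep(1000L - (nowMs % 1000L));
  } catch (InterruptedException ex) {
    Thread.currentThread().interrupt();
  }
  nowMs = Time.monotonicNow();  // refresh instead of reusing the stale value
}
{code}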

> throttle directoryScanner
> -
>
> Key: HDFS-8873
> URL: https://issues.apache.org/jira/browse/HDFS-8873
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Affects Versions: 2.7.1
>Reporter: Nathan Roberts
>Assignee: Daniel Templeton
> Attachments: HDFS-8873.001.patch, HDFS-8873.002.patch, 
> HDFS-8873.003.patch, HDFS-8873.004.patch, HDFS-8873.005.patch
>
>
> The new 2-level directory layout can make directory scans expensive in terms 
> of disk seeks (see HDFS-8791) for details. 
> It would be good if the directoryScanner() had a configurable duty cycle that 
> would reduce its impact on disk performance (much like the approach in 
> HDFS-8617). 
> Without such a throttle, disks can go 100% busy for many minutes at a time 
> (assuming the common case of all inodes in cache but no directory blocks 
> cached, 64K seeks are required for full directory listing which translates to 
> 655 seconds) 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8632) Erasure Coding: Add InterfaceAudience annotation to the erasure coding classes

2015-09-22 Thread Rakesh R (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8632?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14903242#comment-14903242
 ] 

Rakesh R commented on HDFS-8632:


Thanks again [~zhz]. Attached another patch addressing the above comment.

> Erasure Coding: Add InterfaceAudience annotation to the erasure coding classes
> --
>
> Key: HDFS-8632
> URL: https://issues.apache.org/jira/browse/HDFS-8632
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Rakesh R
>Assignee: Rakesh R
> Attachments: HDFS-8632-HDFS-7285-00.patch, 
> HDFS-8632-HDFS-7285-01.patch, HDFS-8632-HDFS-7285-02.patch, 
> HDFS-8632-HDFS-7285-03.patch, HDFS-8632-HDFS-7285-04.patch
>
>
> I've noticed some of the erasure coding classes missing 
> {{@InterfaceAudience}} annotation. It would be good to identify the classes 
> and add proper annotation.
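>
> For illustration, the kind of change involved (the class name here is just an
> example; per the review discussion, private classes carry no stability
> annotation):
> {code}
> import org.apache.hadoop.classification.InterfaceAudience;
>
> @InterfaceAudience.Private
> public class ErasureCodingWorker {
>   // ...
> }
> {code}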



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8632) Erasure Coding: Add InterfaceAudience annotation to the erasure coding classes

2015-09-22 Thread Zhe Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8632?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14903167#comment-14903167
 ] 

Zhe Zhang commented on HDFS-8632:
-

Thanks Andrew for the input. Per the above discussion I think we should just 
remove stability annotations from all private APIs in the patch.

> Erasure Coding: Add InterfaceAudience annotation to the erasure coding classes
> --
>
> Key: HDFS-8632
> URL: https://issues.apache.org/jira/browse/HDFS-8632
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Rakesh R
>Assignee: Rakesh R
> Attachments: HDFS-8632-HDFS-7285-00.patch, 
> HDFS-8632-HDFS-7285-01.patch, HDFS-8632-HDFS-7285-02.patch, 
> HDFS-8632-HDFS-7285-03.patch
>
>
> I've noticed some of the erasure coding classes missing 
> {{@InterfaceAudience}} annotation. It would be good to identify the classes 
> and add proper annotation.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8632) Erasure Coding: Add InterfaceAudience annotation to the erasure coding classes

2015-09-22 Thread Rakesh R (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8632?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rakesh R updated HDFS-8632:
---
Attachment: HDFS-8632-HDFS-7285-04.patch

> Erasure Coding: Add InterfaceAudience annotation to the erasure coding classes
> --
>
> Key: HDFS-8632
> URL: https://issues.apache.org/jira/browse/HDFS-8632
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Rakesh R
>Assignee: Rakesh R
> Attachments: HDFS-8632-HDFS-7285-00.patch, 
> HDFS-8632-HDFS-7285-01.patch, HDFS-8632-HDFS-7285-02.patch, 
> HDFS-8632-HDFS-7285-03.patch, HDFS-8632-HDFS-7285-04.patch
>
>
> I've noticed some of the erasure coding classes missing 
> {{@InterfaceAudience}} annotation. It would be good to identify the classes 
> and add proper annotation.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9109) dfs.datanode.dns.interface does not work with hosts file based setups

2015-09-22 Thread Anu Engineer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9109?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14903232#comment-14903232
 ] 

Anu Engineer commented on HDFS-9109:


+1 (non-binding)

Had some minor nitpicks:
# In DNS.java, we now have both java.net.InetAddress and 
com.google.common.net.InetAddresses. Hence you might want to consider renaming 
getIPsAsInetAddresses to something like getIPsAsInetAddressList, since you are 
returning the one from the java namespace -- same for the comments. 
Alternatively, you could stop using that guava class for IP address 
verification.
# As for testing, I was wondering if we might be able to leverage 
sun.net.spi.nameservice.NameService and provide our own DNS lookup service? We 
do have to set the java system variables before the test gets run, though. But 
as you said, it might be something that we can consider for the future.


> dfs.datanode.dns.interface does not work with hosts file based setups
> -
>
> Key: HDFS-9109
> URL: https://issues.apache.org/jira/browse/HDFS-9109
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
> Attachments: HDFS-9109.01.patch, HDFS-9109.02.patch, 
> HDFS-9109.03.patch
>
>
> The configuration setting {{dfs.datanode.dns.interface}} lets the DataNode 
> select its hostname by doing a reverse lookup of IP addresses on the specific 
> network interface. This does not work when {{/etc/hosts}} is used to set up 
> alternate hostnames, since {{DNS#reverseDns}} only queries the DNS servers.
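>
> For illustration, a lookup that does honor hosts-file entries (a sketch, not
> necessarily the attached fix): the JDK resolver consults {{/etc/hosts}} as
> well as DNS.
> {code}
> InetAddress addr = InetAddress.getByName("127.0.0.1");
> // Unlike a direct reverse DNS query, this also consults the hosts file.
> String hostname = addr.getCanonicalHostName();
> {code}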



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8632) Erasure Coding: Add InterfaceAudience annotation to the erasure coding classes

2015-09-22 Thread Rakesh R (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8632?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14903165#comment-14903165
 ] 

Rakesh R commented on HDFS-8632:


Thank you [~zhz], [~andrew.wang] for the reviews.

Hi [~zhz], is there anything else required to be done for this?

> Erasure Coding: Add InterfaceAudience annotation to the erasure coding classes
> --
>
> Key: HDFS-8632
> URL: https://issues.apache.org/jira/browse/HDFS-8632
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Rakesh R
>Assignee: Rakesh R
> Attachments: HDFS-8632-HDFS-7285-00.patch, 
> HDFS-8632-HDFS-7285-01.patch, HDFS-8632-HDFS-7285-02.patch, 
> HDFS-8632-HDFS-7285-03.patch
>
>
> I've noticed some of the erasure coding classes missing 
> {{@InterfaceAudience}} annotation. It would be good to identify the classes 
> and add proper annotation.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9013) Deprecate NameNodeMXBean#getNNStarted in branch2 and remove from trunk

2015-09-22 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9013?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14903196#comment-14903196
 ] 

Hadoop QA commented on HDFS-9013:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:red}-1{color} | pre-patch |  23m 29s | Pre-patch branch-2 has 5 extant 
Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:green}+1{color} | tests included |   0m  0s | The patch appears to 
include 2 new or modified test files. |
| {color:green}+1{color} | javac |   6m  9s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |   9m 56s | There were no new javadoc 
warning messages. |
| {color:green}+1{color} | release audit |   0m 23s | The applied patch does 
not increase the total number of release audit warnings. |
| {color:green}+1{color} | site |   3m  6s | Site still builds. |
| {color:red}-1{color} | checkstyle |   2m 29s | The applied patch generated  1 
new checkstyle issues (total was 304, now 304). |
| {color:green}+1{color} | whitespace |   0m  0s | The patch has no lines that 
end in whitespace. |
| {color:green}+1{color} | install |   1m 11s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 33s | The patch built with 
eclipse:eclipse. |
| {color:green}+1{color} | findbugs |   4m 19s | The patch does not introduce 
any new Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | common tests |  22m 37s | Tests passed in 
hadoop-common. |
| {color:red}-1{color} | hdfs tests | 174m 30s | Tests failed in hadoop-hdfs. |
| | | 248m 49s | |
\\
\\
|| Reason || Tests ||
| Failed unit tests | hadoop.hdfs.server.datanode.TestDataNodeMetrics |
|   | hadoop.hdfs.TestDistributedFileSystem |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12761659/HDFS-9013-branch-2.004.patch
 |
| Optional Tests | site javadoc javac unit findbugs checkstyle |
| git revision | branch-2 / 96e3fbf |
| Pre-patch Findbugs warnings | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12597/artifact/patchprocess/branch-2FindbugsWarningshadoop-common.html
 |
| checkstyle |  
https://builds.apache.org/job/PreCommit-HDFS-Build/12597/artifact/patchprocess/diffcheckstylehadoop-hdfs.txt
 |
| hadoop-common test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12597/artifact/patchprocess/testrun_hadoop-common.txt
 |
| hadoop-hdfs test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12597/artifact/patchprocess/testrun_hadoop-hdfs.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12597/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf903.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12597/console |


This message was automatically generated.

> Deprecate NameNodeMXBean#getNNStarted in branch2 and remove from trunk
> --
>
> Key: HDFS-9013
> URL: https://issues.apache.org/jira/browse/HDFS-9013
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.7.1
>Reporter: Surendra Singh Lilhore
>Assignee: Surendra Singh Lilhore
> Attachments: HDFS-9013-branch-2.003.patch, 
> HDFS-9013-branch-2.004.patch, HDFS-9013.001-branch-2.patch, 
> HDFS-9013.001.patch, HDFS-9013.002-branch-2.patch
>
>
> HDFS-8388 added one new metric {{NNStartedTimeInMillis}} to get NN start time 
> in milliseconds.
> Now, based on [~wheat9]'s and [~ajisakaa]'s suggestions, we should deprecate 
> {{NameNodeMXBean#getNNStarted}} in branch2 and remove it from trunk.
> https://issues.apache.org/jira/browse/HDFS-8388?focusedCommentId=14709614&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14709614
> https://issues.apache.org/jira/browse/HDFS-8388?focusedCommentId=14726746&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14726746
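>
> For branch2 the change would look roughly like this (a sketch; the getter name
> follows the {{NNStartedTimeInMillis}} metric from HDFS-8388):
> {code}
> public interface NameNodeMXBean {
>   /**
>    * @deprecated Use {@link #getNNStartedTimeInMillis()} instead.
>    */
>   @Deprecated
>   String getNNStarted();
>
>   long getNNStartedTimeInMillis();
> }
> {code}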



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9076) Log full path instead of inodeId in DFSClient#closeAllFilesBeingWritten()

2015-09-22 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9076?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14903238#comment-14903238
 ] 

Hadoop QA commented on HDFS-9076:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | pre-patch |  21m 15s | Pre-patch trunk compilation is 
healthy. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:red}-1{color} | tests included |   0m  0s | The patch doesn't appear 
to include any new or modified tests.  Please justify why no new tests are 
needed for this patch. Also please list what manual steps were performed to 
verify this patch. |
| {color:green}+1{color} | javac |   7m 59s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |  11m 34s | There were no new javadoc 
warning messages. |
| {color:green}+1{color} | release audit |   0m 34s | The applied patch does 
not increase the total number of release audit warnings. |
| {color:red}-1{color} | checkstyle |   1m 36s | The applied patch generated  1 
new checkstyle issues (total was 152, now 152). |
| {color:green}+1{color} | whitespace |   0m  0s | The patch has no lines that 
end in whitespace. |
| {color:green}+1{color} | install |   1m 57s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 43s | The patch built with 
eclipse:eclipse. |
| {color:green}+1{color} | findbugs |   3m 45s | The patch does not introduce 
any new Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | native |   4m 24s | Pre-build of native portion |
| {color:red}-1{color} | hdfs tests |  64m 55s | Tests failed in hadoop-hdfs. |
| | | 118m 46s | |
\\
\\
|| Reason || Tests ||
| Failed unit tests | hadoop.hdfs.server.namenode.TestHDFSConcat |
|   | hadoop.hdfs.server.namenode.TestFSNamesystemMBean |
|   | hadoop.hdfs.TestDFSStorageStateRecovery |
|   | hadoop.hdfs.TestHdfsAdmin |
|   | hadoop.hdfs.tools.TestDFSZKFailoverController |
|   | hadoop.hdfs.TestDFSUtil |
|   | hadoop.hdfs.server.namenode.TestStartup |
|   | hadoop.hdfs.web.TestWebHdfsTokens |
|   | hadoop.hdfs.TestDisableConnCache |
|   | hadoop.hdfs.server.namenode.snapshot.TestSnapshotListing |
|   | hadoop.hdfs.server.namenode.TestLargeDirectoryDelete |
|   | hadoop.hdfs.server.namenode.TestAclConfigFlag |
|   | hadoop.hdfs.server.namenode.TestBlockUnderConstruction |
|   | hadoop.hdfs.server.namenode.TestNamenodeCapacityReport |
|   | hadoop.hdfs.TestWriteConfigurationToDFS |
|   | hadoop.hdfs.server.namenode.metrics.TestNameNodeMetrics |
|   | hadoop.hdfs.server.namenode.TestSecondaryNameNodeUpgrade |
|   | hadoop.hdfs.web.TestWebHdfsWithAuthenticationFilter |
|   | hadoop.hdfs.TestListFilesInDFS |
|   | hadoop.hdfs.server.namenode.TestNameNodeMXBean |
|   | hadoop.hdfs.server.namenode.snapshot.TestSnapshotDiffReport |
|   | hadoop.hdfs.TestDFSRename |
|   | hadoop.hdfs.server.namenode.TestNameNodeAcl |
|   | hadoop.hdfs.server.namenode.TestBlockPlacementPolicyRackFaultTolerant |
|   | hadoop.hdfs.server.namenode.TestFSImageWithSnapshot |
|   | hadoop.hdfs.web.TestFSMainOperationsWebHdfs |
|   | hadoop.hdfs.server.namenode.TestNamenodeRetryCache |
|   | hadoop.hdfs.server.namenode.snapshot.TestSetQuotaWithSnapshot |
|   | hadoop.hdfs.TestInjectionForSimulatedStorage |
|   | hadoop.hdfs.server.namenode.TestDecommissioningStatus |
|   | hadoop.security.TestPermissionSymlinks |
|   | hadoop.hdfs.TestFileCorruption |
|   | hadoop.hdfs.TestDecommission |
|   | hadoop.hdfs.server.namenode.TestFileContextAcl |
|   | hadoop.hdfs.tools.offlineImageViewer.TestOfflineImageViewer |
|   | hadoop.hdfs.TestBlockReaderLocal |
|   | hadoop.hdfs.server.namenode.ha.TestDFSUpgradeWithHA |
|   | hadoop.hdfs.TestFileAppend3 |
|   | hadoop.hdfs.server.namenode.snapshot.TestSnapshotReplication |
|   | 
hadoop.hdfs.server.namenode.snapshot.TestSnapshotNameWithInvalidCharacters |
|   | hadoop.hdfs.TestBlockReaderLocalLegacy |
|   | hadoop.hdfs.server.namenode.snapshot.TestAclWithSnapshot |
|   | hadoop.hdfs.TestDFSPermission |
|   | hadoop.hdfs.web.TestWebHdfsWithMultipleNameNodes |
|   | hadoop.hdfs.TestPread |
|   | hadoop.hdfs.TestFileAppend2 |
|   | hadoop.hdfs.server.namenode.TestNameEditsConfigs |
|   | hadoop.hdfs.server.namenode.TestBackupNode |
|   | hadoop.hdfs.TestBlocksScheduledCounter |
|   | hadoop.hdfs.server.namenode.ha.TestFailureOfSharedDir |
|   | 
hadoop.hdfs.server.namenode.snapshot.TestINodeFileUnderConstructionWithSnapshot 
|
|   | hadoop.hdfs.server.namenode.TestEditLog |
|   | hadoop.hdfs.server.namenode.snapshot.TestSnapshotMetrics |
|   | hadoop.hdfs.server.namenode.snapshot.TestSnapshotBlocksMap |
|   | hadoop.hdfs.server.namenode.TestSnapshotPathINodes |
|   | hadoop.hdfs.server.namenode.snapshot.TestSnapshotRename |
|   | hadoop.hdfs.TestDFSOutputStream |
|   | hadoop.hdfs.tools.offlineImageViewer.TestOfflineImageViewerForAcl 

[jira] [Updated] (HDFS-8920) Erasure Coding: when recovering lost blocks, logs can be too verbose and hurt performance

2015-09-22 Thread Kai Zheng (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8920?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kai Zheng updated HDFS-8920:

  Resolution: Fixed
Hadoop Flags: Reviewed
  Status: Resolved  (was: Patch Available)

It was committed to the HDFS-7285 branch. Thanks Rui for the contribution, and 
Colin and Zhe for the suggestions!

> Erasure Coding: when recovering lost blocks, logs can be too verbose and hurt 
> performance
> -
>
> Key: HDFS-8920
> URL: https://issues.apache.org/jira/browse/HDFS-8920
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Rui Li
>Assignee: Rui Li
> Attachments: HDFS-8920-HDFS-7285.1.patch, HDFS-8920-HDFS-7285.2.patch
>
>
> When we test reading data with datanodes killed, 
> {{DFSInputStream::getBestNodeDNAddrPair}} becomes a hot spot method and 
> effectively blocks the client JVM. This log seems too verbose:
> {code}
> if (chosenNode == null) {
>   DFSClient.LOG.warn("No live nodes contain block " + block.getBlock() +
>   " after checking nodes = " + Arrays.toString(nodes) +
>   ", ignoredNodes = " + ignoredNodes);
>   return null;
> }
> {code}
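>
> One conventional remedy, sketched for illustration (the committed patch may
> differ): demote the message to debug and guard it, so the string is never
> built on the hot path.
> {code}
> if (chosenNode == null) {
>   if (DFSClient.LOG.isDebugEnabled()) {
>     DFSClient.LOG.debug("No live nodes contain block " + block.getBlock() +
>         " after checking nodes = " + Arrays.toString(nodes) +
>         ", ignoredNodes = " + ignoredNodes);
>   }
>   return null;
> }
> {code}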



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9111) Move hdfs-client protobuf convert methods from PBHelper to PBHelperClient

2015-09-22 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9111?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14902008#comment-14902008
 ] 

Hudson commented on HDFS-9111:
--

SUCCESS: Integrated in Hadoop-Yarn-trunk #1161 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/1161/])
HDFS-9111. Move hdfs-client protobuf convert methods from PBHelper to 
PBHelperClient. Contributed by Mingliang Liu. (wheat9: rev 
06022b8fdc40e50eaac63758246353058e8cfa6d)
* hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSEditLogOp.java
* hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/datatransfer/Receiver.java
* hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/ClientNamenodeProtocolTranslatorPB.java
* hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/snapshot/FSImageFormatPBSnapshot.java
* hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/NamenodeProtocolServerSideTranslatorPB.java
* hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/InterDatanodeProtocolServerSideTranslatorPB.java
* hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/HdfsLocatedFileStatus.java
* hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/qjournal/protocolPB/QJournalProtocolTranslatorPB.java
* hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSImageFormatPBINode.java
* hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/JournalProtocolTranslatorPB.java
* hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/CacheManager.java
* hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/ClientNamenodeProtocolServerSideTranslatorPB.java
* hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/ClientNamenodeProtocolPB.java
* hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/EncryptionZoneManager.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/DatanodeProtocolClientSideTranslatorPB.java
* hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocolPB/PBHelperClient.java
* hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/HdfsLocatedFileStatus.java
* hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirectory.java
* hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/DatanodeProtocolServerSideTranslatorPB.java
* hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/protocolPB/TestPBHelper.java
* hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirXAttrOp.java
* hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/ClientDatanodeProtocolServerSideTranslatorPB.java
* hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/PBHelper.java


> Move hdfs-client protobuf convert methods from PBHelper to PBHelperClient
> -
>
> Key: HDFS-9111
> URL: https://issues.apache.org/jira/browse/HDFS-9111
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Mingliang Liu
>Assignee: Mingliang Liu
> Fix For: 2.8.0
>
> Attachments: HDFS-9111.000.patch, HDFS-9111.001.patch, 
> HDFS-9111.002.patch
>
>
> *TL;DR* This jira tracks the effort of moving PB helper methods, which 
> convert client-side data structures to and from protobuf, to the 
> {{hadoop-hdfs-client}} module.
> Currently the {{PBHelper}} class contains helper methods converting both 
> client and server side data structures from/to protobuf. As we move client 
> (and common) classes to {{hadoop-hdfs-client}} module (see [HDFS-8053] and 
> [HDFS-9039]), we also need to move client module related PB converters to 
> client module.
> A good place may be a new class named {{PBHelperClient}}. After this, the 
> existing {{PBHelper}} class stays in {{hadoop-hdfs}} module with converters 
> for converting server side data structures.
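>
> For illustration, the shape of a converter that belongs in {{PBHelperClient}}
> (simplified from the kind of code being moved; not a verbatim excerpt):
> {code}
> public static ExtendedBlockProto convert(ExtendedBlock b) {
>   if (b == null) {
>     return null;
>   }
>   return ExtendedBlockProto.newBuilder()
>       .setPoolId(b.getBlockPoolId())
>       .setBlockId(b.getBlockId())
>       .setGenerationStamp(b.getGenerationStamp())
>       .setNumBytes(b.getNumBytes())
>       .build();
> }
> {code}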



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8920) Erasure Coding: when recovering lost blocks, logs can be too verbose and hurt performance

2015-09-22 Thread Kai Zheng (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8920?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kai Zheng updated HDFS-8920:

Fix Version/s: HDFS-7285

> Erasure Coding: when recovering lost blocks, logs can be too verbose and hurt 
> performance
> -
>
> Key: HDFS-8920
> URL: https://issues.apache.org/jira/browse/HDFS-8920
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Rui Li
>Assignee: Rui Li
> Fix For: HDFS-7285
>
> Attachments: HDFS-8920-HDFS-7285.1.patch, HDFS-8920-HDFS-7285.2.patch
>
>
> When we test reading data with datanodes killed, 
> {{DFSInputStream::getBestNodeDNAddrPair}} becomes a hot spot method and 
> effectively blocks the client JVM. This log seems too verbose:
> {code}
> if (chosenNode == null) {
>   DFSClient.LOG.warn("No live nodes contain block " + block.getBlock() +
>   " after checking nodes = " + Arrays.toString(nodes) +
>   ", ignoredNodes = " + ignoredNodes);
>   return null;
> }
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8920) Erasure Coding: when recovering lost blocks, logs can be too verbose and hurt performance

2015-09-22 Thread Zhe Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8920?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14902027#comment-14902027
 ] 

Zhe Zhang commented on HDFS-8920:
-

Thanks Rui for the work and Kai for the final review. Moving this back to the 
HDFS-7285 umbrella JIRA.

> Erasure Coding: when recovering lost blocks, logs can be too verbose and hurt 
> performance
> -
>
> Key: HDFS-8920
> URL: https://issues.apache.org/jira/browse/HDFS-8920
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Rui Li
>Assignee: Rui Li
> Fix For: HDFS-7285
>
> Attachments: HDFS-8920-HDFS-7285.1.patch, HDFS-8920-HDFS-7285.2.patch
>
>
> When we test reading data with datanodes killed, 
> {{DFSInputStream::getBestNodeDNAddrPair}} becomes a hot spot method and 
> effectively blocks the client JVM. This log seems too verbose:
> {code}
> if (chosenNode == null) {
>   DFSClient.LOG.warn("No live nodes contain block " + block.getBlock() +
>   " after checking nodes = " + Arrays.toString(nodes) +
>   ", ignoredNodes = " + ignoredNodes);
>   return null;
> }
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8780) Fetching live/dead datanode list with arg true for removeDecommissionNode, returns list with decom node.

2015-09-22 Thread Vinayakumar B (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8780?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinayakumar B updated HDFS-8780:

Priority: Major  (was: Critical)

> Fetching live/dead datanode list with arg true for 
> removeDecommissionNode, returns list with decom node.
> ---
>
> Key: HDFS-8780
> URL: https://issues.apache.org/jira/browse/HDFS-8780
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: J.Andreina
>Assignee: J.Andreina
> Attachments: HDFS-8780.1.patch, HDFS-8780.2.patch, HDFS-8780.3.patch
>
>
> Current implementation: 
> ==
> In DatanodeManager#removeDecomNodeFromList(), a decommissioned node will be 
> removed from the dead/live node list only if the conditions below are met:
>  I.  The include list is not empty. 
>  II. Neither the include nor the exclude list has the node, and the node 
> state is decommissioned. 
> {code}
> if (!hostFileManager.hasIncludes()) {
>   return;
> }
> if ((!hostFileManager.isIncluded(node)) && (!hostFileManager.isExcluded(node))
>     && node.isDecommissioned()) {
>   // Include list is not empty, an existing datanode does not appear
>   // in both include or exclude lists and it has been decommissioned.
>   // Remove it from the node list.
>   it.remove();
> }
> {code}
> As mentioned in the javadoc, a datanode cannot be in an "already decommissioned" 
> state here: following the steps mentioned in the javadoc, the datanode state is 
> "dead" and not decommissioned.
> *Can we avoid the unnecessary checks, and simply remove the node from the node 
> list when it is in the decommissioned state?*
> Please provide your feedback.
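>
> The suggested simplification would look like this (a sketch, not the final
> patch):
> {code}
> if (node.isDecommissioned()) {
>   // Remove a decommissioned node from the dead/live node list regardless
>   // of the include/exclude lists.
>   it.remove();
> }
> {code}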



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8780) Fetching live/dead datanode list with arg true for removeDecommissionNode, returns list with decom node.

2015-09-22 Thread Vinayakumar B (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8780?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinayakumar B updated HDFS-8780:

   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 2.8.0
   Status: Resolved  (was: Patch Available)

Committed to trunk and branch-2.

Thanks [~andreina] for the contribution.

> Fetching live/dead datanode list with arg true for 
> removeDecommissionNode, returns list with decom node.
> ---
>
> Key: HDFS-8780
> URL: https://issues.apache.org/jira/browse/HDFS-8780
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: J.Andreina
>Assignee: J.Andreina
> Fix For: 2.8.0
>
> Attachments: HDFS-8780.1.patch, HDFS-8780.2.patch, HDFS-8780.3.patch
>
>
> Current implementation: 
> ==
> In DatanodeManager#removeDecomNodeFromList(), a decommissioned node will be 
> removed from the dead/live node list only if the conditions below are met:
>  I.  The include list is not empty. 
>  II. Neither the include nor the exclude list has the node, and the node 
> state is decommissioned. 
> {code}
> if (!hostFileManager.hasIncludes()) {
>   return;
> }
> if ((!hostFileManager.isIncluded(node)) && (!hostFileManager.isExcluded(node))
>     && node.isDecommissioned()) {
>   // Include list is not empty, an existing datanode does not appear
>   // in both include or exclude lists and it has been decommissioned.
>   // Remove it from the node list.
>   it.remove();
> }
> {code}
> As mentioned in the javadoc, a datanode cannot be in an "already decommissioned" 
> state here: following the steps mentioned in the javadoc, the datanode state is 
> "dead" and not decommissioned.
> *Can we avoid the unnecessary checks, and simply remove the node from the node 
> list when it is in the decommissioned state?*
> Please provide your feedback.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9013) Deprecate NameNodeMXBean#getNNStarted in branch2 and remove from trunk

2015-09-22 Thread Surendra Singh Lilhore (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9013?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Surendra Singh Lilhore updated HDFS-9013:
-
Status: Open  (was: Patch Available)

> Deprecate NameNodeMXBean#getNNStarted in branch2 and remove from trunk
> --
>
> Key: HDFS-9013
> URL: https://issues.apache.org/jira/browse/HDFS-9013
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.7.1
>Reporter: Surendra Singh Lilhore
>Assignee: Surendra Singh Lilhore
> Attachments: HDFS-9013.001-branch-2.patch, HDFS-9013.001.patch, 
> HDFS-9013.002-branch-2.patch
>
>
> HDFS-8388 added one new metric {{NNStartedTimeInMillis}} to get NN start time 
> in milliseconds.
> Now, based on [~wheat9]'s and [~ajisakaa]'s suggestions, we should deprecate 
> {{NameNodeMXBean#getNNStarted}} in branch2 and remove it from trunk.
> https://issues.apache.org/jira/browse/HDFS-8388?focusedCommentId=14709614&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14709614
> https://issues.apache.org/jira/browse/HDFS-8388?focusedCommentId=14726746&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14726746



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9111) Move hdfs-client protobuf convert methods from PBHelper to PBHelperClient

2015-09-22 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9111?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14902013#comment-14902013
 ] 

Hudson commented on HDFS-9111:
--

SUCCESS: Integrated in Hadoop-Yarn-trunk-Java8 #429 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk-Java8/429/])
HDFS-9111. Move hdfs-client protobuf convert methods from PBHelper to 
PBHelperClient. Contributed by Mingliang Liu. (wheat9: rev 
06022b8fdc40e50eaac63758246353058e8cfa6d)
* hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/ClientDatanodeProtocolServerSideTranslatorPB.java
* hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSEditLogOp.java
* hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocolPB/PBHelperClient.java
* hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/HdfsLocatedFileStatus.java
* hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/CacheManager.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/ClientNamenodeProtocolServerSideTranslatorPB.java
* hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/DatanodeProtocolClientSideTranslatorPB.java
* hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/DatanodeProtocolServerSideTranslatorPB.java
* hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/HdfsLocatedFileStatus.java
* hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/qjournal/protocolPB/QJournalProtocolTranslatorPB.java
* hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/datatransfer/Receiver.java
* hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/ClientNamenodeProtocolTranslatorPB.java
* hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/InterDatanodeProtocolServerSideTranslatorPB.java
* hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/PBHelper.java
* hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/ClientNamenodeProtocolPB.java
* hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSImageFormatPBINode.java
* hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/protocolPB/TestPBHelper.java
* hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/snapshot/FSImageFormatPBSnapshot.java
* hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirectory.java
* hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirXAttrOp.java
* hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/JournalProtocolTranslatorPB.java
* hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/EncryptionZoneManager.java
* hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/NamenodeProtocolServerSideTranslatorPB.java


> Move hdfs-client protobuf convert methods from PBHelper to PBHelperClient
> -
>
> Key: HDFS-9111
> URL: https://issues.apache.org/jira/browse/HDFS-9111
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Mingliang Liu
>Assignee: Mingliang Liu
> Fix For: 2.8.0
>
> Attachments: HDFS-9111.000.patch, HDFS-9111.001.patch, 
> HDFS-9111.002.patch
>
>
> *TL;DR* This jira tracks the effort of moving PB helper methods, which 
> convert client-side data structures to and from protobuf, to the 
> {{hadoop-hdfs-client}} module.
> Currently the {{PBHelper}} class contains helper methods converting both 
> client and server side data structures from/to protobuf. As we move client 
> (and common) classes to {{hadoop-hdfs-client}} module (see [HDFS-8053] and 
> [HDFS-9039]), we also need to move client module related PB converters to 
> client module.
> A good place may be a new class named {{PBHelperClient}}. After this, the 
> existing {{PBHelper}} class stays in {{hadoop-hdfs}} module with converters 
> for converting server side data structures.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9013) Deprecate NameNodeMXBean#getNNStarted in branch2 and remove from trunk

2015-09-22 Thread Surendra Singh Lilhore (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9013?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Surendra Singh Lilhore updated HDFS-9013:
-
Attachment: (was: HDFS-9013.003-branch-2.patch)

> Deprecate NameNodeMXBean#getNNStarted in branch2 and remove from trunk
> --
>
> Key: HDFS-9013
> URL: https://issues.apache.org/jira/browse/HDFS-9013
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.7.1
>Reporter: Surendra Singh Lilhore
>Assignee: Surendra Singh Lilhore
> Attachments: HDFS-9013.001-branch-2.patch, HDFS-9013.001.patch, 
> HDFS-9013.002-branch-2.patch
>
>
> HDFS-8388 added one new metric {{NNStartedTimeInMillis}} to get NN start time 
> in milliseconds.
> Now, based on [~wheat9]'s and [~ajisakaa]'s suggestions, we should deprecate 
> {{NameNodeMXBean#getNNStarted}} in branch2 and remove it from trunk.
> https://issues.apache.org/jira/browse/HDFS-8388?focusedCommentId=14709614&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14709614
> https://issues.apache.org/jira/browse/HDFS-8388?focusedCommentId=14726746&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14726746



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8920) Erasure Coding: when recovering lost blocks, logs can be too verbose and hurt performance

2015-09-22 Thread Zhe Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8920?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhe Zhang updated HDFS-8920:

Parent Issue: HDFS-7285  (was: HDFS-8031)

> Erasure Coding: when recovering lost blocks, logs can be too verbose and hurt 
> performance
> -
>
> Key: HDFS-8920
> URL: https://issues.apache.org/jira/browse/HDFS-8920
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Rui Li
>Assignee: Rui Li
> Fix For: HDFS-7285
>
> Attachments: HDFS-8920-HDFS-7285.1.patch, HDFS-8920-HDFS-7285.2.patch
>
>
> When we test reading data with datanodes killed, 
> {{DFSInputStream::getBestNodeDNAddrPair}} becomes a hot spot method and 
> effectively blocks the client JVM. This log seems too verbose:
> {code}
> if (chosenNode == null) {
>   DFSClient.LOG.warn("No live nodes contain block " + block.getBlock() +
>   " after checking nodes = " + Arrays.toString(nodes) +
>   ", ignoredNodes = " + ignoredNodes);
>   return null;
> }
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8920) Erasure Coding: when recovering lost blocks, logs can be too verbose and hurt performance

2015-09-22 Thread Rui Li (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8920?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14902070#comment-14902070
 ] 

Rui Li commented on HDFS-8920:
--

Thanks guys for the review.

> Erasure Coding: when recovering lost blocks, logs can be too verbose and hurt 
> performance
> -
>
> Key: HDFS-8920
> URL: https://issues.apache.org/jira/browse/HDFS-8920
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Rui Li
>Assignee: Rui Li
> Fix For: HDFS-7285
>
> Attachments: HDFS-8920-HDFS-7285.1.patch, HDFS-8920-HDFS-7285.2.patch
>
>
> When we test reading data with datanodes killed, 
> {{DFSInputStream::getBestNodeDNAddrPair}} becomes a hot spot method and 
> effectively blocks the client JVM. This log seems too verbose:
> {code}
> if (chosenNode == null) {
>   DFSClient.LOG.warn("No live nodes contain block " + block.getBlock() +
>   " after checking nodes = " + Arrays.toString(nodes) +
>   ", ignoredNodes = " + ignoredNodes);
>   return null;
> }
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HDFS-9123) Validation of a path ending with a '/'

2015-09-22 Thread Wei-Chiu Chuang (JIRA)
Wei-Chiu Chuang created HDFS-9123:
-

 Summary: Validation of a path ending with a '/'
 Key: HDFS-9123
 URL: https://issues.apache.org/jira/browse/HDFS-9123
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: fs
Reporter: Wei-Chiu Chuang
Assignee: Wei-Chiu Chuang
Priority: Minor


HDFS forbids copying from a directory to its own subdirectory (e.g. hdfs dfs -cp 
/abc /abc/xyz), as otherwise it could cause infinite copying (/abc/xyz/xyz, 
/abc/xyz/xyz/xyz, ... etc.).

However, if the source path ends with a '/' path separator, the existing 
validation for subdirectories fails. For example, copying from / to /abc would 
cause infinite copying, until the disk space is filled up.
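
A sketch of the validation idea (isAncestorPath is a hypothetical helper, not 
the actual patch): normalize away the trailing separator before the ancestor 
check, so that copying "/" into "/abc" is rejected like any other 
parent-into-child copy.

{code}
static boolean isAncestorPath(String src, String dst) {
  // Strip a trailing '/' so that "/" and "/abc/" compare like "/abc".
  String s = src.endsWith("/") ? src.substring(0, src.length() - 1) : src;
  return dst.equals(s) || dst.startsWith(s + "/");
}
{code}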



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9057) allow/disallow snapshots via webhdfs

2015-09-22 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9057?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14902975#comment-14902975
 ] 

Hadoop QA commented on HDFS-9057:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | pre-patch |  19m 39s | Pre-patch trunk compilation is 
healthy. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:green}+1{color} | tests included |   0m  0s | The patch appears to 
include 1 new or modified test files. |
| {color:green}+1{color} | javac |   7m 53s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |  10m  9s | There were no new javadoc 
warning messages. |
| {color:green}+1{color} | release audit |   0m 23s | The applied patch does 
not increase the total number of release audit warnings. |
| {color:green}+1{color} | checkstyle |   2m 39s | There were no new checkstyle 
issues. |
| {color:green}+1{color} | whitespace |   0m  0s | The patch has no lines that 
end in whitespace. |
| {color:green}+1{color} | install |   1m 35s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 33s | The patch built with 
eclipse:eclipse. |
| {color:green}+1{color} | findbugs |   4m 25s | The patch does not introduce 
any new Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | native |   3m 15s | Pre-build of native portion |
| {color:red}-1{color} | hdfs tests | 167m 24s | Tests failed in hadoop-hdfs. |
| {color:green}+1{color} | hdfs tests |   0m 30s | Tests passed in 
hadoop-hdfs-client. |
| | | 218m 28s | |
\\
\\
|| Reason || Tests ||
| Failed unit tests | hadoop.hdfs.TestLeaseRecovery2 |
|   | hadoop.hdfs.web.TestWebHDFSOAuth2 |
|   | hadoop.cli.TestHDFSCLI |
| Timed out tests | 
org.apache.hadoop.hdfs.server.balancer.TestBalancerWithMultipleNameNodes |
|   | org.apache.hadoop.hdfs.TestRollingUpgrade |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12761635/HDFS-9057-002.patch |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / 57003fa |
| hadoop-hdfs test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12592/artifact/patchprocess/testrun_hadoop-hdfs.txt
 |
| hadoop-hdfs-client test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12592/artifact/patchprocess/testrun_hadoop-hdfs-client.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12592/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf906.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12592/console |


This message was automatically generated.

> allow/disallow snapshots via webhdfs
> 
>
> Key: HDFS-9057
> URL: https://issues.apache.org/jira/browse/HDFS-9057
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: webhdfs
>Affects Versions: 3.0.0
>Reporter: Allen Wittenauer
>Assignee: Brahma Reddy Battula
> Attachments: HDFS-9057-002.patch, HDFS-9057.patch
>
>
> We should be able to allow and disallow directories for snapshotting via 
> WebHDFS.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9095) RPC client should fail gracefully when the connection is timed out or reset

2015-09-22 Thread Haohui Mai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9095?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haohui Mai updated HDFS-9095:
-
Attachment: HDFS-9095.001.patch

> RPC client should fail gracefully when the connection is timed out or reset
> ---
>
> Key: HDFS-9095
> URL: https://issues.apache.org/jira/browse/HDFS-9095
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: Haohui Mai
>Assignee: Haohui Mai
> Attachments: HDFS-9095.000.patch, HDFS-9095.001.patch
>
>
> The RPC client should fail gracefully when the connection is timed out or 
> reset, instead of bailing out.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7955) Improve naming of classes, methods, and variables related to block replication and recovery

2015-09-22 Thread Zhe Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7955?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14903030#comment-14903030
 ] 

Zhe Zhang commented on HDFS-7955:
-

[~rakeshr] I think Andrew's suggestion above is consistent with your proposal 
("recovery" => "reconstruction"), and I recommend we do the renaming together 
under this JIRA. 

> Improve naming of classes, methods, and variables related to block 
> replication and recovery
> ---
>
> Key: HDFS-7955
> URL: https://issues.apache.org/jira/browse/HDFS-7955
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Zhe Zhang
>Assignee: Rakesh R
> Attachments: HDFS-7955-001.patch
>
>
> Many existing names should be revised to avoid confusion when blocks can be 
> both replicated and erasure coded. This JIRA aims to solicit opinions on 
> making those names more consistent and intuitive.
> # In current HDFS _block recovery_ refers to the process of finalizing the 
> last block of a file, triggered by _lease recovery_. It is different from the 
> intuitive meaning of _recovering a lost block_. To avoid confusion, I can 
> think of 2 options:
> #* Rename this process as _block finalization_ or _block completion_. I 
> prefer this option because this is literally not a recovery.
> #* If we want to keep existing terms unchanged, we can name all EC recovery 
> and re-replication logic as _reconstruction_.  
> # As Kai [suggested | 
> https://issues.apache.org/jira/browse/HDFS-7369?focusedCommentId=14361131=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14361131]
>  under HDFS-7369, several replication-based names should be made more generic:
> #* {{UnderReplicatedBlocks}} and {{neededReplications}}. E.g. we can use 
> {{LowRedundancyBlocks}}/{{AtRiskBlocks}}, and 
> {{neededRecovery}}/{{neededReconstruction}}.
> #* {{PendingReplicationBlocks}}
> #* {{ReplicationMonitor}}
> I'm sure the above list is incomplete; discussions and comments are very 
> welcome.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9124) NullPointerException when underreplicated blocks are there

2015-09-22 Thread Syed Akram (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9124?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14903060#comment-14903060
 ] 

Syed Akram commented on HDFS-9124:
--

This issue occurred when under-replicated blocks were being transferred from 
the source to the destination datanode and the destination datanode was stopped 
while the block was being replicated.

> NullPointerException when underreplicated blocks are there
> --
>
> Key: HDFS-9124
> URL: https://issues.apache.org/jira/browse/HDFS-9124
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.7.1
>Reporter: Syed Akram
>Assignee: Syed Akram
>
> 2015-09-22 09:48:47,830 ERROR 
> org.apache.hadoop.hdfs.server.datanode.DataNode: dn1:50010:DataXceiver error 
> processing WRITE_BLOCK operation  src: /dn1:42973 dst: /dn2:50010
> java.lang.NullPointerException
> at 
> org.apache.hadoop.hdfs.server.datanode.BlockReceiver.(BlockReceiver.java:186)
> at 
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:677)
> at 
> org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:137)
> at 
> org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:74)
> at 
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:251)
> at java.lang.Thread.run(Thread.java:744)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9103) Retry reads on DN failure

2015-09-22 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9103?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14903050#comment-14903050
 ] 

Hadoop QA commented on HDFS-9103:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | pre-patch |   5m 17s | Pre-patch HDFS-8707 compilation 
is healthy. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:red}-1{color} | tests included |   0m  0s | The patch doesn't appear 
to include any new or modified tests.  Please justify why no new tests are 
needed for this patch. Also please list what manual steps were performed to 
verify this patch. |
| {color:red}-1{color} | javac |   1m 21s | The patch appears to cause the 
build to fail. |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12761680/HDFS-9103.HDFS-8707.5.patch
 |
| Optional Tests | javac unit |
| git revision | HDFS-8707 / 5d912ea |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12602/console |


This message was automatically generated.

> Retry reads on DN failure
> -
>
> Key: HDFS-9103
> URL: https://issues.apache.org/jira/browse/HDFS-9103
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: Bob Hansen
>Assignee: Bob Hansen
> Fix For: HDFS-8707
>
> Attachments: HDFS-9103.1.patch, HDFS-9103.2.patch, 
> HDFS-9103.HDFS-8707.3.patch, HDFS-9103.HDFS-8707.4.patch, 
> HDFS-9103.HDFS-8707.5.patch
>
>
> When AsyncPreadSome fails, add the failed DataNode to the excluded list and 
> try again.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9109) dfs.datanode.dns.interface does not work with hosts file based setups

2015-09-22 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9109?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HDFS-9109:

Attachment: HDFS-9109.03.patch

v03 patch attached. It is hard to add a unit test for this fix without 
refactoring the DNS class significantly, since the dependencies on system DNS 
and IP address lookup are spread all over the class and cannot be stubbed out 
for testing.

I started the refactor but that makes the patch larger (and riskier). I'd 
rather keep this change minimal, and hopefully we can defer adding the unit 
test.

> dfs.datanode.dns.interface does not work with hosts file based setups
> -
>
> Key: HDFS-9109
> URL: https://issues.apache.org/jira/browse/HDFS-9109
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
> Attachments: HDFS-9109.01.patch, HDFS-9109.02.patch, 
> HDFS-9109.03.patch
>
>
> The configuration setting {{dfs.datanode.dns.interface}} lets the DataNode 
> select its hostname by doing a reverse lookup of IP addresses on the specific 
> network interface. This does not work when {{/etc/hosts}} is used to set up 
> alternate hostnames, since {{DNS#reverseDns}} only queries the DNS servers.
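
A minimal sketch of the general approach (not the attached patch): enumerate 
the addresses of the configured interface and resolve each one through 
{{InetAddress#getCanonicalHostName}}, which goes through the platform resolver 
and therefore honors {{/etc/hosts}}, unlike a direct reverse DNS query:

{code}
import java.net.InetAddress;
import java.net.NetworkInterface;
import java.net.SocketException;
import java.util.ArrayList;
import java.util.Enumeration;
import java.util.List;

public class InterfaceHostnames {
  // Candidate hostnames for the given interface name, e.g. "eth0".
  static List<String> hostnamesFor(String ifaceName) throws SocketException {
    List<String> names = new ArrayList<String>();
    NetworkInterface iface = NetworkInterface.getByName(ifaceName);
    if (iface == null) {
      return names; // no such interface on this host
    }
    Enumeration<InetAddress> addrs = iface.getInetAddresses();
    while (addrs.hasMoreElements()) {
      // getCanonicalHostName() uses the system resolver, so /etc/hosts
      // entries are honored, unlike a raw reverse DNS lookup.
      names.add(addrs.nextElement().getCanonicalHostName());
    }
    return names;
  }
}
{code}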



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9110) Improve upon HDFS-8480

2015-09-22 Thread Charlie Helin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9110?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Charlie Helin updated HDFS-9110:

Status: Open  (was: Patch Available)

> Improve upon HDFS-8480
> --
>
> Key: HDFS-9110
> URL: https://issues.apache.org/jira/browse/HDFS-9110
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 2.7.0
>Reporter: Charlie Helin
>Assignee: Charlie Helin
>Priority: Minor
> Fix For: 2.6.1, 2.7.0
>
> Attachments: HDFS-9110.00.patch, HDFS-9110.01.patch, 
> HDFS-9110.02.patch, HDFS-9110.03.patch, HDFS-9110.04.patch, HDFS-9110.05.patch
>
>
> This is a request to do some cosmetic improvements on top of HDFS-8480. There 
> are a couple of File -> java.nio.file.Path conversions which are a little bit 
> distracting. 
> The second aspect is more around efficiency; to be perfectly honest, I'm not 
> sure how many files may be processed. However, as HDFS-8480 alludes to, it 
> appears that this number could be significantly large. 
> The current implementation is basically collect-and-process, where all files 
> are first examined, put into a collection, and after that processed. 
> HDFS-8480 could simply be further enhanced by employing a single iteration, 
> without creating an intermediary collection of filenames, by using a 
> FileWalker.
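
As a rough illustration of the single-pass idea above (not the attached 
patch), {{java.nio.file.Files#walkFileTree}} visits each file as it is 
encountered, so no intermediary collection is needed; the {{processFile}} 
handler below is hypothetical:

{code}
import java.io.IOException;
import java.nio.file.FileVisitResult;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.SimpleFileVisitor;
import java.nio.file.attribute.BasicFileAttributes;

public class SinglePassScan {
  public static void main(String[] args) throws IOException {
    Path root = Paths.get(args[0]);
    // Visit and process each file in one pass instead of first collecting
    // every filename into a list and iterating over it afterwards.
    Files.walkFileTree(root, new SimpleFileVisitor<Path>() {
      @Override
      public FileVisitResult visitFile(Path file, BasicFileAttributes attrs) {
        processFile(file); // hypothetical per-file handler
        return FileVisitResult.CONTINUE;
      }
    });
  }

  static void processFile(Path file) {
    System.out.println(file); // placeholder for the real processing
  }
}
{code}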



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9110) Improve upon HDFS-8480

2015-09-22 Thread Charlie Helin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9110?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Charlie Helin updated HDFS-9110:

Status: Patch Available  (was: Open)

> Improve upon HDFS-8480
> --
>
> Key: HDFS-9110
> URL: https://issues.apache.org/jira/browse/HDFS-9110
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 2.7.0
>Reporter: Charlie Helin
>Assignee: Charlie Helin
>Priority: Minor
> Fix For: 2.6.1, 2.7.0
>
> Attachments: HDFS-9110.00.patch, HDFS-9110.01.patch, 
> HDFS-9110.02.patch, HDFS-9110.03.patch, HDFS-9110.04.patch, HDFS-9110.05.patch
>
>
> This is a request to do some cosmetic improvements on top of HDFS-8480. There 
> are a couple of File -> java.nio.file.Path conversions which are a little bit 
> distracting. 
> The second aspect is more around efficiency; to be perfectly honest, I'm not 
> sure how many files may be processed. However, as HDFS-8480 alludes to, it 
> appears that this number could be significantly large. 
> The current implementation is basically collect-and-process, where all files 
> are first examined, put into a collection, and after that processed. 
> HDFS-8480 could simply be further enhanced by employing a single iteration, 
> without creating an intermediary collection of filenames, by using a 
> FileWalker.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9110) Improve upon HDFS-8480

2015-09-22 Thread Charlie Helin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9110?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Charlie Helin updated HDFS-9110:

Attachment: HDFS-9110.05.patch

Removed unused imports


> Improve upon HDFS-8480
> --
>
> Key: HDFS-9110
> URL: https://issues.apache.org/jira/browse/HDFS-9110
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 2.7.0
>Reporter: Charlie Helin
>Assignee: Charlie Helin
>Priority: Minor
> Fix For: 2.7.0, 2.6.1
>
> Attachments: HDFS-9110.00.patch, HDFS-9110.01.patch, 
> HDFS-9110.02.patch, HDFS-9110.03.patch, HDFS-9110.04.patch, HDFS-9110.05.patch
>
>
> This is a request to do some cosmetic improvements on top of HDFS-8480. There 
> are a couple of File -> java.nio.file.Path conversions which are a little bit 
> distracting. 
> The second aspect is more around efficiency; to be perfectly honest, I'm not 
> sure how many files may be processed. However, as HDFS-8480 alludes to, it 
> appears that this number could be significantly large. 
> The current implementation is basically collect-and-process, where all files 
> are first examined, put into a collection, and after that processed. 
> HDFS-8480 could simply be further enhanced by employing a single iteration, 
> without creating an intermediary collection of filenames, by using a 
> FileWalker.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9095) RPC client should fail gracefully when the connection is timed out or reset

2015-09-22 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9095?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14903152#comment-14903152
 ] 

Hadoop QA commented on HDFS-9095:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:red}-1{color} | patch |   0m  0s | The patch command could not apply 
the patch during dryrun. |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12761698/HDFS-9095.001.patch |
| Optional Tests | javadoc javac unit |
| git revision | trunk / cc2b473 |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12605/console |


This message was automatically generated.

> RPC client should fail gracefully when the connection is timed out or reset
> ---
>
> Key: HDFS-9095
> URL: https://issues.apache.org/jira/browse/HDFS-9095
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: Haohui Mai
>Assignee: Haohui Mai
> Attachments: HDFS-9095.000.patch, HDFS-9095.001.patch
>
>
> The RPC client should fail gracefully when the connection is timed out or 
> reset, instead of bailing out. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8873) throttle directoryScanner

2015-09-22 Thread Daniel Templeton (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8873?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daniel Templeton updated HDFS-8873:
---
Attachment: HDFS-8873.005.patch

Here's a new patch with greatly reduced scope.  [~cmccabe] and [~nroberts], 
please have a look.

> throttle directoryScanner
> -
>
> Key: HDFS-8873
> URL: https://issues.apache.org/jira/browse/HDFS-8873
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Affects Versions: 2.7.1
>Reporter: Nathan Roberts
>Assignee: Daniel Templeton
> Attachments: HDFS-8873.001.patch, HDFS-8873.002.patch, 
> HDFS-8873.003.patch, HDFS-8873.004.patch, HDFS-8873.005.patch
>
>
> The new 2-level directory layout can make directory scans expensive in terms 
> of disk seeks (see HDFS-8791) for details. 
> It would be good if the directoryScanner() had a configurable duty cycle that 
> would reduce its impact on disk performance (much like the approach in 
> HDFS-8617). 
> Without such a throttle, disks can go 100% busy for many minutes at a time 
> (assuming the common case of all inodes in cache but no directory blocks 
> cached, 64K seeks are required for full directory listing which translates to 
> 655 seconds) 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Moved] (HDFS-9124) NullPointerException when underreplicated blocks are there

2015-09-22 Thread Ravi Prakash (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9124?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ravi Prakash moved HADOOP-12429 to HDFS-9124:
-

Affects Version/s: (was: 2.7.1)
   2.7.1
  Key: HDFS-9124  (was: HADOOP-12429)
  Project: Hadoop HDFS  (was: Hadoop Common)

> NullPointerException when underreplicated blocks are there
> --
>
> Key: HDFS-9124
> URL: https://issues.apache.org/jira/browse/HDFS-9124
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.7.1
>Reporter: Syed Akram
>Assignee: Syed Akram
>
> 2015-09-22 09:48:47,830 ERROR 
> org.apache.hadoop.hdfs.server.datanode.DataNode: dn1:50010:DataXceiver error 
> processing WRITE_BLOCK operation  src: /dn1:42973 dst: /dn2:50010
> java.lang.NullPointerException
> at 
> org.apache.hadoop.hdfs.server.datanode.BlockReceiver.(BlockReceiver.java:186)
> at 
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:677)
> at 
> org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:137)
> at 
> org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:74)
> at 
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:251)
> at java.lang.Thread.run(Thread.java:744)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9124) NullPointerException when underreplicated blocks are there

2015-09-22 Thread Ravi Prakash (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9124?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14903023#comment-14903023
 ] 

Ravi Prakash commented on HDFS-9124:


Thanks for reporting the issue, Syed! Could you please detail when you noticed 
this issue? Perhaps steps to reproduce it (if you have them)? This line in 
BlockReceiver.java
{noformat}
185  if (isDatanode) { //replication or move
186replicaHandler = datanode.data.createTemporary(storageType, block);
187  } else {
{noformat}
 was recently modified in https://issues.apache.org/jira/browse/HDFS-7496 . 
[~eddyxu] Do you have any ideas?

> NullPointerException when underreplicated blocks are there
> --
>
> Key: HDFS-9124
> URL: https://issues.apache.org/jira/browse/HDFS-9124
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.7.1
>Reporter: Syed Akram
>Assignee: Syed Akram
>
> 2015-09-22 09:48:47,830 ERROR 
> org.apache.hadoop.hdfs.server.datanode.DataNode: dn1:50010:DataXceiver error 
> processing WRITE_BLOCK operation  src: /dn1:42973 dst: /dn2:50010
> java.lang.NullPointerException
> at 
> org.apache.hadoop.hdfs.server.datanode.BlockReceiver.(BlockReceiver.java:186)
> at 
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:677)
> at 
> org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:137)
> at 
> org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:74)
> at 
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:251)
> at java.lang.Thread.run(Thread.java:744)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9110) Improve upon HDFS-8480

2015-09-22 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9110?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14903033#comment-14903033
 ] 

Hadoop QA commented on HDFS-9110:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | pre-patch |  18m  7s | Pre-patch trunk compilation is 
healthy. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:red}-1{color} | tests included |   0m  0s | The patch doesn't appear 
to include any new or modified tests.  Please justify why no new tests are 
needed for this patch. Also please list what manual steps were performed to 
verify this patch. |
| {color:green}+1{color} | javac |   8m  6s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |  10m 19s | There were no new javadoc 
warning messages. |
| {color:green}+1{color} | release audit |   0m 24s | The applied patch does 
not increase the total number of release audit warnings. |
| {color:red}-1{color} | checkstyle |   1m 22s | The applied patch generated  3 
new checkstyle issues (total was 2, now 4). |
| {color:green}+1{color} | whitespace |   0m  0s | The patch has no lines that 
end in whitespace. |
| {color:green}+1{color} | install |   1m 32s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 34s | The patch built with 
eclipse:eclipse. |
| {color:green}+1{color} | findbugs |   2m 25s | The patch does not introduce 
any new Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | native |   3m 10s | Pre-build of native portion |
| {color:red}-1{color} | hdfs tests | 163m 10s | Tests failed in hadoop-hdfs. |
| | | 209m 12s | |
\\
\\
|| Reason || Tests ||
| Failed unit tests | hadoop.hdfs.TestReplaceDatanodeOnFailure |
|   | hadoop.hdfs.web.TestWebHDFSOAuth2 |
|   | hadoop.hdfs.server.namenode.ha.TestDNFencing |
| Timed out tests | 
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyWriter |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12761650/HDFS-9110.04.patch |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / 57003fa |
| checkstyle |  
https://builds.apache.org/job/PreCommit-HDFS-Build/12596/artifact/patchprocess/diffcheckstylehadoop-hdfs.txt
 |
| hadoop-hdfs test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12596/artifact/patchprocess/testrun_hadoop-hdfs.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12596/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf900.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12596/console |


This message was automatically generated.

> Improve upon HDFS-8480
> --
>
> Key: HDFS-9110
> URL: https://issues.apache.org/jira/browse/HDFS-9110
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 2.7.0
>Reporter: Charlie Helin
>Assignee: Charlie Helin
>Priority: Minor
> Fix For: 2.7.0, 2.6.1
>
> Attachments: HDFS-9110.00.patch, HDFS-9110.01.patch, 
> HDFS-9110.02.patch, HDFS-9110.03.patch, HDFS-9110.04.patch
>
>
> This is a request to do some cosmetic improvements on top of HDFS-8480. There 
> are a couple of File -> java.nio.file.Path conversions which are a little bit 
> distracting. 
> The second aspect is more around efficiency; to be perfectly honest, I'm not 
> sure how many files may be processed. However, as HDFS-8480 alludes to, it 
> appears that this number could be significantly large. 
> The current implementation is basically collect-and-process, where all files 
> are first examined, put into a collection, and after that processed. 
> HDFS-8480 could simply be further enhanced by employing a single iteration, 
> without creating an intermediary collection of filenames, by using a 
> FileWalker.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8696) Reduce the variances of latency of WebHDFS

2015-09-22 Thread Bob Hansen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8696?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bob Hansen updated HDFS-8696:
-
Attachment: HDFS-8696.005.patch

That seems a good, low-risk approach to the issue.

I took the liberty of trimming the patch to just those changes.

> Reduce the variances of latency of WebHDFS
> --
>
> Key: HDFS-8696
> URL: https://issues.apache.org/jira/browse/HDFS-8696
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: webhdfs
>Affects Versions: 2.7.0
>Reporter: Xiaobing Zhou
>Assignee: Xiaobing Zhou
> Attachments: HDFS-8696.004.patch, HDFS-8696.005.patch, 
> HDFS-8696.1.patch, HDFS-8696.2.patch, HDFS-8696.3.patch
>
>
> There is an issue that appears related to the webhdfs server. When making two 
> concurrent requests, the DN will sometimes pause for extended periods (I've 
> seen 1-300 seconds), killing performance and dropping connections. 
> To reproduce: 
> 1. set up a HDFS cluster
> 2. Upload a large file (I was using 10GB). Perform 1-byte reads, writing
> the time out to /tmp/times.txt
> {noformat}
> i=1
> while (true); do 
> echo $i
> let i++
> /usr/bin/time -f %e -o /tmp/times.txt -a curl -s -L -o /dev/null 
> "http://:50070/webhdfs/v1/tmp/bigfile?op=OPEN=root=1";
> done
> {noformat}
> 3. Watch for 1-byte requests that take more than one second:
> tail -F /tmp/times.txt | grep -E "^[^0]"
> 4. After it has had a chance to warm up, start doing large transfers from
> another shell:
> {noformat}
> i=1
> while (true); do 
> echo $i
> let i++
> /usr/bin/time -f %e curl -s -L -o /dev/null 
> "http://:50070/webhdfs/v1/tmp/bigfile?op=OPEN=root";
> done
> {noformat}
> It's easy to find after a minute or two that small reads will sometimes
> pause for 1-300 seconds. In some extreme cases, it appears that the
> transfers timeout and the DN drops the connection.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9107) Prevent NN's unrecoverable death spiral after full GC

2015-09-22 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9107?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14903048#comment-14903048
 ] 

Colin Patrick McCabe commented on HDFS-9107:


I guess if we want to be 100% correct, we have to do the stopwatch check right 
after getting back a "true" result from {{DatanodeManager#isDatanodeDead}}, 
right?  Otherwise we could always have a TOCTOU where we have a long GC pause 
right before calling that function.  What do you think?
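
A minimal sketch of that ordering, with a hypothetical monotonic stopwatch; 
the idea is that a "dead" verdict is only trusted if no long pause has elapsed 
since the previous check:

{code}
import java.util.concurrent.TimeUnit;

public class DeadNodeCheckSketch {
  private static final long MAX_PAUSE_NANOS = TimeUnit.SECONDS.toNanos(10);
  private long lastCheckNanos = System.nanoTime();

  // Stand-in for the real heartbeat monitor loop; NodeLike is hypothetical.
  boolean confirmDead(NodeLike node) {
    boolean dead = node.isDatanodeDead();
    // Read the stopwatch only AFTER the liveness call returns, so a GC
    // pause immediately before the call cannot slip through (the TOCTOU
    // concern above).
    long now = System.nanoTime();
    boolean pausedTooLong = now - lastCheckNanos > MAX_PAUSE_NANOS;
    lastCheckNanos = now;
    // If we were paused, heartbeat staleness is meaningless: the node may
    // have heartbeated while this process was stopped. Skip this cycle.
    return dead && !pausedTooLong;
  }

  interface NodeLike {
    boolean isDatanodeDead();
  }
}
{code}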

> Prevent NN's unrecoverable death spiral after full GC
> -
>
> Key: HDFS-9107
> URL: https://issues.apache.org/jira/browse/HDFS-9107
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.0.0-alpha
>Reporter: Daryn Sharp
>Assignee: Daryn Sharp
>Priority: Critical
> Attachments: HDFS-9107.patch, HDFS-9107.patch
>
>
> A full GC pause in the NN that exceeds the dead node interval can lead to an 
> infinite cycle of full GCs.  The most common situation that precipitates an 
> unrecoverable state is a network issue that temporarily cuts off multiple 
> racks.
> The NN wakes up and falsely starts marking nodes dead. This bloats the 
> replication queues which increases memory pressure. The replications create a 
> flurry of incremental block reports and a glut of over-replicated blocks.
> The "dead" nodes heartbeat within seconds. The NN forces a re-registration 
> which requires a full block report - more memory pressure. The NN now has to 
> invalidate all the over-replicated blocks. The extra blocks are added to 
> invalidation queues, tracked in an excess blocks map, etc - much more memory 
> pressure.
> All the memory pressure can push the NN into another full GC which repeats 
> the entire cycle.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9109) dfs.datanode.dns.interface does not work with hosts file based setups

2015-09-22 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9109?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HDFS-9109:

Attachment: (was: HDFS-9109.03.patch)

> dfs.datanode.dns.interface does not work with hosts file based setups
> -
>
> Key: HDFS-9109
> URL: https://issues.apache.org/jira/browse/HDFS-9109
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
> Attachments: HDFS-9109.01.patch, HDFS-9109.02.patch, 
> HDFS-9109.03.patch
>
>
> The configuration setting {{dfs.datanode.dns.interface}} lets the DataNode 
> select its hostname by doing a reverse lookup of IP addresses on the specific 
> network interface. This does not work when {{/etc/hosts}} is used to set up 
> alternate hostnames, since {{DNS#reverseDns}} only queries the DNS servers.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9103) Retry reads on DN failure

2015-09-22 Thread Haohui Mai (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9103?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14903140#comment-14903140
 ] 

Haohui Mai commented on HDFS-9103:
--

bq. I propose (perhaps in another jira), that we separate FileHandle (stream 
state such as position, previously failed DataNodes, etc.), FileInfo (file 
length, LocatedBlocks, etc.), and ReadOperation (ephemeral state for an async 
read such as Continuations and refs to FileInfo) as a good model.

This is a good idea. The C layer can be the first consumer of these APIs. 
Maybe we can look at the first iterations of the C APIs and come back to this 
jira? I think the experience of building the C layer will be valuable in this 
jira.
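
Sketched in Java purely for illustration (the actual client here is the C++ 
libhdfs++ work under HDFS-8707), the proposed separation might look like:

{code}
public class ReadStateSketch {
  /** Immutable facts about the file: length, block locations, etc. */
  static class FileInfo {
    final long length;
    // LocatedBlocks would live here too in a real client.
    FileInfo(long length) { this.length = length; }
  }

  /** Per-open-file stream state: position and previously failed DataNodes. */
  static class FileHandle {
    final FileInfo info;
    long position;
    final java.util.Set<String> excludedDatanodes =
        new java.util.HashSet<String>();
    FileHandle(FileInfo info) { this.info = info; }
  }

  /** Ephemeral state for one in-flight async read. */
  static class ReadOperation {
    final FileHandle handle; // refs back to stream and file state
    final byte[] buffer;
    ReadOperation(FileHandle handle, byte[] buffer) {
      this.handle = handle;
      this.buffer = buffer;
    }
  }
}
{code}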

> Retry reads on DN failure
> -
>
> Key: HDFS-9103
> URL: https://issues.apache.org/jira/browse/HDFS-9103
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: Bob Hansen
>Assignee: Bob Hansen
> Fix For: HDFS-8707
>
> Attachments: HDFS-9103.1.patch, HDFS-9103.2.patch, 
> HDFS-9103.HDFS-8707.3.patch, HDFS-9103.HDFS-8707.4.patch, 
> HDFS-9103.HDFS-8707.5.patch
>
>
> When AsyncPreadSome fails, add the failed DataNode to the excluded list and 
> try again.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7955) Improve naming of classes, methods, and variables related to block replication and recovery

2015-09-22 Thread Rakesh R (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7955?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14903144#comment-14903144
 ] 

Rakesh R commented on HDFS-7955:


OK, thanks [~zhz] for the reply. As the jira is marked for the second phase, I 
will take up this task later.

> Improve naming of classes, methods, and variables related to block 
> replication and recovery
> ---
>
> Key: HDFS-7955
> URL: https://issues.apache.org/jira/browse/HDFS-7955
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Zhe Zhang
>Assignee: Rakesh R
> Attachments: HDFS-7955-001.patch
>
>
> Many existing names should be revised to avoid confusion when blocks can be 
> both replicated and erasure coded. This JIRA aims to solicit opinions on 
> making those names more consistent and intuitive.
> # In current HDFS _block recovery_ refers to the process of finalizing the 
> last block of a file, triggered by _lease recovery_. It is different from the 
> intuitive meaning of _recovering a lost block_. To avoid confusion, I can 
> think of 2 options:
> #* Rename this process as _block finalization_ or _block completion_. I 
> prefer this option because this is literally not a recovery.
> #* If we want to keep existing terms unchanged, we can name all EC recovery 
> and re-replication logic as _reconstruction_.  
> # As Kai [suggested | 
> https://issues.apache.org/jira/browse/HDFS-7369?focusedCommentId=14361131=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14361131]
>  under HDFS-7369, several replication-based names should be made more generic:
> #* {{UnderReplicatedBlocks}} and {{neededReplications}}. E.g. we can use 
> {{LowRedundancyBlocks}}/{{AtRiskBlocks}}, and 
> {{neededRecovery}}/{{neededReconstruction}}.
> #* {{PendingReplicationBlocks}}
> #* {{ReplicationMonitor}}
> I'm sure the above list is incomplete; discussions and comments are very 
> welcome.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9076) Log full path instead of inodeId in DFSClient#closeAllFilesBeingWritten()

2015-09-22 Thread Surendra Singh Lilhore (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9076?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Surendra Singh Lilhore updated HDFS-9076:
-
Attachment: HDFS-9076.01.patch

Thanks [~vinayrpet] for the review.
Attached the updated patch. Please review...

> Log full path instead of inodeId in DFSClient#closeAllFilesBeingWritten()
> -
>
> Key: HDFS-9076
> URL: https://issues.apache.org/jira/browse/HDFS-9076
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs-client
>Reporter: Surendra Singh Lilhore
>Assignee: Surendra Singh Lilhore
> Attachments: HDFS-9076.01.patch, HDFS-9076.patch
>
>
> {code}
>try {
>   if (abort) {
> out.abort();
>   } else {
> out.close();
>   }
> } catch(IOException ie) {
>   LOG.error("Failed to " + (abort? "abort": "close") +
>   " inode " + inodeId, ie);
> }
> {code}
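
A minimal sketch of the change the summary asks for, mirroring the snippet 
above and assuming a hypothetical {{src}} variable carrying the file's full 
path:

{code}
try {
  if (abort) {
    out.abort();
  } else {
    out.close();
  }
} catch (IOException ie) {
  // Log the full path (src) so operators can identify the file directly,
  // instead of only the inode id.
  LOG.error("Failed to " + (abort ? "abort" : "close") +
      " file: " + src + " with inode: " + inodeId, ie);
}
{code}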



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9122) DN automatically add more volumes to avoid large volume

2015-09-22 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9122?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14902920#comment-14902920
 ] 

Colin Patrick McCabe commented on HDFS-9122:


It is an interesting idea, but I think moving to automatically created volumes 
is a pretty big step to take.  It would violate a lot of assumptions currently 
in the code.  "Unsplitting" volumes when blocks are removed would also be 
tricky.

Also, just like HDFS-9011, this doesn't solve the main problem with super-large 
block reports, which is time consumed on the NN for the processing.  I think 
federation is a better workaround in the short term than this JIRA.  We need 
better long term solutions, of course.

> DN automatically add more volumes to avoid large volume
> ---
>
> Key: HDFS-9122
> URL: https://issues.apache.org/jira/browse/HDFS-9122
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Walter Su
>
> Currently, if a DataNode has too many blocks, it partitions the blockReport 
> by storage. In practice, we've seen that a single storage can contain a large 
> number of blocks, and the report can even exceed the max RPC data length. 
> Storage density increases quickly, so a DataNode can hold more and more 
> blocks, and it gets harder to include so many blocks in one RPC report. One 
> option is "Support splitting BlockReport of a storage into multiple RPC" 
> (HDFS-9011). 
> I'm thinking maybe we could add more "logical" volumes (more storage 
> directories in one device). DataNodeStorageInfo in the NameNode is cheap, and 
> processing a single blockReport requires the NN to hold the lock, so 
> splitting one big volume into many volumes can avoid a single report holding 
> the lock too long.
> We can support wildcards in dfs.datanode.data.dir, like 
> /physical-volume/dfs/data/dir*
> When a volume exceeds a threshold (like 1M blocks), the DN automatically 
> creates a new storage directory, which is also a volume. We have to change 
> RoundRobinVolumeChoosingPolicy as well: once we have chosen a physical 
> volume, we choose the logical volume which has the fewest blocks.
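
A minimal sketch of the wildcard expansion mentioned above, using 
java.nio.file glob matching; how the DN would consume the expanded list is out 
of scope here, and the pattern is assumed to be an absolute path like the 
example:

{code}
import java.io.IOException;
import java.nio.file.DirectoryStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.ArrayList;
import java.util.List;

public class DataDirGlob {
  // Expand an entry like /physical-volume/dfs/data/dir* into the matching
  // storage directories that currently exist on disk.
  static List<Path> expand(String pattern) throws IOException {
    Path p = Paths.get(pattern);
    Path parent = p.getParent();
    String glob = p.getFileName().toString();
    List<Path> dirs = new ArrayList<Path>();
    try (DirectoryStream<Path> stream = Files.newDirectoryStream(parent, glob)) {
      for (Path candidate : stream) {
        if (Files.isDirectory(candidate)) {
          dirs.add(candidate);
        }
      }
    }
    return dirs;
  }
}
{code}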



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8882) Use datablocks, parityblocks and cell size from ErasureCodingPolicy

2015-09-22 Thread Vinayakumar B (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8882?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinayakumar B updated HDFS-8882:

Attachment: HDFS-8882-HDFS-7285-03.patch

Updated the patch as per [~zhz]'s comments.

Please review.


> Use datablocks, parityblocks and cell size from ErasureCodingPolicy
> ---
>
> Key: HDFS-8882
> URL: https://issues.apache.org/jira/browse/HDFS-8882
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: HDFS-7285
>Reporter: Vinayakumar B
>Assignee: Vinayakumar B
> Attachments: HDFS-8882-HDFS-7285-01.patch, 
> HDFS-8882-HDFS-7285-02.patch, HDFS-8882-HDFS-7285-03.patch
>
>
> As part of earlier development, constants were used for datablocks, parity 
> blocks and cellsize.
> Now all of these are available in the EC zone; use them from there and stop 
> using constant values.
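
A minimal sketch of what "use from there" means; the getter names below are 
assumptions about the policy object's API, not the committed code:

{code}
public class EcPolicySketch {
  // Values come from the zone's erasure coding policy instead of constants.
  static void configureStripe(ErasureCodingPolicyLike ecPolicy) {
    int dataBlocks = ecPolicy.getNumDataUnits();     // was a hard-coded constant
    int parityBlocks = ecPolicy.getNumParityUnits(); // was a hard-coded constant
    int cellSize = ecPolicy.getCellSize();           // was a hard-coded constant
    // ... use these when striping or reconstructing block groups ...
    System.out.println(dataBlocks + "+" + parityBlocks + ", cell=" + cellSize);
  }

  interface ErasureCodingPolicyLike {
    int getNumDataUnits();
    int getNumParityUnits();
    int getCellSize();
  }
}
{code}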



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9109) dfs.datanode.dns.interface does not work with hosts file based setups

2015-09-22 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9109?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HDFS-9109:

Attachment: HDFS-9109.03.patch

> dfs.datanode.dns.interface does not work with hosts file based setups
> -
>
> Key: HDFS-9109
> URL: https://issues.apache.org/jira/browse/HDFS-9109
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
> Attachments: HDFS-9109.01.patch, HDFS-9109.02.patch, 
> HDFS-9109.03.patch
>
>
> The configuration setting {{dfs.datanode.dns.interface}} lets the DataNode 
> select its hostname by doing a reverse lookup of IP addresses on the specific 
> network interface. This does not work when {{/etc/hosts}} is used to set up 
> alternate hostnames, since {{DNS#reverseDns}} only queries the DNS servers.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9095) RPC client should fail gracefully when the connection is timed out or reset

2015-09-22 Thread Haohui Mai (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9095?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14903134#comment-14903134
 ] 

Haohui Mai commented on HDFS-9095:
--

Thanks [~James Clampffer] and [~bobhansen] for the reviews. The v1 patch 
changes {{CMAKE_CURRENT_SOURCE_DIR}} to {{CMAKE_CURRENT_LIST_DIR}}.

> RPC client should fail gracefully when the connection is timed out or reset
> ---
>
> Key: HDFS-9095
> URL: https://issues.apache.org/jira/browse/HDFS-9095
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: Haohui Mai
>Assignee: Haohui Mai
> Attachments: HDFS-9095.000.patch, HDFS-9095.001.patch
>
>
> The RPC client should fail gracefully when the connection is timed out or 
> reset, instead of bailing out. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8873) throttle directoryScanner

2015-09-22 Thread Daniel Templeton (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8873?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14903415#comment-14903415
 ] 

Daniel Templeton commented on HDFS-8873:


bq. Can we have a constant here for MS_PER_SEC? I think I commented on this 
earlier

I didn't do that in this patch because I didn't think the 1000 was as prominent 
as before, but it appears that was before I was done adding stuff.  I'll put it 
back.  I'd love to put that constant somewhere like util.Time.  Would that be 
kosher?  Or is it better to keep it low profile and leave it local to 
DirectoryScanner?  I notice there's already HdfsClientConfigKeys.SECOND, but 
that would introduce a pointless dependency.  Maybe the best answer is to keep 
it local and file a JIRA to consolidate them under util.Time?

bq. Maybe say "throttle" instead of "run limit"?

I was shooting for something that would be meaningful to someone who doesn't 
know the code.  What about "throttle limit," since that echoes the config param?

bq. Does this need to be an object, or can it be a primitive?

Ha. Evolutionary mistake. I'll fix it.

bq. This logic seems flawed.

I don't follow.  (nowMs % 1000) has to be between 0 and 999.  If it's less than 
the throttle limit, we won't enter the loop.  The throttle limit must be 
between 1 and 1000.  (Anything else gets set to 1000 when the scanner is 
created.)  The sleep must therefore be for between 1 and 999 ms, pretty much 
guaranteeing a different result the next time around.
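
For reference, a minimal sketch of the duty-cycle logic as described above 
(names are hypothetical): within each one-second window, work is only allowed 
during the first {{throttleLimitMs}} milliseconds, and the loop sleeps through 
the remainder:

{code}
public class ThrottleSketch {
  // throttleLimitMs must be between 1 and 1000; at 1000 the throttle is
  // effectively off, because (nowMs % 1000) can never reach it.
  static void throttle(int throttleLimitMs) throws InterruptedException {
    long nowMs = System.currentTimeMillis();
    while (nowMs % 1000L >= throttleLimitMs) {
      // Sleep to the top of the next second: between 1 and 999 ms, which
      // pretty much guarantees a different result the next time around.
      Thread.sleep(1000L - (nowMs % 1000L));
      nowMs = System.currentTimeMillis();
    }
  }
}
{code}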

> throttle directoryScanner
> -
>
> Key: HDFS-8873
> URL: https://issues.apache.org/jira/browse/HDFS-8873
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Affects Versions: 2.7.1
>Reporter: Nathan Roberts
>Assignee: Daniel Templeton
> Attachments: HDFS-8873.001.patch, HDFS-8873.002.patch, 
> HDFS-8873.003.patch, HDFS-8873.004.patch, HDFS-8873.005.patch
>
>
> The new 2-level directory layout can make directory scans expensive in terms 
> of disk seeks (see HDFS-8791) for details. 
> It would be good if the directoryScanner() had a configurable duty cycle that 
> would reduce its impact on disk performance (much like the approach in 
> HDFS-8617). 
> Without such a throttle, disks can go 100% busy for many minutes at a time 
> (assuming the common case of all inodes in cache but no directory blocks 
> cached, 64K seeks are required for full directory listing which translates to 
> 655 seconds) 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (HDFS-9053) Support large directories efficiently using B-Tree

2015-09-22 Thread Yi Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9053?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14903725#comment-14903725
 ] 

Yi Liu edited comment on HDFS-9053 at 9/23/15 12:40 AM:


Jenkins has some issues, and the test failures are not related. 
The reason is that multiple maven invocations go on at once, sharing the same 
.m2 directory on the same machine. This patch includes a new class file in 
hadoop-common; when another maven invocation runs after this one on the same 
machine, the "mvn install" step of its test run replaces the hadoop-common jar 
in the .m2 directory, so the test failures in Jenkins show NoClassDefFoundError 
for the newly added class in hadoop-common.

I will find some other time to re-trigger Jenkins, when there are fewer 
Jenkins jobs in the queue.


was (Author: hitliuyi):
Jenkins has some issues, and the test failures are not related. 
The reason is that multiple maven invocations go on at once, sharing the same 
.m2 directory on the same machine. This patch includes a new class file in 
hadoop-common; when another maven invocation runs after this one on the same 
machine, the "mvn install" step of its test run replaces the hadoop-common jar 
in the .m2 directory, so the test failures in Jenkins show NoClassDefFoundError 
for the newly added class in hadoop-common.

> Support large directories efficiently using B-Tree
> --
>
> Key: HDFS-9053
> URL: https://issues.apache.org/jira/browse/HDFS-9053
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Reporter: Yi Liu
>Assignee: Yi Liu
>Priority: Critical
> Attachments: HDFS-9053 (BTree with simple benchmark).patch, HDFS-9053 
> (BTree).patch, HDFS-9053.001.patch, HDFS-9053.002.patch
>
>
> This is a long-standing issue; we have tried to improve it in the past.  
> Currently we use an ArrayList for the children under a directory, and the 
> children are kept ordered in the list. For insert/delete/search the lookup 
> cost is O(log n), but insertion/deletion causes re-allocations and 
> copies of big arrays, so the operations are costly.  For example, if the 
> children grow to 1M in size, the ArrayList will resize to > 1M capacity, so 
> it needs > 1M * 4 bytes = 4 MB of contiguous heap memory, which easily 
> causes full GC in an HDFS cluster where namenode heap memory is already 
> highly used.  I recap the 3 main issues:
> # Insertion/deletion operations in large directories are expensive because 
> of re-allocations and copies of big arrays.
> # Dynamically allocating several MB of contiguous, long-lived heap memory 
> can easily cause full GC problems.
> # Even if most children are removed later, the directory INode still 
> occupies the same amount of heap memory, since the ArrayList never shrinks.
> This JIRA is similar to HDFS-7174 created by [~kihwal], but uses a B-Tree to 
> solve the problem, as suggested by [~shv]. 
> So the target of this JIRA is to implement a low-memory-footprint B-Tree and 
> use it to replace the ArrayList. 
> If the number of elements is not large (less than the maximum degree of a 
> B-Tree node), the B-Tree has only one root node, which contains an array for 
> the elements. If the size grows large enough, it will split automatically, 
> and if elements are removed, B-Tree nodes can merge automatically (see 
> more: https://en.wikipedia.org/wiki/B-tree).  It will solve the above 3 
> issues.
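
For illustration, the ArrayList pattern criticized above looks roughly like 
this; the binary-search lookup is cheap, but every insert shifts the backing 
array:

{code}
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

public class SortedChildrenSketch {
  // Keep children sorted so binary search works; the cost is that each
  // insertion performs an O(n) arraycopy and occasionally reallocates the
  // whole backing array (the full-GC pressure described above).
  static void insertSorted(List<String> children, String name) {
    int i = Collections.binarySearch(children, name);
    if (i < 0) {
      children.add(-i - 1, name); // arraycopy of everything after the slot
    }
  }

  public static void main(String[] args) {
    List<String> children = new ArrayList<String>();
    insertSorted(children, "b");
    insertSorted(children, "a");
    insertSorted(children, "c");
    System.out.println(children); // [a, b, c]
  }
}
{code}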



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9080) update htrace version to 4.0

2015-09-22 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9080?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14903721#comment-14903721
 ] 

Hadoop QA commented on HDFS-9080:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | pre-patch |  25m 30s | Pre-patch trunk compilation is 
healthy. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:green}+1{color} | tests included |   0m  0s | The patch appears to 
include 11 new or modified test files. |
| {color:red}-1{color} | javac |   0m 25s | The patch appears to cause the 
build to fail. |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12761759/HDFS-9080.010.patch |
| Optional Tests | javadoc javac unit findbugs checkstyle site |
| git revision | trunk / cc2b473 |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12612/console |


This message was automatically generated.

> update htrace version to 4.0
> 
>
> Key: HDFS-9080
> URL: https://issues.apache.org/jira/browse/HDFS-9080
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.8.0
>Reporter: Colin Patrick McCabe
>Assignee: Colin Patrick McCabe
> Attachments: HDFS-9080.001.patch, HDFS-9080.002.patch, 
> HDFS-9080.003.patch, HDFS-9080.004.patch, HDFS-9080.005.patch, 
> HDFS-9080.006.patch, HDFS-9080.007.patch, HDFS-9080.009.patch, 
> HDFS-9080.010.patch, tracing-fsshell-put.png
>
>
> Update the HTrace library version Hadoop uses to htrace 4.0.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8882) Use datablocks, parityblocks and cell size from ErasureCodingPolicy

2015-09-22 Thread Walter Su (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8882?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14903813#comment-14903813
 ] 

Walter Su commented on HDFS-8882:
-

+1

> Use datablocks, parityblocks and cell size from ErasureCodingPolicy
> ---
>
> Key: HDFS-8882
> URL: https://issues.apache.org/jira/browse/HDFS-8882
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: HDFS-7285
>Reporter: Vinayakumar B
>Assignee: Vinayakumar B
> Attachments: HDFS-8882-HDFS-7285-01.patch, 
> HDFS-8882-HDFS-7285-02.patch, HDFS-8882-HDFS-7285-03.patch
>
>
> As part of earlier development, constants were used for datablocks, parity 
> blocks and cellsize.
> Now all of these are available in the EC zone; use them from there and stop 
> using constant values.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8647) Abstract BlockManager's rack policy into BlockPlacementPolicy

2015-09-22 Thread Ming Ma (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8647?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14903774#comment-14903774
 ] 

Ming Ma commented on HDFS-8647:
---

Thanks [~brahmareddy] for the useful analysis! My preference is to keep the 
current behavior so that auto replication continues to work as long as there is 
no NN restart. That provides stronger data durability.

Ideally HDFS should support auto replication for the case where the # of racks 
changes from 1 to 2 after an NN restart. But that requires more work and it is 
an existing issue. You can open a new jira for that.

> Abstract BlockManager's rack policy into BlockPlacementPolicy
> -
>
> Key: HDFS-8647
> URL: https://issues.apache.org/jira/browse/HDFS-8647
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Ming Ma
>Assignee: Brahma Reddy Battula
> Attachments: HDFS-8647-001.patch, HDFS-8647-002.patch, 
> HDFS-8647-003.patch
>
>
> Sometimes we want to have the namenode use an alternative block placement 
> policy, such as the upgrade domains in HDFS-7541.
> BlockManager has built-in assumptions about rack policy in functions such as 
> useDelHint and blockHasEnoughRacks. That means that when we have a new block 
> placement policy, we need to modify BlockManager to account for it. Ideally 
> BlockManager should ask the BlockPlacementPolicy object instead. That will 
> allow us to provide a new BlockPlacementPolicy without changing BlockManager.
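
A minimal sketch of the delegation the description above proposes; the method 
names are hypothetical, not the committed API:

{code}
// BlockManager would call these instead of hard-coding rack logic, so an
// alternative policy (e.g. upgrade domains) plugs in without BlockManager
// changes.
public abstract class BlockPlacementPolicySketch {
  /** Replaces BlockManager#blockHasEnoughRacks-style checks. */
  public abstract boolean isPlacementSatisfied(BlockLike block);

  /** Replaces BlockManager#useDelHint-style excess-replica selection. */
  public abstract DatanodeLike chooseReplicaToDelete(BlockLike block);

  public interface BlockLike {}
  public interface DatanodeLike {}
}
{code}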



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7529) Consolidate encryption zone related implementation into a single class

2015-09-22 Thread Rakesh R (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7529?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rakesh R updated HDFS-7529:
---
Attachment: HDFS-7529-004.patch

> Consolidate encryption zone related implementation into a single class
> --
>
> Key: HDFS-7529
> URL: https://issues.apache.org/jira/browse/HDFS-7529
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Haohui Mai
>Assignee: Haohui Mai
>  Labels: BB2015-05-TBR
> Attachments: HDFS-7529-002.patch, HDFS-7529-003.patch, 
> HDFS-7529-004.patch, HDFS-7529.000.patch, HDFS-7529.001.patch
>
>
> This jira proposes to consolidate encryption zone related implementation to a 
> single class.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9053) Support large directories efficiently using B-Tree

2015-09-22 Thread Yi Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9053?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14903725#comment-14903725
 ] 

Yi Liu commented on HDFS-9053:
--

Jenkins has some issues, and the test failures are not related. 
The reason is that multiple maven invocations go on at once, sharing the same 
.m2 directory on the same machine. This patch includes a new class file in 
hadoop-common; when another maven invocation runs after this one on the same 
machine, the "mvn install" step of its test run replaces the hadoop-common jar 
in the .m2 directory, so the test failures in Jenkins show NoClassDefFoundError 
for the newly added class in hadoop-common.

> Support large directories efficiently using B-Tree
> --
>
> Key: HDFS-9053
> URL: https://issues.apache.org/jira/browse/HDFS-9053
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Reporter: Yi Liu
>Assignee: Yi Liu
>Priority: Critical
> Attachments: HDFS-9053 (BTree with simple benchmark).patch, HDFS-9053 
> (BTree).patch, HDFS-9053.001.patch, HDFS-9053.002.patch
>
>
> This is a long-standing issue; we have tried to improve it in the past.  
> Currently we use an ArrayList for the children under a directory, and the 
> children are kept ordered in the list. For insert/delete/search the lookup 
> cost is O(log n), but insertion/deletion causes re-allocations and 
> copies of big arrays, so the operations are costly.  For example, if the 
> children grow to 1M in size, the ArrayList will resize to > 1M capacity, so 
> it needs > 1M * 4 bytes = 4 MB of contiguous heap memory, which easily 
> causes full GC in an HDFS cluster where namenode heap memory is already 
> highly used.  I recap the 3 main issues:
> # Insertion/deletion operations in large directories are expensive because 
> of re-allocations and copies of big arrays.
> # Dynamically allocating several MB of contiguous, long-lived heap memory 
> can easily cause full GC problems.
> # Even if most children are removed later, the directory INode still 
> occupies the same amount of heap memory, since the ArrayList never shrinks.
> This JIRA is similar to HDFS-7174 created by [~kihwal], but uses a B-Tree to 
> solve the problem, as suggested by [~shv]. 
> So the target of this JIRA is to implement a low-memory-footprint B-Tree and 
> use it to replace the ArrayList. 
> If the number of elements is not large (less than the maximum degree of a 
> B-Tree node), the B-Tree has only one root node, which contains an array for 
> the elements. If the size grows large enough, it will split automatically, 
> and if elements are removed, B-Tree nodes can merge automatically (see 
> more: https://en.wikipedia.org/wiki/B-tree).  It will solve the above 3 
> issues.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9109) dfs.datanode.dns.interface does not work with hosts file based setups

2015-09-22 Thread Jitendra Nath Pandey (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9109?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14903739#comment-14903739
 ] 

Jitendra Nath Pandey commented on HDFS-9109:


[~arpitagarwal], the getIPs method shares some common logic with 
getIPsAsInetAddresses in DNS.java. Is it possible to refactor to reuse some of 
the logic?



> dfs.datanode.dns.interface does not work with hosts file based setups
> -
>
> Key: HDFS-9109
> URL: https://issues.apache.org/jira/browse/HDFS-9109
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
> Attachments: HDFS-9109.01.patch, HDFS-9109.02.patch, 
> HDFS-9109.03.patch
>
>
> The configuration setting {{dfs.datanode.dns.interface}} lets the DataNode 
> select its hostname by doing a reverse lookup of IP addresses on the specific 
> network interface. This does not work when {{/etc/hosts}} is used to set up 
> alternate hostnames, since {{DNS#reverseDns}} only queries the DNS servers.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9039) Split o.a.h.hdfs.NameNodeProxies class into two classes in hadoop-hdfs-client and hadoop-hdfs modules respectively

2015-09-22 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9039?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14903858#comment-14903858
 ] 

Hadoop QA commented on HDFS-9039:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:red}-1{color} | pre-patch |  17m 56s | Findbugs (version 3.0.0) 
appears to be broken on trunk. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:green}+1{color} | tests included |   0m  0s | The patch appears to 
include 1 new or modified test files. |
| {color:green}+1{color} | javac |   7m 59s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |  10m  9s | There were no new javadoc 
warning messages. |
| {color:green}+1{color} | release audit |   0m 23s | The applied patch does 
not increase the total number of release audit warnings. |
| {color:red}-1{color} | checkstyle |   2m  7s | The applied patch generated  
11 new checkstyle issues (total was 0, now 11). |
| {color:green}+1{color} | whitespace |   0m  0s | The patch has no lines that 
end in whitespace. |
| {color:green}+1{color} | install |   1m 40s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 34s | The patch built with 
eclipse:eclipse. |
| {color:green}+1{color} | findbugs |   4m 31s | The patch does not introduce 
any new Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | native |   3m 19s | Pre-build of native portion |
| {color:red}-1{color} | hdfs tests | 193m 29s | Tests failed in hadoop-hdfs. |
| {color:green}+1{color} | hdfs tests |   0m 31s | Tests passed in 
hadoop-hdfs-client. |
| | | 242m 43s | |
\\
\\
|| Reason || Tests ||
| Failed unit tests | hadoop.hdfs.server.blockmanagement.TestBlockManager |
|   | hadoop.hdfs.server.blockmanagement.TestNodeCount |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12761752/HDFS-9039.002.patch |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / cc2b473 |
| checkstyle |  
https://builds.apache.org/job/PreCommit-HDFS-Build/12610/artifact/patchprocess/diffcheckstylehadoop-hdfs-client.txt
 |
| hadoop-hdfs test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12610/artifact/patchprocess/testrun_hadoop-hdfs.txt
 |
| hadoop-hdfs-client test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12610/artifact/patchprocess/testrun_hadoop-hdfs-client.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12610/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf909.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12610/console |


This message was automatically generated.

> Split o.a.h.hdfs.NameNodeProxies class into two classes in hadoop-hdfs-client 
> and hadoop-hdfs modules respectively
> --
>
> Key: HDFS-9039
> URL: https://issues.apache.org/jira/browse/HDFS-9039
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Mingliang Liu
>Assignee: Mingliang Liu
> Attachments: HDFS-9039.000.patch, HDFS-9039.001.patch, 
> HDFS-9039.002.patch
>
>
> Currently the {{org.apache.hadoop.hdfs.NameNodeProxies}} class is used by 
> both {{org.apache.hadoop.hdfs.server}} package (for server side protocols) 
> and {{DFSClient}} class (for {{ClientProtocol}}). The {{DFSClient}} class 
> should be moved to {{hadoop-hdfs-client}} module (see [HDFS-8053 | 
> https://issues.apache.org/jira/browse/HDFS-8053]). As the 
> {{org.apache.hadoop.hdfs.NameNodeProxies}} class also depends on server side 
> protocols (e.g. {{JournalProtocol}} and {{NamenodeProtocol}}), we can't 
> simply move this class to the {{hadoop-hdfs-client}} module as well.
> This jira tracks the effort of moving {{ClientProtocol}} related static 
> methods in {{org.apache.hadoop.hdfs.NameNodeProxies}} class to 
> {{hadoo-hdfs-client}} module. A good place to put these static methods is a 
> new class named {{NameNodeProxiesClient}}.
> The checkstyle warnings can be addressed in [HDFS-8979], and removing the 
> _slf4j_ logger guards when calling {{LOG.debug()}} and {{LOG.trace()}} can be 
> addressed in [HDFS-8971].



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9039) Split o.a.h.hdfs.NameNodeProxies class into two classes in hadoop-hdfs-client and hadoop-hdfs modules respectively

2015-09-22 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9039?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14903420#comment-14903420
 ] 

Hadoop QA commented on HDFS-9039:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | pre-patch |  19m 44s | Pre-patch trunk compilation is 
healthy. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:green}+1{color} | tests included |   0m  0s | The patch appears to 
include 1 new or modified test files. |
| {color:green}+1{color} | javac |   8m  3s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |  10m 21s | There were no new javadoc 
warning messages. |
| {color:green}+1{color} | release audit |   0m 23s | The applied patch does 
not increase the total number of release audit warnings. |
| {color:red}-1{color} | checkstyle |   2m 28s | The applied patch generated  7 
new checkstyle issues (total was 0, now 7). |
| {color:green}+1{color} | whitespace |   0m  1s | The patch has no lines that 
end in whitespace. |
| {color:green}+1{color} | install |   1m 38s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 33s | The patch built with 
eclipse:eclipse. |
| {color:red}-1{color} | findbugs |   4m 37s | The patch appears to introduce 1 
new Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | native |   3m 12s | Pre-build of native portion |
| {color:red}-1{color} | hdfs tests | 162m 20s | Tests failed in hadoop-hdfs. |
| {color:green}+1{color} | hdfs tests |   0m 29s | Tests passed in 
hadoop-hdfs-client. |
| | | 213m 54s | |
\\
\\
|| Reason || Tests ||
| FindBugs | module:hadoop-hdfs-client |
| Failed unit tests | hadoop.hdfs.web.TestWebHDFSOAuth2 |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12761557/HDFS-9039.001.patch |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / cc2b473 |
| checkstyle |  
https://builds.apache.org/job/PreCommit-HDFS-Build/12601/artifact/patchprocess/diffcheckstylehadoop-hdfs-client.txt
 |
| Findbugs warnings | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12601/artifact/patchprocess/newPatchFindbugsWarningshadoop-hdfs-client.html
 |
| hadoop-hdfs test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12601/artifact/patchprocess/testrun_hadoop-hdfs.txt
 |
| hadoop-hdfs-client test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12601/artifact/patchprocess/testrun_hadoop-hdfs-client.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12601/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf900.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12601/console |


This message was automatically generated.

> Split o.a.h.hdfs.NameNodeProxies class into two classes in hadoop-hdfs-client 
> and hadoop-hdfs modules respectively
> --
>
> Key: HDFS-9039
> URL: https://issues.apache.org/jira/browse/HDFS-9039
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Mingliang Liu
>Assignee: Mingliang Liu
> Attachments: HDFS-9039.000.patch, HDFS-9039.001.patch
>
>
> Currently the {{org.apache.hadoop.hdfs.NameNodeProxies}} class is used by 
> both {{org.apache.hadoop.hdfs.server}} package (for server side protocols) 
> and {{DFSClient}} class (for {{ClientProtocol}}). The {{DFSClient}} class 
> should be moved to {{hadoop-hdfs-client}} module (see [HDFS-8053 | 
> https://issues.apache.org/jira/browse/HDFS-8053]). As the 
> {{org.apache.hadoop.hdfs.NameNodeProxies}} class also depends on server side 
> protocols (e.g. {{JournalProtocol}} and {{NamenodeProtocol}}), we can't 
> simply move this class to the {{hadoop-hdfs-client}} module as well.
> This jira tracks the effort of moving {{ClientProtocol}} related static 
> methods in {{org.apache.hadoop.hdfs.NameNodeProxies}} class to 
> {{hadoop-hdfs-client}} module. A good place to put these static methods is a 
> new class named {{NameNodeProxiesClient}}.
> The checkstyle warnings can be addressed in [HDFS-8979], and removing the 
> _slf4j_ logger guards when calling {{LOG.debug()}} and {{LOG.trace()}} can be 
> addressed in [HDFS-8971].



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9110) Improve upon HDFS-8480

2015-09-22 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9110?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14903544#comment-14903544
 ] 

Hadoop QA commented on HDFS-9110:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | pre-patch |  18m  5s | Pre-patch trunk compilation is 
healthy. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:red}-1{color} | tests included |   0m  0s | The patch doesn't appear 
to include any new or modified tests.  Please justify why no new tests are 
needed for this patch. Also please list what manual steps were performed to 
verify this patch. |
| {color:green}+1{color} | javac |   8m  5s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |  10m 23s | There were no new javadoc 
warning messages. |
| {color:green}+1{color} | release audit |   0m 25s | The applied patch does 
not increase the total number of release audit warnings. |
| {color:green}+1{color} | checkstyle |   1m 23s | There were no new checkstyle 
issues. |
| {color:green}+1{color} | whitespace |   0m  1s | The patch has no lines that 
end in whitespace. |
| {color:green}+1{color} | install |   1m 31s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 34s | The patch built with 
eclipse:eclipse. |
| {color:green}+1{color} | findbugs |   2m 32s | The patch does not introduce 
any new Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | native |   3m 16s | Pre-build of native portion |
| {color:red}-1{color} | hdfs tests | 159m  9s | Tests failed in hadoop-hdfs. |
| | | 205m 27s | |
\\
\\
|| Reason || Tests ||
| Timed out tests | 
org.apache.hadoop.hdfs.server.namenode.ha.TestStandbyCheckpoints |
|   | org.apache.hadoop.hdfs.server.namenode.TestEditLogJournalFailures |
|   | org.apache.hadoop.hdfs.server.balancer.TestBalancerWithMultipleNameNodes |
|   | org.apache.hadoop.hdfs.server.mover.TestStorageMover |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12761691/HDFS-9110.05.patch |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / cc2b473 |
| hadoop-hdfs test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12606/artifact/patchprocess/testrun_hadoop-hdfs.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12606/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf904.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12606/console |


This message was automatically generated.

> Improve upon HDFS-8480
> --
>
> Key: HDFS-9110
> URL: https://issues.apache.org/jira/browse/HDFS-9110
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 2.7.0
>Reporter: Charlie Helin
>Assignee: Charlie Helin
>Priority: Minor
> Fix For: 2.7.0, 2.6.1
>
> Attachments: HDFS-9110.00.patch, HDFS-9110.01.patch, 
> HDFS-9110.02.patch, HDFS-9110.03.patch, HDFS-9110.04.patch, HDFS-9110.05.patch
>
>
> This is a request to do some cosmetic improvements on top of HDFS-8480. There 
> are a couple of File -> java.nio.file.Path conversions which are a little bit 
> distracting. 
> The second aspect is more around efficiency; to be perfectly honest, I'm not 
> sure how many files may be processed. However, as HDFS-8480 
> alludes to, it appears that this number could be significantly large. 
> The current implementation is basically collect-and-process, where all files 
> are first examined, put into a collection, and after that processed. 
> HDFS-8480 could simply be further enhanced by employing a single iteration, 
> without creating an intermediary collection of filenames, by using a FileWalker.
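
A minimal sketch of the single-iteration idea using java.nio.file.Files#walkFileTree (the {{process}} method is a hypothetical stand-in for the per-file work; this is not the HDFS-8480 code):

{code}
import java.io.IOException;
import java.nio.file.FileVisitResult;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.SimpleFileVisitor;
import java.nio.file.attribute.BasicFileAttributes;

public class SinglePassScan {
  public static void main(String[] args) throws IOException {
    Path root = Paths.get(args.length > 0 ? args[0] : ".");
    // Visit each file as it is encountered, instead of first collecting
    // every name into an intermediate collection and processing it later.
    Files.walkFileTree(root, new SimpleFileVisitor<Path>() {
      @Override
      public FileVisitResult visitFile(Path file, BasicFileAttributes attrs) {
        process(file);  // hypothetical per-file work
        return FileVisitResult.CONTINUE;
      }
    });
  }

  static void process(Path file) {
    System.out.println(file);
  }
}
{code}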



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9039) Split o.a.h.hdfs.NameNodeProxies class into two classes in hadoop-hdfs-client and hadoop-hdfs modules respectively

2015-09-22 Thread Haohui Mai (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9039?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14903429#comment-14903429
 ] 

Haohui Mai commented on HDFS-9039:
--

+1

> Split o.a.h.hdfs.NameNodeProxies class into two classes in hadoop-hdfs-client 
> and hadoop-hdfs modules respectively
> --
>
> Key: HDFS-9039
> URL: https://issues.apache.org/jira/browse/HDFS-9039
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Mingliang Liu
>Assignee: Mingliang Liu
> Attachments: HDFS-9039.000.patch, HDFS-9039.001.patch
>
>
> Currently the {{org.apache.hadoop.hdfs.NameNodeProxies}} class is used by 
> both {{org.apache.hadoop.hdfs.server}} package (for server side protocols) 
> and {{DFSClient}} class (for {{ClientProtocol}}). The {{DFSClient}} class 
> should be moved to {{hadoop-hdfs-client}} module (see [HDFS-8053 | 
> https://issues.apache.org/jira/browse/HDFS-8053]). As the 
> {{org.apache.hadoop.hdfs.NameNodeProxies}} class also depends on server side 
> protocols (e.g. {{JournalProtocol}} and {{NamenodeProtocol}}), we can't 
> simply move this class to the {{hadoop-hdfs-client}} module as well.
> This jira tracks the effort of moving {{ClientProtocol}} related static 
> methods in {{org.apache.hadoop.hdfs.NameNodeProxies}} class to 
> {{hadoop-hdfs-client}} module. A good place to put these static methods is a 
> new class named {{NameNodeProxiesClient}}.
> The checkstyle warnings can be addressed in [HDFS-8979], and removing the 
> _slf4j_ logger guards when calling {{LOG.debug()}} and {{LOG.trace()}} can be 
> addressed in [HDFS-8971].



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8733) Keep server related definition in hdfs.proto on server side

2015-09-22 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8733?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14903519#comment-14903519
 ] 

Hadoop QA commented on HDFS-8733:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | pre-patch |  20m 22s | Pre-patch trunk compilation is 
healthy. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:green}+1{color} | tests included |   0m  0s | The patch appears to 
include 1 new or modified test files. |
| {color:green}+1{color} | javac |   8m  7s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |  10m 36s | There were no new javadoc 
warning messages. |
| {color:green}+1{color} | release audit |   0m 24s | The applied patch does 
not increase the total number of release audit warnings. |
| {color:green}+1{color} | checkstyle |   3m 16s | There were no new checkstyle 
issues. |
| {color:green}+1{color} | whitespace |   0m  1s | The patch has no lines that 
end in whitespace. |
| {color:green}+1{color} | install |   1m 31s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 34s | The patch built with 
eclipse:eclipse. |
| {color:green}+1{color} | findbugs |   5m 18s | The patch does not introduce 
any new Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | native |   3m 46s | Pre-build of native portion |
| {color:red}-1{color} | hdfs tests | 157m 29s | Tests failed in hadoop-hdfs. |
| {color:green}+1{color} | hdfs tests |   0m 33s | Tests passed in 
hadoop-hdfs-client. |
| {color:green}+1{color} | hdfs tests |   6m 34s | Tests passed in bkjournal. |
| | | 218m 36s | |
\\
\\
|| Reason || Tests ||
| Timed out tests | 
org.apache.hadoop.hdfs.server.namenode.metrics.TestNameNodeMetrics |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12761555/HFDS-8733.000.patch |
| Optional Tests | javac unit javadoc findbugs checkstyle |
| git revision | trunk / cc2b473 |
| hadoop-hdfs test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12604/artifact/patchprocess/testrun_hadoop-hdfs.txt
 |
| hadoop-hdfs-client test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12604/artifact/patchprocess/testrun_hadoop-hdfs-client.txt
 |
| bkjournal test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12604/artifact/patchprocess/testrun_bkjournal.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12604/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf909.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12604/console |


This message was automatically generated.

> Keep server related definition in hdfs.proto on server side
> ---
>
> Key: HDFS-8733
> URL: https://issues.apache.org/jira/browse/HDFS-8733
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Yi Liu
>Assignee: Mingliang Liu
> Attachments: HFDS-8733.000.patch
>
>
> In [HDFS-8726], we moved the protobuf files that define the client-server 
> protocols to {{hadoop-hdfs-client}} module. In {{hdfs.proto}}, there are 
> some server-related definitions. This jira tracks the effort of moving those 
> server-related definitions back to {{hadoop-hdfs}} module. A good place may be 
> a new file named {{HdfsServer.proto}}.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8882) Use datablocks, parityblocks and cell size from ErasureCodingPolicy

2015-09-22 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8882?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14903492#comment-14903492
 ] 

Hadoop QA commented on HDFS-8882:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:red}-1{color} | pre-patch |  17m 53s | Findbugs (version 3.0.0) 
appears to be broken on HDFS-7285. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:green}+1{color} | tests included |   0m  0s | The patch appears to 
include 26 new or modified test files. |
| {color:green}+1{color} | javac |   8m  7s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |  10m  6s | There were no new javadoc 
warning messages. |
| {color:red}-1{color} | release audit |   0m 15s | The applied patch generated 
1 release audit warnings. |
| {color:green}+1{color} | checkstyle |   2m 10s | There were no new checkstyle 
issues. |
| {color:green}+1{color} | whitespace |   0m  6s | The patch has no lines that 
end in whitespace. |
| {color:green}+1{color} | install |   1m 40s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 35s | The patch built with 
eclipse:eclipse. |
| {color:green}+1{color} | findbugs |   4m 38s | The patch does not introduce 
any new Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | native |   3m 23s | Pre-build of native portion |
| {color:red}-1{color} | hdfs tests | 232m 45s | Tests failed in hadoop-hdfs. |
| {color:green}+1{color} | hdfs tests |   0m 37s | Tests passed in 
hadoop-hdfs-client. |
| | | 282m 22s | |
\\
\\
|| Reason || Tests ||
| Failed unit tests | 
hadoop.hdfs.server.blockmanagement.TestUnderReplicatedBlocks |
|   | hadoop.hdfs.TestWriteStripedFileWithFailure |
|   | hadoop.hdfs.TestReplaceDatanodeOnFailure |
|   | hadoop.hdfs.server.namenode.TestFileTruncate |
| Timed out tests | 
org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure010 |
|   | org.apache.hadoop.hdfs.TestFileCreation |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12761674/HDFS-8882-HDFS-7285-03.patch
 |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | HDFS-7285 / 6fc9424 |
| Pre-patch Findbugs warnings | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12599/artifact/patchprocess/HDFS-7285FindbugsWarningshadoop-hdfs-client.html
 |
| Release Audit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12599/artifact/patchprocess/patchReleaseAuditProblems.txt
 |
| hadoop-hdfs test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12599/artifact/patchprocess/testrun_hadoop-hdfs.txt
 |
| hadoop-hdfs-client test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12599/artifact/patchprocess/testrun_hadoop-hdfs-client.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12599/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf905.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12599/console |


This message was automatically generated.

> Use datablocks, parityblocks and cell size from ErasureCodingPolicy
> ---
>
> Key: HDFS-8882
> URL: https://issues.apache.org/jira/browse/HDFS-8882
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: HDFS-7285
>Reporter: Vinayakumar B
>Assignee: Vinayakumar B
> Attachments: HDFS-8882-HDFS-7285-01.patch, 
> HDFS-8882-HDFS-7285-02.patch, HDFS-8882-HDFS-7285-03.patch
>
>
> As part of earlier development, constants were used for datablocks, parity 
> blocks and cellsize.
> Now all these are available in the EC zone. Use them from there and stop using 
> constant values.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8873) throttle directoryScanner

2015-09-22 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8873?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14903507#comment-14903507
 ] 

Colin Patrick McCabe commented on HDFS-8873:


bq. I didn't do that in this patch because I didn't think the 1000 was as 
prominent as before, but it appears that was before I was done adding stuff. 
I'll put it back. I'd love to put that constant somewhere like util.Time. Would 
that be kosher?

I think it's fine to put it in the DirectoryScanner itself if you want.  I 
don't object to putting it in {{util.Time}} either.  Up to you.

bq. I was shooting for something that would be meaningful to someone who 
doesn't know the code. What about "throttle limit," since that echoes the 
config param?

Sure.

bq. I don't follow. (nowMs % 1000) has to be between 0 and 999. If it's less 
than the throttle limit, we won't enter the loop. The throttle limit must be 
between 1 and 1000. (Anything else gets set to 1000 when the scanner is 
created.) The sleep must therefore be for between 1 and 999 ms, pretty much 
guaranteeing a different result the next time around.

Let's say we start the loop at time 5200.  Then {{while (nowMs % 1000L > 
throttleLimitMsPerSec)}} returns true (let's say {{throttleLimitMsPerSec = 
100}}).

We call sleep with an argument of 800, but sleep actually sleeps for 1000 ms 
instead.  (Remember, Thread#sleep may sleep for longer than requested.)  
nowMs becomes 6200.  Now {{while (nowMs % 1000L > throttleLimitMsPerSec) }} 
returns true again, since 6200 % 1000 = 200 > 100.  So we sleep for 800 
ms yet again.  We completely missed our timeslice, and there's no guarantee 
that we'll pick up the next one either.  That's the bug.
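
To make the failure mode concrete, a minimal sketch of the loop shape under discussion (the names and numbers mirror the example above; this is not the patch code, and the oversleep is simulated rather than actually slept):

{code}
public class ThrottleSliceDemo {
  public static void main(String[] args) {
    final long throttleLimitMsPerSec = 100; // run only in the first 100 ms of each second
    long nowMs = 5200;                      // start inside the "blocked" part of a second

    // The loop under discussion: wait until the next second's timeslice.
    for (int rounds = 0; nowMs % 1000L > throttleLimitMsPerSec && rounds < 5; rounds++) {
      long requested = 1000L - (nowMs % 1000L); // asks for 800 ms here
      nowMs += 1000L;                           // Thread.sleep overslept: 5200 -> 6200
      System.out.println("asked for " + requested + " ms, woke at " + nowMs
          + " ms; timeslice missed again");
    }
  }
}
{code}

Every iteration wakes at X200 ms, which is still past the 100 ms limit, so the loop never lands inside a timeslice.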

> throttle directoryScanner
> -
>
> Key: HDFS-8873
> URL: https://issues.apache.org/jira/browse/HDFS-8873
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Affects Versions: 2.7.1
>Reporter: Nathan Roberts
>Assignee: Daniel Templeton
> Attachments: HDFS-8873.001.patch, HDFS-8873.002.patch, 
> HDFS-8873.003.patch, HDFS-8873.004.patch, HDFS-8873.005.patch
>
>
> The new 2-level directory layout can make directory scans expensive in terms 
> of disk seeks (see HDFS-8791 for details). 
> It would be good if the directoryScanner() had a configurable duty cycle that 
> would reduce its impact on disk performance (much like the approach in 
> HDFS-8617). 
> Without such a throttle, disks can go 100% busy for many minutes at a time 
> (assuming the common case of all inodes in cache but no directory blocks 
> cached, 64K seeks are required for a full directory listing, which translates 
> to 655 seconds).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8696) Reduce the variances of latency of WebHDFS

2015-09-22 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8696?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14903432#comment-14903432
 ] 

Hadoop QA commented on HDFS-8696:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | pre-patch |  17m 38s | Pre-patch trunk compilation is 
healthy. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:red}-1{color} | tests included |   0m  0s | The patch doesn't appear 
to include any new or modified tests.  Please justify why no new tests are 
needed for this patch. Also please list what manual steps were performed to 
verify this patch. |
| {color:green}+1{color} | javac |   7m 55s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |  10m  7s | There were no new javadoc 
warning messages. |
| {color:green}+1{color} | release audit |   0m 23s | The applied patch does 
not increase the total number of release audit warnings. |
| {color:red}-1{color} | checkstyle |   1m 21s | The applied patch generated  4 
new checkstyle issues (total was 405, now 409). |
| {color:green}+1{color} | whitespace |   0m  0s | The patch has no lines that 
end in whitespace. |
| {color:green}+1{color} | install |   1m 27s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 33s | The patch built with 
eclipse:eclipse. |
| {color:green}+1{color} | findbugs |   2m 30s | The patch does not introduce 
any new Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | native |   3m 12s | Pre-build of native portion |
| {color:red}-1{color} | hdfs tests | 163m 32s | Tests failed in hadoop-hdfs. |
| | | 208m 42s | |
\\
\\
|| Reason || Tests ||
| Failed unit tests | hadoop.hdfs.web.TestWebHDFSOAuth2 |
|   | hadoop.hdfs.TestSafeMode |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12761687/HDFS-8696.005.patch |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / cc2b473 |
| checkstyle |  
https://builds.apache.org/job/PreCommit-HDFS-Build/12603/artifact/patchprocess/diffcheckstylehadoop-hdfs.txt
 |
| hadoop-hdfs test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12603/artifact/patchprocess/testrun_hadoop-hdfs.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12603/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf906.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12603/console |


This message was automatically generated.

> Reduce the variances of latency of WebHDFS
> --
>
> Key: HDFS-8696
> URL: https://issues.apache.org/jira/browse/HDFS-8696
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: webhdfs
>Affects Versions: 2.7.0
>Reporter: Xiaobing Zhou
>Assignee: Xiaobing Zhou
> Attachments: HDFS-8696.004.patch, HDFS-8696.005.patch, 
> HDFS-8696.1.patch, HDFS-8696.2.patch, HDFS-8696.3.patch
>
>
> There is an issue that appears related to the webhdfs server. When making two 
> concurrent requests, the DN will sometimes pause for extended periods (I've 
> seen 1-300 seconds), killing performance and dropping connections. 
> To reproduce: 
> 1. Set up an HDFS cluster
> 2. Upload a large file (I was using 10GB). Perform 1-byte reads, writing
> the time out to /tmp/times.txt
> {noformat}
> i=1
> while (true); do 
> echo $i
> let i++
> /usr/bin/time -f %e -o /tmp/times.txt -a curl -s -L -o /dev/null 
> "http://:50070/webhdfs/v1/tmp/bigfile?op=OPEN=root=1";
> done
> {noformat}
> 3. Watch for 1-byte requests that take more than one second:
> tail -F /tmp/times.txt | grep -E "^[^0]"
> 4. After it has had a chance to warm up, start doing large transfers from
> another shell:
> {noformat}
> i=1
> while (true); do 
> echo $i
> let i++
> /usr/bin/time -f %e curl -s -L -o /dev/null 
> "http://:50070/webhdfs/v1/tmp/bigfile?op=OPEN=root";
> done
> {noformat}
> It's easy to find after a minute or two that small reads will sometimes
> pause for 1-300 seconds. In some extreme cases, it appears that the
> transfers timeout and the DN drops the connection.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9108) InputStreamImpl::ReadBlockContinuation stores wrong pointers of buffers

2015-09-22 Thread James Clampffer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9108?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14903536#comment-14903536
 ] 

James Clampffer commented on HDFS-9108:
---

Thanks for the patch!  Looks like everything works; I'm going to write a test 
that uses more threads and check it out just to be safe.

Re: I'm surprised that running the inputstream_test under valgrind fails to 
uncover the problem.
From my understanding of how inputstream_test works (correct me if I'm wrong), 
it looks like everything is running in a single thread.  So the MockConnection 
emulates the various asio calls on the same stack, and they'll look more 
like blocking calls at runtime.  If that's the case, the referenced object will 
still be living on the stack, so the reference will point to valid memory and 
valgrind won't complain.

> InputStreamImpl::ReadBlockContinuation stores wrong pointers of buffers
> ---
>
> Key: HDFS-9108
> URL: https://issues.apache.org/jira/browse/HDFS-9108
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
> Environment: Ubuntu x86_64, gcc 4.8.2
>Reporter: James Clampffer
>Assignee: Haohui Mai
>Priority: Blocker
> Attachments: 9108-async-repro.patch, 9108-async-repro.patch1, 
> HDFS-9108.000.patch
>
>
> Somewhere between InputStream->PositionRead and the asio code the pointer to 
> the destination buffer gets lost.  PositionRead will correctly return the 
> number of bytes read but the buffer won't be filled.
> This only seems to affect the remote_block_reader; RPC calls are working.
> Valgrind error:
> Syscall param recvmsg(msg.msg_iov) points to uninitialised byte(s)
> msg.msg_iov[0] should equal the buffer pointer passed to PositionRead.
> Hit when using a promise to make the async call block until completion:
> char buf[50];
> auto stat = std::make_shared<std::promise<Status>>();
> std::future<Status> future(stat->get_future());
> size_t readCount = 0;
> auto h = [stat, &readCount, buf](const Status &s, size_t bytes) {
>   stat->set_value(s);
>   readCount = bytes;
> };
> inputStream->PositionRead(buf, 50, 0, h);
>
> //wait for async to finish
> future.get();



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9040) Erasure coding: Refactor DFSStripedOutputStream (Move Namenode RPC Requests to Coordinator)

2015-09-22 Thread Walter Su (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9040?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14903889#comment-14903889
 ] 

Walter Su commented on HDFS-9040:
-

bq. 1. Flush out all the enqueued data to DataNodes before handling failures 
and bumping GS.
Great. It's much simpler. In checkStreamerFailures(boolean toClose), you will 
call flushAllInternals anyway before you start handling failures. It doesn't hurt 
to flush twice, so is {{toClose}} unnecessary?
bq. 3. During the test I found that some data streamer may take a long time to 
close/create datanode connections. This may cause other streamers' connections 
timeout. Thus the new patch adds an upper bound for the total waiting time of 
creating datanode connections during failure handling.
bq. +   && remaingTime > waitInterval * 2) {
That approach is not good enough. {{socketTimeout}} is 6s by default, and here you 
wait at most 4s. I remember you just called flushAllInternals() before. When 
dataQueue.size()==0, a healthy streamer could be asleep for at most 
{{halfSocketTimeout}}, aka 3s. So this streamer may have only 1s left to create the 
blockStream and offer to updateStreamerMap; if it doesn't finish in 1s, you kill 
it.
I think we should notify every dataQueue to wake up the streamers after 
markExternalErrorOnStreamers(), so every streamer has the full 4s. It would be even 
better if streamers started sending heartbeat packets while waiting for the other 
streamers, but that's too hard. (The timing budget is spelled out in the sketch 
below.)
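
To spell out that timing budget (a sketch using only the numbers from this discussion; not actual HDFS code, and the names approximate the patch's):

{code}
public class FailureHandlingBudget {
  public static void main(String[] args) {
    long socketTimeoutMs = 6000;                    // default socketTimeout per the comment
    long halfSocketTimeoutMs = socketTimeoutMs / 2; // an idle streamer may sleep this long
    long handlerWaitMs = 4000;                      // the patch waits at most waitInterval * 2

    // Worst case: the healthy streamer sleeps its full 3 s before noticing
    // the external error, leaving only 1 s to rebuild its blockStream.
    long leftForReconnectMs = handlerWaitMs - halfSocketTimeoutMs;
    System.out.println("worst-case time left to reconnect: " + leftForReconnectMs + " ms");
  }
}
{code}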
bq. 2.Instead of let each DataStreamer write their own last empty packet of the 
block, we do it in the StripedOutputStream level so that we can still bump GS 
for failure handling before some streamers close their internal blocks.
{code}
if (shouldEndBlockGroup()) {
  for (int i = 0; i < numAllBlocks; i++) {
final StripedDataStreamer s = setCurrentStreamer(i);
if (s.isHealthy()) {
  endBlock();
}
  }
}
{code}
The logic looks good. Until we have a solution for PIPELINE_CLOSE_RECOVERY, 
should we catch the exception thrown by endBlock() and ignore it, as sketched 
below?
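
One possible reading of that suggestion as code (a sketch on top of the quoted block; the try/catch is the proposed addition, not part of the patch):

{code}
if (shouldEndBlockGroup()) {
  for (int i = 0; i < numAllBlocks; i++) {
    final StripedDataStreamer s = setCurrentStreamer(i);
    if (s.isHealthy()) {
      try {
        endBlock();
      } catch (IOException ioe) {
        // Swallow for now, so one streamer's failed close does not fail the
        // whole block group, until PIPELINE_CLOSE_RECOVERY has a real solution.
      }
    }
  }
}
{code}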

> Erasure coding: Refactor DFSStripedOutputStream (Move Namenode RPC Requests 
> to Coordinator)
> ---
>
> Key: HDFS-9040
> URL: https://issues.apache.org/jira/browse/HDFS-9040
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Walter Su
>Assignee: Jing Zhao
> Attachments: HDFS-9040-HDFS-7285.002.patch, 
> HDFS-9040-HDFS-7285.003.patch, HDFS-9040-HDFS-7285.004.patch, 
> HDFS-9040.00.patch, HDFS-9040.001.wip.patch, HDFS-9040.02.bgstreamer.patch
>
>
> The general idea is to simplify error handling logic.
> Proposal 1:
> A BlockGroupDataStreamer to communicate with the NN to allocate/update blocks, 
> and StripedDataStreamers only have to stream blocks to DNs.
> Proposal 2:
> See below the 
> [comment|https://issues.apache.org/jira/browse/HDFS-9040?focusedCommentId=14741388=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14741388]
>  from [~jingzhao].



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9039) Separate client and server side methods of o.a.h.hdfs.NameNodeProxies

2015-09-22 Thread Haohui Mai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9039?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haohui Mai updated HDFS-9039:
-
Summary: Separate client and server side methods of 
o.a.h.hdfs.NameNodeProxies  (was: Split o.a.h.hdfs.NameNodeProxies class into 
two classes in hadoop-hdfs-client and hadoop-hdfs modules respectively)

> Separate client and server side methods of o.a.h.hdfs.NameNodeProxies
> -
>
> Key: HDFS-9039
> URL: https://issues.apache.org/jira/browse/HDFS-9039
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Mingliang Liu
>Assignee: Mingliang Liu
> Attachments: HDFS-9039.000.patch, HDFS-9039.001.patch, 
> HDFS-9039.002.patch
>
>
> Currently the {{org.apache.hadoop.hdfs.NameNodeProxies}} class is used by 
> both {{org.apache.hadoop.hdfs.server}} package (for server side protocols) 
> and {{DFSClient}} class (for {{ClientProtocol}}). The {{DFSClient}} class 
> should be moved to {{hadoop-hdfs-client}} module (see [HDFS-8053 | 
> https://issues.apache.org/jira/browse/HDFS-8053]). As the 
> {{org.apache.hadoop.hdfs.NameNodeProxies}} class also depends on server side 
> protocols (e.g. {{JournalProtocol}} and {{NamenodeProtocol}}), we can't 
> simply move this class to the {{hadoop-hdfs-client}} module as well.
> This jira tracks the effort of moving {{ClientProtocol}} related static 
> methods in {{org.apache.hadoop.hdfs.NameNodeProxies}} class to 
> {{hadoop-hdfs-client}} module. A good place to put these static methods is a 
> new class named {{NameNodeProxiesClient}}.
> The checkstyle warnings can be addressed in [HDFS-8979], and removing the 
> _slf4j_ logger guards when calling {{LOG.debug()}} and {{LOG.trace()}} can be 
> addressed in [HDFS-8971].



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8733) Keep server related definition in hdfs.proto on server side

2015-09-22 Thread Haohui Mai (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8733?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14903909#comment-14903909
 ] 

Haohui Mai commented on HDFS-8733:
--

+1. Will commit shortly.

> Keep server related definition in hdfs.proto on server side
> ---
>
> Key: HDFS-8733
> URL: https://issues.apache.org/jira/browse/HDFS-8733
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Yi Liu
>Assignee: Mingliang Liu
> Attachments: HFDS-8733.000.patch
>
>
> In [HDFS-8726], we moved the protobuf files that define the client-server 
> protocols to {{hadoop-hdfs-client}} module. In {{hdfs.proto}}, there are 
> some server-related definitions. This jira tracks the effort of moving those 
> server-related definitions back to {{hadoop-hdfs}} module. A good place may be 
> a new file named {{HdfsServer.proto}}.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9039) Separate client and server side methods of o.a.h.hdfs.NameNodeProxies

2015-09-22 Thread Haohui Mai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9039?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haohui Mai updated HDFS-9039:
-
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 2.8.0
   Status: Resolved  (was: Patch Available)

I've committed the patch to trunk and branch-2. Thanks [~liuml07] for the 
contribution.

> Separate client and server side methods of o.a.h.hdfs.NameNodeProxies
> -
>
> Key: HDFS-9039
> URL: https://issues.apache.org/jira/browse/HDFS-9039
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Mingliang Liu
>Assignee: Mingliang Liu
> Fix For: 2.8.0
>
> Attachments: HDFS-9039.000.patch, HDFS-9039.001.patch, 
> HDFS-9039.002.patch
>
>
> Currently the {{org.apache.hadoop.hdfs.NameNodeProxies}} class is used by 
> both {{org.apache.hadoop.hdfs.server}} package (for server side protocols) 
> and {{DFSClient}} class (for {{ClientProtocol}}). The {{DFSClient}} class 
> should be moved to {{hadoop-hdfs-client}} module (see [HDFS-8053 | 
> https://issues.apache.org/jira/browse/HDFS-8053]). As the 
> {{org.apache.hadoop.hdfs.NameNodeProxies}} class also depends on server side 
> protocols (e.g. {{JournalProtocol}} and {{NamenodeProtocol}}), we can't 
> simply move this class to the {{hadoop-hdfs-client}} module as well.
> This jira tracks the effort of moving {{ClientProtocol}} related static 
> methods in {{org.apache.hadoop.hdfs.NameNodeProxies}} class to 
> {{hadoop-hdfs-client}} module. A good place to put these static methods is a 
> new class named {{NameNodeProxiesClient}}.
> The checkstyle warnings can be addressed in [HDFS-8979], and removing the 
> _slf4j_ logger guards when calling {{LOG.debug()}} and {{LOG.trace()}} can be 
> addressed in [HDFS-8971].



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HDFS-9125) Display help if the command option to "hdfs dfs " is not valid

2015-09-22 Thread nijel (JIRA)
nijel created HDFS-9125:
---

 Summary: Display help if the command option to "hdfs dfs " is not 
valid
 Key: HDFS-9125
 URL: https://issues.apache.org/jira/browse/HDFS-9125
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: nijel
Assignee: nijel
Priority: Minor


{noformat}
master:/home/nijel/hadoop-3.0.0-SNAPSHOT/bin # ./hdfs dfs -mkdirs
-mkdirs: Unknown command
{noformat}

It would be better to display the help info, e.g. as sketched below.
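
A sketch of the proposed behavior (hypothetical dispatch code with assumed names like {{knownCommands}} and {{printUsage}}; not the actual FsShell implementation):

{code}
// On an unrecognized option, print the full usage instead of only the error:
if (!knownCommands.contains(cmd)) {
  System.err.println(cmd + ": Unknown command");
  printUsage(System.err); // hypothetical helper printing the "hdfs dfs" help text
}
{code}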



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9039) Split o.a.h.hdfs.NameNodeProxies class into two classes in hadoop-hdfs-client and hadoop-hdfs modules respectively

2015-09-22 Thread Haohui Mai (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9039?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14903901#comment-14903901
 ] 

Haohui Mai commented on HDFS-9039:
--

+1 on the latest patch. Will commit shortly.

> Split o.a.h.hdfs.NameNodeProxies class into two classes in hadoop-hdfs-client 
> and hadoop-hdfs modules respectively
> --
>
> Key: HDFS-9039
> URL: https://issues.apache.org/jira/browse/HDFS-9039
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Mingliang Liu
>Assignee: Mingliang Liu
> Attachments: HDFS-9039.000.patch, HDFS-9039.001.patch, 
> HDFS-9039.002.patch
>
>
> Currently the {{org.apache.hadoop.hdfs.NameNodeProxies}} class is used by 
> both {{org.apache.hadoop.hdfs.server}} package (for server side protocols) 
> and {{DFSClient}} class (for {{ClientProtocol}}). The {{DFSClient}} class 
> should be moved to {{hadoop-hdfs-client}} module (see [HDFS-8053 | 
> https://issues.apache.org/jira/browse/HDFS-8053]). As the 
> {{org.apache.hadoop.hdfs.NameNodeProxies}} class also depends on server side 
> protocols (e.g. {{JournalProtocol}} and {{NamenodeProtocol}}), we can't 
> simply move this class to the {{hadoop-hdfs-client}} module as well.
> This jira tracks the effort of moving {{ClientProtocol}} related static 
> methods in {{org.apache.hadoop.hdfs.NameNodeProxies}} class to 
> {{hadoop-hdfs-client}} module. A good place to put these static methods is a 
> new class named {{NameNodeProxiesClient}}.
> The checkstyle warnings can be addressed in [HDFS-8979], and removing the 
> _slf4j_ logger guards when calling {{LOG.debug()}} and {{LOG.trace()}} can be 
> addressed in [HDFS-8971].



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9040) Erasure coding: Refactor DFSStripedOutputStream (Move Namenode RPC Requests to Coordinator)

2015-09-22 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9040?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14903906#comment-14903906
 ] 

Hadoop QA commented on HDFS-9040:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:red}-1{color} | pre-patch |  16m 23s | Findbugs (version ) appears to 
be broken on HDFS-7285. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:green}+1{color} | tests included |   0m  0s | The patch appears to 
include 9 new or modified test files. |
| {color:green}+1{color} | javac |   7m 58s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |  10m  4s | There were no new javadoc 
warning messages. |
| {color:red}-1{color} | release audit |   0m 15s | The applied patch generated 
1 release audit warnings. |
| {color:green}+1{color} | checkstyle |   1m  1s | There were no new checkstyle 
issues. |
| {color:red}-1{color} | whitespace |   0m 35s | The patch has 1  line(s) that 
end in whitespace. Use git apply --whitespace=fix. |
| {color:green}+1{color} | install |   1m 40s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 32s | The patch built with 
eclipse:eclipse. |
| {color:red}-1{color} | findbugs |   4m 50s | The patch appears to introduce 5 
new Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | native |   3m 12s | Pre-build of native portion |
| {color:green}+1{color} | hdfs tests | 187m 46s | Tests passed in hadoop-hdfs. 
|
| {color:green}+1{color} | hdfs tests |   0m 30s | Tests passed in 
hadoop-hdfs-client. |
| | | 234m 53s | |
\\
\\
|| Reason || Tests ||
| FindBugs | module:hadoop-hdfs |
| FindBugs | module:hadoop-hdfs-client |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12761763/HDFS-9040-HDFS-7285.004.patch
 |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | HDFS-7285 / 6fc9424 |
| Release Audit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12611/artifact/patchprocess/patchReleaseAuditProblems.txt
 |
| whitespace | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12611/artifact/patchprocess/whitespace.txt
 |
| Findbugs warnings | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12611/artifact/patchprocess/newPatchFindbugsWarningshadoop-hdfs.html
 |
| Findbugs warnings | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12611/artifact/patchprocess/newPatchFindbugsWarningshadoop-hdfs-client.html
 |
| hadoop-hdfs test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12611/artifact/patchprocess/testrun_hadoop-hdfs.txt
 |
| hadoop-hdfs-client test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12611/artifact/patchprocess/testrun_hadoop-hdfs-client.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12611/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf904.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12611/console |


This message was automatically generated.

> Erasure coding: Refactor DFSStripedOutputStream (Move Namenode RPC Requests 
> to Coordinator)
> ---
>
> Key: HDFS-9040
> URL: https://issues.apache.org/jira/browse/HDFS-9040
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Walter Su
>Assignee: Jing Zhao
> Attachments: HDFS-9040-HDFS-7285.002.patch, 
> HDFS-9040-HDFS-7285.003.patch, HDFS-9040-HDFS-7285.004.patch, 
> HDFS-9040.00.patch, HDFS-9040.001.wip.patch, HDFS-9040.02.bgstreamer.patch
>
>
> The general idea is to simplify error handling logic.
> -Proposal 1:-
> -A BlockGroupDataStreamer to communicate with the NN to allocate/update blocks, 
> and StripedDataStreamers only have to stream blocks to DNs.-
> Proposal 2:
> See below the 
> [comment|https://issues.apache.org/jira/browse/HDFS-9040?focusedCommentId=14741388=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14741388]
>  from [~jingzhao].



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9039) Separate client and server side methods of o.a.h.hdfs.NameNodeProxies

2015-09-22 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9039?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14903921#comment-14903921
 ] 

Hudson commented on HDFS-9039:
--

FAILURE: Integrated in Hadoop-trunk-Commit #8501 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/8501/])
HDFS-9039. Separate client and server side methods of 
o.a.h.hdfs.NameNodeProxies. Contributed by Mingliang Liu. (wheat9: rev 
63d9f1596c92206cce3b72e3214d2fb5f6242b90)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/ha/AbstractNNFailoverProxyProvider.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/ClientNamenodeProtocolPB.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocolPB/ClientNamenodeProtocolTranslatorPB.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/NameNodeProxiesClient.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/server/namenode/ha/WrappedFailoverProxyProvider.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/DFSAdmin.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/ClientNamenodeProtocolTranslatorPB.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSClient.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/PBHelper.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/NameNodeProxies.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/ha/ConfiguredFailoverProxyProvider.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/ha/WrappedFailoverProxyProvider.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/HAUtil.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/ha/TestRetryCacheWithHA.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/server/namenode/ha/AbstractNNFailoverProxyProvider.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocolPB/ClientNamenodeProtocolPB.java


> Separate client and server side methods of o.a.h.hdfs.NameNodeProxies
> -
>
> Key: HDFS-9039
> URL: https://issues.apache.org/jira/browse/HDFS-9039
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Mingliang Liu
>Assignee: Mingliang Liu
> Fix For: 2.8.0
>
> Attachments: HDFS-9039.000.patch, HDFS-9039.001.patch, 
> HDFS-9039.002.patch
>
>
> Currently the {{org.apache.hadoop.hdfs.NameNodeProxies}} class is used by 
> both {{org.apache.hadoop.hdfs.server}} package (for server side protocols) 
> and {{DFSClient}} class (for {{ClientProtocol}}). The {{DFSClient}} class 
> should be moved to {{hadoop-hdfs-client}} module (see [HDFS-8053 | 
> https://issues.apache.org/jira/browse/HDFS-8053]). As the 
> {{org.apache.hadoop.hdfs.NameNodeProxies}} class also depends on server side 
> protocols (e.g. {{JournalProtocol}} and {{NamenodeProtocol}}), we can't 
> simply move this class to the {{hadoop-hdfs-client}} module as well.
> This jira tracks the effort of moving {{ClientProtocol}} related static 
> methods in {{org.apache.hadoop.hdfs.NameNodeProxies}} class to 
> {{hadoop-hdfs-client}} module. A good place to put these static methods is a 
> new class named {{NameNodeProxiesClient}}.
> The checkstyle warnings can be addressed in [HDFS-8979], and removing the 
> _slf4j_ logger guards when calling {{LOG.debug()}} and {{LOG.trace()}} can be 
> addressed in [HDFS-8971].



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8733) Keep server related definition in hdfs.proto on server side

2015-09-22 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8733?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14903922#comment-14903922
 ] 

Hudson commented on HDFS-8733:
--

FAILURE: Integrated in Hadoop-trunk-Commit #8501 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/8501/])
HDFS-8733. Keep server related definition in hdfs.proto on server side. 
Contributed by Mingliang Liu. (wheat9: rev 
7c5c099324d9168114be2f1233d49fdb65a8c1f2)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/protocolPB/TestPBHelper.java
* hadoop-hdfs-project/hadoop-hdfs/src/main/proto/NamenodeProtocol.proto
* hadoop-hdfs-project/hadoop-hdfs/src/main/proto/HdfsServer.proto
* hadoop-hdfs-project/hadoop-hdfs/src/main/proto/JournalProtocol.proto
* hadoop-hdfs-project/hadoop-hdfs/src/main/proto/QJournalProtocol.proto
* hadoop-hdfs-project/hadoop-hdfs/src/main/proto/DatanodeProtocol.proto
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/NamenodeProtocolTranslatorPB.java
* hadoop-hdfs-project/hadoop-hdfs/src/main/proto/InterDatanodeProtocol.proto
* hadoop-hdfs-project/hadoop-hdfs-client/src/main/proto/hdfs.proto
* 
hadoop-hdfs-project/hadoop-hdfs/src/contrib/bkjournal/src/main/proto/bkjournal.proto
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/DatanodeProtocolClientSideTranslatorPB.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/DatanodeProtocolServerSideTranslatorPB.java
* hadoop-hdfs-project/hadoop-hdfs/src/contrib/bkjournal/pom.xml
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/PBHelper.java
* hadoop-hdfs-project/hadoop-hdfs/pom.xml
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/NamenodeProtocolServerSideTranslatorPB.java


> Keep server related definition in hdfs.proto on server side
> ---
>
> Key: HDFS-8733
> URL: https://issues.apache.org/jira/browse/HDFS-8733
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Yi Liu
>Assignee: Mingliang Liu
> Fix For: 2.8.0
>
> Attachments: HFDS-8733.000.patch
>
>
> In [HDFS-8726], we moved the protobuf files that define the client-server 
> protocols to {{hadoop-hdfs-client}} module. In {{hdfs.proto}}, there are 
> some server-related definitions. This jira tracks the effort of moving those 
> server-related definitions back to {{hadoop-hdfs}} module. A good place may be 
> a new file named {{HdfsServer.proto}}.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9123) Validation of a path ended with a '/'

2015-09-22 Thread Wei-Chiu Chuang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9123?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HDFS-9123:
--
Attachment: HDFS-9123.002.patch

> Validation of a path ended with a '/'
> -
>
> Key: HDFS-9123
> URL: https://issues.apache.org/jira/browse/HDFS-9123
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: fs
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
>Priority: Minor
> Attachments: HDFS-9123.001.patch, HDFS-9123.002.patch
>
>
> HDFS forbids copying from a directory to its subdirectory (e.g. hdfs dfs -cp 
> /abc /abc/xyz), as otherwise it could cause infinite copying (/abc/xyz/xyz, 
> /abc/xyz/xyz/xyz, ... etc.).
> However, if the source path ends with a '/' path separator, the existing 
> validation for sub-directories fails. For example, copying from / to /abc 
> would cause infinite copying, until the disk space is filled up.
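
For illustration, a minimal sketch of a trailing-slash-tolerant check (assumed helper names; this is not the actual FsShell/HDFS validation code):

{code}
public class SubdirCheck {
  /** True if dst equals src or lies under src, tolerating a trailing '/'. */
  static boolean isUnder(String src, String dst) {
    // Normalize: strip a trailing separator so "/abc/" and "/abc" compare alike.
    String s = (src.length() > 1 && src.endsWith("/"))
        ? src.substring(0, src.length() - 1) : src;
    String d = (dst.length() > 1 && dst.endsWith("/"))
        ? dst.substring(0, dst.length() - 1) : dst;
    return d.equals(s) || d.startsWith(s.equals("/") ? "/" : s + "/");
  }

  public static void main(String[] args) {
    System.out.println(isUnder("/", "/abc"));         // true: cp / /abc must be rejected
    System.out.println(isUnder("/abc/", "/abc/xyz")); // true even with the trailing '/'
    System.out.println(isUnder("/abc", "/abcd"));     // false: sibling with common prefix
  }
}
{code}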



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9123) Validation of a path ended with a '/'

2015-09-22 Thread Wei-Chiu Chuang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9123?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HDFS-9123:
--
Status: Open  (was: Patch Available)

> Validation of a path ended with a '/'
> -
>
> Key: HDFS-9123
> URL: https://issues.apache.org/jira/browse/HDFS-9123
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: fs
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
>Priority: Minor
> Attachments: HDFS-9123.001.patch, HDFS-9123.002.patch
>
>
> HDFS forbids copying from a directory to its subdirectory (e.g. hdfs dfs -cp 
> /abc /abc/xyz), as otherwise it could cause infinite copying (/abc/xyz/xyz, 
> /abc/xyz/xyz/xyz, ... etc.).
> However, if the source path ends with a '/' path separator, the existing 
> validation for sub-directories fails. For example, copying from / to /abc 
> would cause infinite copying, until the disk space is filled up.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9120) Metric logging values are truncated in NN Metrics log.

2015-09-22 Thread Kanaka Kumar Avvaru (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9120?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14904002#comment-14904002
 ] 

Kanaka Kumar Avvaru commented on HDFS-9120:
---

Thanks for reporting the issue [~archanat].

I think the message was truncated to avoid excessive logging. But it's true that 
there should be an option to get the complete information, or to limit the content 
size in a different way by extracting only the important details (e.g. log only the 
live node names, ignoring all other details like infoAddr, etc.).

For simplicity I am thinking of making the truncate length configurable and not 
truncating by default; see the sketch below. [~arpitagarwal], do you have any other 
view on this?
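
A minimal sketch of that proposal (a hypothetical helper, not the actual metrics logging code; here {{maxLen <= 0}} would mean "do not truncate"):

{code}
public class MetricLogTruncation {
  static String truncateForLog(String value, int maxLen) {
    if (maxLen <= 0 || value.length() <= maxLen) {
      return value;                            // no truncation requested or needed
    }
    return value.substring(0, maxLen) + "..."; // current behavior with maxLen = 128
  }

  public static void main(String[] args) {
    String liveNodes = "{\"host-10-xx-xxx-88:50076\":{\"infoAddr\":\"10.xx.xxx.88:0\"}}";
    System.out.println(truncateForLog(liveNodes, 16)); // truncated, ends with "..."
    System.out.println(truncateForLog(liveNodes, -1)); // proposed default: full value
  }
}
{code}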

> Metric logging values are truncated in NN Metrics log.
> --
>
> Key: HDFS-9120
> URL: https://issues.apache.org/jira/browse/HDFS-9120
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: logging
>Reporter: Archana T
>Assignee: Kanaka Kumar Avvaru
>
> In namenode-metrics.log, when a metric name-value pair is more than 128 
> characters, it is truncated as below --
> Example for the LiveNodes information ---
> vi namenode-metrics.log
> {color:red}
> 2015-09-22 10:34:37,891 
> NameNodeInfo:LiveNodes={"host-10-xx-xxx-88:50076":{"infoAddr":"10.xx.xxx.88:0","infoSecureAddr":"10.xx.xxx.88:52100","xferaddr":"10.xx.xxx.88:50076","l...
> {color}
> Here the complete information of the metric value is not logged; the rest of 
> the information is displayed as "...".
> Similarly for other metric values in NN metrics,
> whereas the DN metric log contains complete metric values.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9039) Separate client and server side methods of o.a.h.hdfs.NameNodeProxies

2015-09-22 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9039?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14903948#comment-14903948
 ] 

Hudson commented on HDFS-9039:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk-Java8 #425 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Java8/425/])
HDFS-9039. Separate client and server side methods of 
o.a.h.hdfs.NameNodeProxies. Contributed by Mingliang Liu. (wheat9: rev 
63d9f1596c92206cce3b72e3214d2fb5f6242b90)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/DFSAdmin.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/NameNodeProxiesClient.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocolPB/ClientNamenodeProtocolPB.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/ClientNamenodeProtocolTranslatorPB.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/ha/AbstractNNFailoverProxyProvider.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSClient.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/ClientNamenodeProtocolPB.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/ha/WrappedFailoverProxyProvider.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/server/namenode/ha/AbstractNNFailoverProxyProvider.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocolPB/ClientNamenodeProtocolTranslatorPB.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/server/namenode/ha/WrappedFailoverProxyProvider.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/ha/ConfiguredFailoverProxyProvider.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/ha/TestRetryCacheWithHA.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/PBHelper.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/NameNodeProxies.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/HAUtil.java


> Separate client and server side methods of o.a.h.hdfs.NameNodeProxies
> -
>
> Key: HDFS-9039
> URL: https://issues.apache.org/jira/browse/HDFS-9039
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Mingliang Liu
>Assignee: Mingliang Liu
> Fix For: 2.8.0
>
> Attachments: HDFS-9039.000.patch, HDFS-9039.001.patch, 
> HDFS-9039.002.patch
>
>
> Currently the {{org.apache.hadoop.hdfs.NameNodeProxies}} class is used by 
> both {{org.apache.hadoop.hdfs.server}} package (for server side protocols) 
> and {{DFSClient}} class (for {{ClientProtocol}}). The {{DFSClient}} class 
> should be moved to {{hadoop-hdfs-client}} module (see [HDFS-8053 | 
> https://issues.apache.org/jira/browse/HDFS-8053]). As the 
> {{org.apache.hadoop.hdfs.NameNodeProxies}} class also depends on server side 
> protocols (e.g. {{JournalProtocol}} and {{NamenodeProtocol}}), we can't 
> simply move this class to the {{hadoo-hdfs-client}} module as well.
> This jira tracks the effort of moving {{ClientProtocol}} related static 
> methods in {{org.apache.hadoop.hdfs.NameNodeProxies}} class to 
> {{hadoo-hdfs-client}} module. A good place to put these static methods is a 
> new class named {{NameNodeProxiesClient}}.
> The checkstyle warnings can be addressed in [HDFS-8979], and removing the 
> _slf4j_ logger guards when calling {{LOG.debug()}} and {{LOG.trace()}} can be 
> addressed in [HDFS-8971].



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8733) Keep server related definition in hdfs.proto on server side

2015-09-22 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8733?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14903949#comment-14903949
 ] 

Hudson commented on HDFS-8733:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk-Java8 #425 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Java8/425/])
HDFS-8733. Keep server related definition in hdfs.proto on server side. 
Contributed by Mingliang Liu. (wheat9: rev 
7c5c099324d9168114be2f1233d49fdb65a8c1f2)
* 
hadoop-hdfs-project/hadoop-hdfs/src/contrib/bkjournal/src/main/proto/bkjournal.proto
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/PBHelper.java
* hadoop-hdfs-project/hadoop-hdfs/src/main/proto/DatanodeProtocol.proto
* hadoop-hdfs-project/hadoop-hdfs/src/main/proto/JournalProtocol.proto
* hadoop-hdfs-project/hadoop-hdfs/src/contrib/bkjournal/pom.xml
* hadoop-hdfs-project/hadoop-hdfs/src/main/proto/HdfsServer.proto
* hadoop-hdfs-project/hadoop-hdfs/src/main/proto/NamenodeProtocol.proto
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/NamenodeProtocolServerSideTranslatorPB.java
* hadoop-hdfs-project/hadoop-hdfs-client/src/main/proto/hdfs.proto
* hadoop-hdfs-project/hadoop-hdfs/src/main/proto/InterDatanodeProtocol.proto
* hadoop-hdfs-project/hadoop-hdfs/pom.xml
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/DatanodeProtocolServerSideTranslatorPB.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* hadoop-hdfs-project/hadoop-hdfs/src/main/proto/QJournalProtocol.proto
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/NamenodeProtocolTranslatorPB.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/DatanodeProtocolClientSideTranslatorPB.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/protocolPB/TestPBHelper.java


> Keep server related definition in hdfs.proto on server side
> ---
>
> Key: HDFS-8733
> URL: https://issues.apache.org/jira/browse/HDFS-8733
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Yi Liu
>Assignee: Mingliang Liu
> Fix For: 2.8.0
>
> Attachments: HFDS-8733.000.patch
>
>
> In [HDFS-8726], we moved the protobuf files that define the client-server 
> protocols to the {{hadoop-hdfs-client}} module. In {{hdfs.proto}}, there are 
> some server-related definitions. This jira tracks the effort of moving those 
> server-related definitions back to the {{hadoop-hdfs}} module. A good place 
> may be a new file named {{HdfsServer.proto}}.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7529) Consolidate encryption zone related implementation into a single class

2015-09-22 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7529?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14903956#comment-14903956
 ] 

Hadoop QA commented on HDFS-7529:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | pre-patch |  18m 13s | Pre-patch trunk compilation is 
healthy. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:red}-1{color} | tests included |   0m  0s | The patch doesn't appear 
to include any new or modified tests.  Please justify why no new tests are 
needed for this patch. Also please list what manual steps were performed to 
verify this patch. |
| {color:green}+1{color} | javac |   7m 58s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |   9m 58s | There were no new javadoc 
warning messages. |
| {color:green}+1{color} | release audit |   0m 25s | The applied patch does 
not increase the total number of release audit warnings. |
| {color:red}-1{color} | checkstyle |   1m 23s | The applied patch generated  1 
new checkstyle issues (total was 354, now 354). |
| {color:green}+1{color} | whitespace |   0m  0s | The patch has no lines that 
end in whitespace. |
| {color:green}+1{color} | install |   1m 30s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 34s | The patch built with 
eclipse:eclipse. |
| {color:green}+1{color} | findbugs |   2m 28s | The patch does not introduce 
any new Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | native |   3m 17s | Pre-build of native portion |
| {color:red}-1{color} | hdfs tests |  99m  2s | Tests failed in hadoop-hdfs. |
| | | 144m 52s | |
\\
\\
|| Reason || Tests ||
| Failed unit tests | 
hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureToleration |
| Timed out tests | org.apache.hadoop.hdfs.server.datanode.web.dtp.TestDtpHttp2 
|
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12761787/HDFS-7529-004.patch |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / cc2b473 |
| checkstyle |  
https://builds.apache.org/job/PreCommit-HDFS-Build/12614/artifact/patchprocess/diffcheckstylehadoop-hdfs.txt
 |
| hadoop-hdfs test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12614/artifact/patchprocess/testrun_hadoop-hdfs.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12614/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf909.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12614/console |


This message was automatically generated.

> Consolidate encryption zone related implementation into a single class
> --
>
> Key: HDFS-7529
> URL: https://issues.apache.org/jira/browse/HDFS-7529
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Haohui Mai
>Assignee: Haohui Mai
>  Labels: BB2015-05-TBR
> Attachments: HDFS-7529-002.patch, HDFS-7529-003.patch, 
> HDFS-7529-004.patch, HDFS-7529.000.patch, HDFS-7529.001.patch
>
>
> This jira proposes to consolidate encryption zone related implementation to a 
> single class.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8733) Keep server related definition in hdfs.proto on server side

2015-09-22 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8733?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14903984#comment-14903984
 ] 

Hudson commented on HDFS-8733:
--

SUCCESS: Integrated in Hadoop-Yarn-trunk #1165 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/1165/])
HDFS-8733. Keep server related definition in hdfs.proto on server side. 
Contributed by Mingliang Liu. (wheat9: rev 
7c5c099324d9168114be2f1233d49fdb65a8c1f2)
* hadoop-hdfs-project/hadoop-hdfs/src/contrib/bkjournal/pom.xml
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/PBHelper.java
* hadoop-hdfs-project/hadoop-hdfs/src/main/proto/JournalProtocol.proto
* hadoop-hdfs-project/hadoop-hdfs/src/main/proto/InterDatanodeProtocol.proto
* hadoop-hdfs-project/hadoop-hdfs/pom.xml
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/DatanodeProtocolClientSideTranslatorPB.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/contrib/bkjournal/src/main/proto/bkjournal.proto
* hadoop-hdfs-project/hadoop-hdfs/src/main/proto/DatanodeProtocol.proto
* hadoop-hdfs-project/hadoop-hdfs/src/main/proto/NamenodeProtocol.proto
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/NamenodeProtocolTranslatorPB.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* hadoop-hdfs-project/hadoop-hdfs/src/main/proto/QJournalProtocol.proto
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/NamenodeProtocolServerSideTranslatorPB.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/protocolPB/TestPBHelper.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/DatanodeProtocolServerSideTranslatorPB.java
* hadoop-hdfs-project/hadoop-hdfs/src/main/proto/HdfsServer.proto
* hadoop-hdfs-project/hadoop-hdfs-client/src/main/proto/hdfs.proto


> Keep server related definition in hdfs.proto on server side
> ---
>
> Key: HDFS-8733
> URL: https://issues.apache.org/jira/browse/HDFS-8733
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Yi Liu
>Assignee: Mingliang Liu
> Fix For: 2.8.0
>
> Attachments: HFDS-8733.000.patch
>
>
> In [HDFS-8726], we moved the protobuf files that define the client-server 
> protocols to the {{hadoop-hdfs-client}} module. In {{hdfs.proto}}, there are 
> some server-related definitions. This jira tracks the effort of moving those 
> server-related definitions back to the {{hadoop-hdfs}} module. A good place 
> may be a new file named {{HdfsServer.proto}}.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9039) Separate client and server side methods of o.a.h.hdfs.NameNodeProxies

2015-09-22 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9039?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14903983#comment-14903983
 ] 

Hudson commented on HDFS-9039:
--

SUCCESS: Integrated in Hadoop-Yarn-trunk #1165 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/1165/])
HDFS-9039. Separate client and server side methods of 
o.a.h.hdfs.NameNodeProxies. Contributed by Mingliang Liu. (wheat9: rev 
63d9f1596c92206cce3b72e3214d2fb5f6242b90)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/server/namenode/ha/WrappedFailoverProxyProvider.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/ClientNamenodeProtocolPB.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/PBHelper.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/server/namenode/ha/AbstractNNFailoverProxyProvider.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/HAUtil.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/DFSAdmin.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/NameNodeProxies.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocolPB/ClientNamenodeProtocolTranslatorPB.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/ha/WrappedFailoverProxyProvider.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/NameNodeProxiesClient.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/ClientNamenodeProtocolTranslatorPB.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/ha/AbstractNNFailoverProxyProvider.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/ha/TestRetryCacheWithHA.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/ha/ConfiguredFailoverProxyProvider.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSClient.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocolPB/ClientNamenodeProtocolPB.java


> Separate client and server side methods of o.a.h.hdfs.NameNodeProxies
> -
>
> Key: HDFS-9039
> URL: https://issues.apache.org/jira/browse/HDFS-9039
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Mingliang Liu
>Assignee: Mingliang Liu
> Fix For: 2.8.0
>
> Attachments: HDFS-9039.000.patch, HDFS-9039.001.patch, 
> HDFS-9039.002.patch
>
>
> Currently the {{org.apache.hadoop.hdfs.NameNodeProxies}} class is used by 
> both {{org.apache.hadoop.hdfs.server}} package (for server side protocols) 
> and {{DFSClient}} class (for {{ClientProtocol}}). The {{DFSClient}} class 
> should be moved to {{hadoop-hdfs-client}} module (see [HDFS-8053 | 
> https://issues.apache.org/jira/browse/HDFS-8053]). As the 
> {{org.apache.hadoop.hdfs.NameNodeProxies}} class also depends on server side 
> protocols (e.g. {{JournalProtocol}} and {{NamenodeProtocol}}), we can't 
> simply move this class to the {{hadoop-hdfs-client}} module as well.
> This jira tracks the effort of moving {{ClientProtocol}} related static 
> methods in {{org.apache.hadoop.hdfs.NameNodeProxies}} class to 
> {{hadoop-hdfs-client}} module. A good place to put these static methods is a 
> new class named {{NameNodeProxiesClient}}.
> The checkstyle warnings can be addressed in [HDFS-8979], and removing the 
> _slf4j_ logger guards when calling {{LOG.debug()}} and {{LOG.trace()}} can be 
> addressed in [HDFS-8971].



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8053) Move DFSIn/OutputStream and related classes to hadoop-hdfs-client

2015-09-22 Thread Mingliang Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8053?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu updated HDFS-8053:

Attachment: HDFS-8053.000.patch

The v0 patch is a first effort at addressing this jira. As many of these classes 
are tightly coupled, it is hard to move them separately. However, it is still 
possible to split this patch further, e.g. by fixing the findbugs warnings in 
another jira, moving several static helper methods up front, etc.

Comments are welcome.

> Move DFSIn/OutputStream and related classes to hadoop-hdfs-client
> -
>
> Key: HDFS-8053
> URL: https://issues.apache.org/jira/browse/HDFS-8053
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: build
>Reporter: Haohui Mai
>Assignee: Mingliang Liu
> Attachments: HDFS-8053.000.patch
>
>
> This jira tracks the effort of moving the {{DFSInputStream}} and 
> {{DFSOutputStream}} classes from {{hadoop-hdfs}} to {{hadoop-hdfs-client}} 
> module.
> Guidelines:
> * As the {{DFSClient}} is heavily coupled to these two classes, we should 
> move it together.
> * Related classes should be addressed in separate jiras if they're 
> independent and complex enough.
> * The checkstyle warnings can be addressed in [HDFS-8979 | 
> https://issues.apache.org/jira/browse/HDFS-8979]
> * Removing the _slf4j_ logger guards when calling {{LOG.debug()}} and 
> {{LOG.trace()}} can be addressed in [HDFS-8971 | 
> https://issues.apache.org/jira/browse/HDFS-8971].



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9040) Erasure coding: Refactor DFSStripedOutputStream (Move Namenode RPC Requests to Coordinator)

2015-09-22 Thread Walter Su (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9040?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Walter Su updated HDFS-9040:

Description: 
The general idea is to simplify error handling logic.

-Proposal 1:-
-A BlockGroupDataStreamer to communicate with NN to allocate/update block, and 
StripedDataStreamer s only have to stream blocks to DNs.-

Proposal 2:
See below the 
[comment|https://issues.apache.org/jira/browse/HDFS-9040?focusedCommentId=14741388=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14741388]
 from [~jingzhao].

  was:
The general idea is to simplify error handling logic.

Proposal 1:
A BlockGroupDataStreamer to communicate with NN to allocate/update block, and 
StripedDataStreamer s only have to stream blocks to DNs.

Proposal 2:
See below the 
[comment|https://issues.apache.org/jira/browse/HDFS-9040?focusedCommentId=14741388=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14741388]
 from [~jingzhao].


> Erasure coding: Refactor DFSStripedOutputStream (Move Namenode RPC Requests 
> to Coordinator)
> ---
>
> Key: HDFS-9040
> URL: https://issues.apache.org/jira/browse/HDFS-9040
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Walter Su
>Assignee: Jing Zhao
> Attachments: HDFS-9040-HDFS-7285.002.patch, 
> HDFS-9040-HDFS-7285.003.patch, HDFS-9040-HDFS-7285.004.patch, 
> HDFS-9040.00.patch, HDFS-9040.001.wip.patch, HDFS-9040.02.bgstreamer.patch
>
>
> The general idea is to simplify error handling logic.
> -Proposal 1:-
> -A BlockGroupDataStreamer to communicate with NN to allocate/update block, 
> and StripedDataStreamer s only have to stream blocks to DNs.-
> Proposal 2:
> See below the 
> [comment|https://issues.apache.org/jira/browse/HDFS-9040?focusedCommentId=14741388=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14741388]
>  from [~jingzhao].



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7285) Erasure Coding Support inside HDFS

2015-09-22 Thread Zhe Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7285?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14903897#comment-14903897
 ] 

Zhe Zhang commented on HDFS-7285:
-

The Jenkins [job | https://builds.apache.org/job/Hadoop-HDFS-7285-Merge/] is 
not showing any new failed tests. I just updated the feature branch with the 
latest trunk changes. It was a force push because HDFS-8920 was committed while 
I was testing the git merge result locally, so I just cherry-picked HDFS-8920; 
there was no conflict.

> Erasure Coding Support inside HDFS
> --
>
> Key: HDFS-7285
> URL: https://issues.apache.org/jira/browse/HDFS-7285
> Project: Hadoop HDFS
>  Issue Type: New Feature
>Reporter: Weihua Jiang
>Assignee: Zhe Zhang
> Attachments: Compare-consolidated-20150824.diff, 
> Consolidated-20150707.patch, Consolidated-20150806.patch, 
> Consolidated-20150810.patch, ECAnalyzer.py, ECParser.py, 
> HDFS-7285-Consolidated-20150911.patch, HDFS-7285-initial-PoC.patch, 
> HDFS-7285-merge-consolidated-01.patch, 
> HDFS-7285-merge-consolidated-trunk-01.patch, 
> HDFS-7285-merge-consolidated.trunk.03.patch, 
> HDFS-7285-merge-consolidated.trunk.04.patch, 
> HDFS-EC-Merge-PoC-20150624.patch, HDFS-EC-merge-consolidated-01.patch, 
> HDFS-bistriped.patch, HDFSErasureCodingDesign-20141028.pdf, 
> HDFSErasureCodingDesign-20141217.pdf, HDFSErasureCodingDesign-20150204.pdf, 
> HDFSErasureCodingDesign-20150206.pdf, HDFSErasureCodingPhaseITestPlan.pdf, 
> HDFSErasureCodingSystemTestPlan-20150824.pdf, 
> HDFSErasureCodingSystemTestReport-20150826.pdf, fsimage-analysis-20150105.pdf
>
>
> Erasure Coding (EC) can greatly reduce storage overhead without sacrificing 
> data reliability, compared to the existing HDFS 3-replica approach. For 
> example, if we use a 10+4 Reed-Solomon coding, we can tolerate the loss of 4 
> blocks, with a storage overhead of only 40%. This makes EC a quite attractive 
> alternative for big data storage, particularly for cold data. 
> Facebook had a related open source project called HDFS-RAID. It used to be 
> one of the contrib packages in HDFS but was removed as of Hadoop 2.0 for 
> maintenance reasons. The drawbacks are: 1) it sits on top of HDFS and depends 
> on MapReduce to do encoding and decoding tasks; 2) it can only be used for 
> cold files that are not intended to be appended anymore; 3) the pure Java EC 
> coding implementation is extremely slow in practical use. Due to these, it 
> might not be a good idea to just bring HDFS-RAID back.
> We (Intel and Cloudera) are working on a design to build EC into HDFS that 
> gets rid of any external dependencies, making it self-contained and 
> independently maintainable. This design lays the EC feature on top of the 
> storage type support and stays compatible with existing HDFS features like 
> caching, snapshots, encryption, and high availability. The design will also 
> support different EC coding schemes, implementations, and policies for 
> different deployment scenarios. By utilizing advanced libraries (e.g. the 
> Intel ISA-L library), an implementation can greatly improve the performance 
> of EC encoding/decoding and make the EC solution even more attractive. We 
> will post the design document soon. 
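
To make the arithmetic above explicit: with 10+4 Reed-Solomon, every 10 data 
blocks are stored as 14 blocks (10 data + 4 parity), so the overhead is 
4/10 = 40% while any 4 of the 14 blocks may be lost; 3-replication stores 30 
blocks for the same 10 data blocks, i.e. 200% overhead, yet tolerates the loss 
of only 2 copies of each block.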



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9123) Validation of a path ended with a '/'

2015-09-22 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9123?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14903912#comment-14903912
 ] 

Hadoop QA commented on HDFS-9123:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:red}-1{color} | pre-patch |  17m 27s | Findbugs (version ) appears to 
be broken on trunk. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:green}+1{color} | tests included |   0m  0s | The patch appears to 
include 1 new or modified test files. |
| {color:green}+1{color} | javac |   7m 44s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |  10m  3s | There were no new javadoc 
warning messages. |
| {color:green}+1{color} | release audit |   0m 24s | The applied patch does 
not increase the total number of release audit warnings. |
| {color:green}+1{color} | checkstyle |   1m 37s | There were no new checkstyle 
issues. |
| {color:green}+1{color} | whitespace |   0m  0s | The patch has no lines that 
end in whitespace. |
| {color:green}+1{color} | install |   1m 37s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 33s | The patch built with 
eclipse:eclipse. |
| {color:green}+1{color} | findbugs |   4m 20s | The patch does not introduce 
any new Findbugs (version 3.0.0) warnings. |
| {color:red}-1{color} | common tests |  22m 14s | Tests failed in 
hadoop-common. |
| {color:green}+1{color} | hdfs tests | 163m 42s | Tests passed in hadoop-hdfs. 
|
| | | 229m 44s | |
\\
\\
|| Reason || Tests ||
| Failed unit tests | hadoop.ipc.TestRPC |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12761764/HDFS-9123.001.patch |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / cc2b473 |
| hadoop-common test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12613/artifact/patchprocess/testrun_hadoop-common.txt
 |
| hadoop-hdfs test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12613/artifact/patchprocess/testrun_hadoop-hdfs.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12613/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf902.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12613/console |


This message was automatically generated.

> Validation of a path ended with a '/'
> -
>
> Key: HDFS-9123
> URL: https://issues.apache.org/jira/browse/HDFS-9123
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: fs
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
>Priority: Minor
> Attachments: HDFS-9123.001.patch
>
>
> HDFS forbids copying from a directory to its subdirectory (e.g. hdfs dfs -cp 
> /abc /abc/xyz), as otherwise it could cause infinite copying (/abc/xyz/xyz, 
> /abc/xyz/xyz/xyz, /abc/xyz/xyz/xyz/xyz, ... etc.).
> However, if the source path ends with a '/' path separator, the existing 
> validation for sub-directories fails. For example, copying from / to /abc 
> would cause infinite copying until the disk space is filled up.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8733) Keep server related definition in hdfs.proto on server side

2015-09-22 Thread Haohui Mai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8733?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haohui Mai updated HDFS-8733:
-
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 2.8.0
   Status: Resolved  (was: Patch Available)

I've committed the patch to trunk and branch-2. Thanks [~liuml07] for the 
contribution.

> Keep server related definition in hdfs.proto on server side
> ---
>
> Key: HDFS-8733
> URL: https://issues.apache.org/jira/browse/HDFS-8733
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Yi Liu
>Assignee: Mingliang Liu
> Fix For: 2.8.0
>
> Attachments: HFDS-8733.000.patch
>
>
> In [HDFS-8726], we moved the protobuf files that define the client-server 
> protocols to the {{hadoop-hdfs-client}} module. In {{hdfs.proto}}, there are 
> some server-related definitions. This jira tracks the effort of moving those 
> server-related definitions back to the {{hadoop-hdfs}} module. A good place 
> may be a new file named {{HdfsServer.proto}}.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9123) Validation of a path ended with a '/'

2015-09-22 Thread Wei-Chiu Chuang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9123?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HDFS-9123:
--
Status: Patch Available  (was: Open)

My patch did not touch IPC at all. I ran all the tests locally and did not see 
the same failure again.

> Validation of a path ended with a '/'
> -
>
> Key: HDFS-9123
> URL: https://issues.apache.org/jira/browse/HDFS-9123
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: fs
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
>Priority: Minor
> Attachments: HDFS-9123.001.patch, HDFS-9123.002.patch
>
>
> HDFS forbids copying from a directory to its subdirectory (e.g. hdfs dfs -cp 
> /abc /abc/xyz), as otherwise it could cause infinite copying (/abc/xyz/xyz, 
> /abc/xyz/xyz/xyz, /abc/xyz/xyz/xyz/xyz, ... etc.).
> However, if the source path ends with a '/' path separator, the existing 
> validation for sub-directories fails. For example, copying from / to /abc 
> would cause infinite copying until the disk space is filled up.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8733) Keep server related definition in hdfs.proto on server side

2015-09-22 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8733?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14903968#comment-14903968
 ] 

Hudson commented on HDFS-8733:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk #2371 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/2371/])
HDFS-8733. Keep server related definition in hdfs.proto on server side. 
Contributed by Mingliang Liu. (wheat9: rev 
7c5c099324d9168114be2f1233d49fdb65a8c1f2)
* hadoop-hdfs-project/hadoop-hdfs/pom.xml
* 
hadoop-hdfs-project/hadoop-hdfs/src/contrib/bkjournal/src/main/proto/bkjournal.proto
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/DatanodeProtocolServerSideTranslatorPB.java
* hadoop-hdfs-project/hadoop-hdfs/src/contrib/bkjournal/pom.xml
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/PBHelper.java
* hadoop-hdfs-project/hadoop-hdfs/src/main/proto/DatanodeProtocol.proto
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/NamenodeProtocolTranslatorPB.java
* hadoop-hdfs-project/hadoop-hdfs/src/main/proto/JournalProtocol.proto
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/protocolPB/TestPBHelper.java
* hadoop-hdfs-project/hadoop-hdfs/src/main/proto/HdfsServer.proto
* hadoop-hdfs-project/hadoop-hdfs/src/main/proto/QJournalProtocol.proto
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/NamenodeProtocolServerSideTranslatorPB.java
* hadoop-hdfs-project/hadoop-hdfs/src/main/proto/NamenodeProtocol.proto
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/DatanodeProtocolClientSideTranslatorPB.java
* hadoop-hdfs-project/hadoop-hdfs-client/src/main/proto/hdfs.proto
* hadoop-hdfs-project/hadoop-hdfs/src/main/proto/InterDatanodeProtocol.proto


> Keep server related definition in hdfs.proto on server side
> ---
>
> Key: HDFS-8733
> URL: https://issues.apache.org/jira/browse/HDFS-8733
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Yi Liu
>Assignee: Mingliang Liu
> Fix For: 2.8.0
>
> Attachments: HFDS-8733.000.patch
>
>
> In [HDFS-8726], we moved the protobuf files that define the client-server 
> protocols to the {{hadoop-hdfs-client}} module. In {{hdfs.proto}}, there are 
> some server-related definitions. This jira tracks the effort of moving those 
> server-related definitions back to the {{hadoop-hdfs}} module. A good place 
> may be a new file named {{HdfsServer.proto}}.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9039) Separate client and server side methods of o.a.h.hdfs.NameNodeProxies

2015-09-22 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9039?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14903967#comment-14903967
 ] 

Hudson commented on HDFS-9039:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk #2371 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/2371/])
HDFS-9039. Separate client and server side methods of 
o.a.h.hdfs.NameNodeProxies. Contributed by Mingliang Liu. (wheat9: rev 
63d9f1596c92206cce3b72e3214d2fb5f6242b90)
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocolPB/ClientNamenodeProtocolTranslatorPB.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/PBHelper.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/ha/ConfiguredFailoverProxyProvider.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/DFSAdmin.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/server/namenode/ha/AbstractNNFailoverProxyProvider.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/HAUtil.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSClient.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/NameNodeProxies.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/ha/TestRetryCacheWithHA.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/server/namenode/ha/WrappedFailoverProxyProvider.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/ClientNamenodeProtocolPB.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/ClientNamenodeProtocolTranslatorPB.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/ha/AbstractNNFailoverProxyProvider.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/NameNodeProxiesClient.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/ha/WrappedFailoverProxyProvider.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocolPB/ClientNamenodeProtocolPB.java


> Separate client and server side methods of o.a.h.hdfs.NameNodeProxies
> -
>
> Key: HDFS-9039
> URL: https://issues.apache.org/jira/browse/HDFS-9039
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Mingliang Liu
>Assignee: Mingliang Liu
> Fix For: 2.8.0
>
> Attachments: HDFS-9039.000.patch, HDFS-9039.001.patch, 
> HDFS-9039.002.patch
>
>
> Currently the {{org.apache.hadoop.hdfs.NameNodeProxies}} class is used by 
> both {{org.apache.hadoop.hdfs.server}} package (for server side protocols) 
> and {{DFSClient}} class (for {{ClientProtocol}}). The {{DFSClient}} class 
> should be moved to {{hadoop-hdfs-client}} module (see [HDFS-8053 | 
> https://issues.apache.org/jira/browse/HDFS-8053]). As the 
> {{org.apache.hadoop.hdfs.NameNodeProxies}} class also depends on server side 
> protocols (e.g. {{JournalProtocol}} and {{NamenodeProtocol}}), we can't 
> simply move this class to the {{hadoop-hdfs-client}} module as well.
> This jira tracks the effort of moving {{ClientProtocol}} related static 
> methods in {{org.apache.hadoop.hdfs.NameNodeProxies}} class to 
> {{hadoop-hdfs-client}} module. A good place to put these static methods is a 
> new class named {{NameNodeProxiesClient}}.
> The checkstyle warnings can be addressed in [HDFS-8979], and removing the 
> _slf4j_ logger guards when calling {{LOG.debug()}} and {{LOG.trace()}} can be 
> addressed in [HDFS-8971].



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9108) InputStreamImpl::ReadBlockContinuation stores wrong pointers of buffers

2015-09-22 Thread Haohui Mai (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9108?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14903575#comment-14903575
 ] 

Haohui Mai commented on HDFS-9108:
--

I'm able to reproduce the issue by just using the main thread to run the 
{{io_service}}. My guess is that in the test the buffer is at the top of the 
stack, which happens to stay valid the whole time.

> InputStreamImpl::ReadBlockContinuation stores wrong pointers of buffers
> ---
>
> Key: HDFS-9108
> URL: https://issues.apache.org/jira/browse/HDFS-9108
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
> Environment: Ubuntu x86_64, gcc 4.8.2
>Reporter: James Clampffer
>Assignee: Haohui Mai
>Priority: Blocker
> Attachments: 9108-async-repro.patch, 9108-async-repro.patch1, 
> HDFS-9108.000.patch
>
>
> Somewhere between InputStream->PositionRead and the asio code the pointer to 
> the destination buffer gets lost.  PositionRead will correctly return the 
> number of bytes read but the buffer won't be filled.
> This only seems to affect the remote_block_reader; RPC calls are working.
> Valgrind error:
> Syscall param recvmsg(msg.msg_iov) points to uninitialised byte(s)
> msg.msg_iov[0] should equal the buffer pointer passed to PositionRead
> Hit when using a promise to make the async call block until completion. 
> {noformat}
> char buf[50];
> size_t readCount = 0;
> auto stat = std::make_shared<std::promise<Status>>();
> std::future<Status> future(stat->get_future());
> // the handler publishes the status and byte count once the async read ends
> auto h = [stat, &readCount, buf](const Status &s, size_t bytes) {
>   stat->set_value(s);
>   readCount = bytes;
> };
> inputStream->PositionRead(buf, 50, 0, h);
> // wait for the async call to finish
> future.get();
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9004) Add upgrade domain to DatanodeInfo

2015-09-22 Thread Ming Ma (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9004?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14903600#comment-14903600
 ] 

Ming Ma commented on HDFS-9004:
---

Thanks [~eddyxu],  [~shahrs87] and [~ctrezzo].

> Add upgrade domain to DatanodeInfo
> --
>
> Key: HDFS-9004
> URL: https://issues.apache.org/jira/browse/HDFS-9004
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Ming Ma
>Assignee: Ming Ma
> Fix For: 3.0.0, 2.8.0
>
> Attachments: HDFS-9004-2.patch, HDFS-9004-3.patch, HDFS-9004.patch
>
>
> As part of the upgrade domain feature, we first need to add an upgrade domain 
> string to {{DatanodeInfo}}. It includes things like (a rough sketch follows 
> the list):
> * Add a new field to DatanodeInfo.
> * Modify protobuf for DatanodeInfo.
> * Update DatanodeInfo.getDatanodeReport to include upgrade domain.
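
A minimal, self-contained sketch of the first bullet; the accessor names follow 
the obvious convention and are assumptions here, and the real class of course 
carries many more members plus the protobuf plumbing:

{noformat}
// Toy stand-in for DatanodeInfo showing only the new field.
public class DatanodeInfo {
  private String upgradeDomain;  // null until an upgrade domain is assigned

  public String getUpgradeDomain() {
    return upgradeDomain;
  }

  public void setUpgradeDomain(String upgradeDomain) {
    this.upgradeDomain = upgradeDomain;
  }
}
{noformat}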



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9123) Validation of a path ended with a '/'

2015-09-22 Thread Wei-Chiu Chuang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9123?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HDFS-9123:
--
Attachment: (was: HDFS-9123.001.patch)

> Validation of a path ended with a '/'
> -
>
> Key: HDFS-9123
> URL: https://issues.apache.org/jira/browse/HDFS-9123
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: fs
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
>Priority: Minor
>
> HDFS forbids copying from a directory to its subdirectory (e.g. hdfs dfs -cp 
> /abc /abc/xyz), as otherwise it could cause infinite copying (/abc/xyz/xyz, 
> /abc/xyz/xyz/xyz, /abc/xyz/xyz/xyz/xyz, ... etc.).
> However, if the source path ends with a '/' path separator, the existing 
> validation for sub-directories fails. For example, copying from / to /abc 
> would cause infinite copying until the disk space is filled up.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9040) Erasure coding: Refactor DFSStripedOutputStream (Move Namenode RPC Requests to Coordinator)

2015-09-22 Thread Jing Zhao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9040?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jing Zhao updated HDFS-9040:

Attachment: HDFS-9040-HDFS-7285.004.patch

Update the patch based on our discussion. The main changes compared with the 003 
version:
# Flush out all the enqueued data to the DataNodes before handling failures and 
bumping the GS.
# Instead of letting each DataStreamer write its own last empty packet of the 
block, we do it at the StripedOutputStream level so that we can still bump the 
GS for failure handling before some streamers close their internal blocks.
# During testing I found that some data streamers may take a long time to 
close/create datanode connections, which may cause other streamers' connections 
to time out. Thus the new patch adds an upper bound on the total waiting time 
for creating datanode connections during failure handling (see the sketch 
below).

A big missing part is testing. We need to add a lot more tests to cover all the 
different scenarios. Maybe we can use HDFS-9098 to do it.
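
A minimal sketch of the bounded-wait idea in item 3, using a deadline-based 
loop; the names and the {{Runnable}} stand-in for "create one datanode 
connection" are illustrative assumptions, not the patch code:

{noformat}
import java.util.List;
import java.util.concurrent.TimeUnit;

public class BoundedConnectSketch {
  /** Recreate connections, but never spend more than maxWaitMs in total,
   *  so slow streamers cannot stall the others into socket timeouts. */
  static int reconnectWithDeadline(List<Runnable> connectTasks, long maxWaitMs) {
    final long deadline =
        System.nanoTime() + TimeUnit.MILLISECONDS.toNanos(maxWaitMs);
    int connected = 0;
    for (Runnable task : connectTasks) {
      if (System.nanoTime() >= deadline) {
        break;  // give up on the remaining connections
      }
      task.run();  // stands in for closing/creating one datanode connection
      connected++;
    }
    return connected;
  }
}
{noformat}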

> Erasure coding: Refactor DFSStripedOutputStream (Move Namenode RPC Requests 
> to Coordinator)
> ---
>
> Key: HDFS-9040
> URL: https://issues.apache.org/jira/browse/HDFS-9040
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Walter Su
>Assignee: Jing Zhao
> Attachments: HDFS-9040-HDFS-7285.002.patch, 
> HDFS-9040-HDFS-7285.003.patch, HDFS-9040-HDFS-7285.004.patch, 
> HDFS-9040.00.patch, HDFS-9040.001.wip.patch, HDFS-9040.02.bgstreamer.patch
>
>
> The general idea is to simplify error handling logic.
> Proposal 1:
> A BlockGroupDataStreamer to communicate with NN to allocate/update block, and 
> StripedDataStreamer s only have to stream blocks to DNs.
> Proposal 2:
> See below the 
> [comment|https://issues.apache.org/jira/browse/HDFS-9040?focusedCommentId=14741388=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14741388]
>  from [~jingzhao].



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9123) Validation of a path ended with a '/'

2015-09-22 Thread Wei-Chiu Chuang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9123?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HDFS-9123:
--
Attachment: HDFS-9123.001.patch

My first patch, along with a test case.

> Validation of a path ended with a '/'
> -
>
> Key: HDFS-9123
> URL: https://issues.apache.org/jira/browse/HDFS-9123
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: fs
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
>Priority: Minor
> Attachments: HDFS-9123.001.patch
>
>
> HDFS forbids copying from a directory to its subdirectory (e.g. hdfs dfs -cp 
> /abc /abc/xyz), as otherwise it could cause infinite copying (/abc/xyz/xyz, 
> /abc/xyz/xyz/xyz, /abc/xyz/xyz/xyz/xyz, ... etc.).
> However, if the source path ends with a '/' path separator, the existing 
> validation for sub-directories fails. For example, copying from / to /abc 
> would cause infinite copying until the disk space is filled up.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9039) Split o.a.h.hdfs.NameNodeProxies class into two classes in hadoop-hdfs-client and hadoop-hdfs modules respectively

2015-09-22 Thread Mingliang Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9039?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu updated HDFS-9039:

Attachment: HDFS-9039.002.patch

Thanks [~wheat9] for your review.

The findbugs warning is caused by {{Unread public/protected field: 
org.apache.hadoop.hdfs.server.namenode.ha.AbstractNNFailoverProxyProvider.fallbackToSimpleAuth}}.
 As we moved this abstract class to the {{hadoop-hdfs-client}} module while 
keeping the {{ConfiguredFailoverProxyProvider}} class that extends it in the 
{{hadoop-hdfs}} module, it is better to make the field private in 
{{AbstractNNFailoverProxyProvider}} and expose a public getter, as sketched 
below. The v2 patch addresses this findbugs warning.
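
A minimal sketch of that change, with simplified generics; the field type and 
initialization here are assumptions for illustration:

{noformat}
import java.util.concurrent.atomic.AtomicBoolean;

// Sketch: the field becomes private, and cross-module subclasses such as
// ConfiguredFailoverProxyProvider read it through the public getter.
public abstract class AbstractNNFailoverProxyProvider<T> {
  private AtomicBoolean fallbackToSimpleAuth = new AtomicBoolean(false);

  public AtomicBoolean getFallbackToSimpleAuth() {
    return fallbackToSimpleAuth;
  }
}
{noformat}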

> Split o.a.h.hdfs.NameNodeProxies class into two classes in hadoop-hdfs-client 
> and hadoop-hdfs modules respectively
> --
>
> Key: HDFS-9039
> URL: https://issues.apache.org/jira/browse/HDFS-9039
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Mingliang Liu
>Assignee: Mingliang Liu
> Attachments: HDFS-9039.000.patch, HDFS-9039.001.patch, 
> HDFS-9039.002.patch
>
>
> Currently the {{org.apache.hadoop.hdfs.NameNodeProxies}} class is used by 
> both {{org.apache.hadoop.hdfs.server}} package (for server side protocols) 
> and {{DFSClient}} class (for {{ClientProtocol}}). The {{DFSClient}} class 
> should be moved to {{hadoop-hdfs-client}} module (see [HDFS-8053 | 
> https://issues.apache.org/jira/browse/HDFS-8053]). As the 
> {{org.apache.hadoop.hdfs.NameNodeProxies}} class also depends on server side 
> protocols (e.g. {{JournalProtocol}} and {{NamenodeProtocol}}), we can't 
> simply move this class to the {{hadoop-hdfs-client}} module as well.
> This jira tracks the effort of moving {{ClientProtocol}} related static 
> methods in {{org.apache.hadoop.hdfs.NameNodeProxies}} class to 
> {{hadoop-hdfs-client}} module. A good place to put these static methods is a 
> new class named {{NameNodeProxiesClient}}.
> The checkstyle warnings can be addressed in [HDFS-8979], and removing the 
> _slf4j_ logger guards when calling {{LOG.debug()}} and {{LOG.trace()}} can be 
> addressed in [HDFS-8971].



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9080) update htrace version to 4.0

2015-09-22 Thread Colin Patrick McCabe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9080?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin Patrick McCabe updated HDFS-9080:
---
Attachment: HDFS-9080.010.patch

address latest comments

> update htrace version to 4.0
> 
>
> Key: HDFS-9080
> URL: https://issues.apache.org/jira/browse/HDFS-9080
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.8.0
>Reporter: Colin Patrick McCabe
>Assignee: Colin Patrick McCabe
> Attachments: HDFS-9080.001.patch, HDFS-9080.002.patch, 
> HDFS-9080.003.patch, HDFS-9080.004.patch, HDFS-9080.005.patch, 
> HDFS-9080.006.patch, HDFS-9080.007.patch, HDFS-9080.009.patch, 
> HDFS-9080.010.patch, tracing-fsshell-put.png
>
>
> Update the HTrace library version Hadoop uses to htrace 4.0.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9123) Validation of a path ended with a '/'

2015-09-22 Thread Wei-Chiu Chuang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9123?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HDFS-9123:
--
Status: Patch Available  (was: Open)

Fixed the bug by checking whether the path ends with a separator ('/').
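
A minimal, self-contained sketch of such a check; the method name and exception 
are illustrative, not the patch itself:

{noformat}
public final class PathValidationSketch {
  /** Reject copying a directory into its own subdirectory, even when the
   *  source ends with a trailing '/' (e.g. "/" or "/abc/"). */
  static void checkNotSubdirectory(String src, String dst) {
    // normalize away a trailing separator so "/abc/" compares like "/abc"
    String normalized = (src.length() > 1 && src.endsWith("/"))
        ? src.substring(0, src.length() - 1) : src;
    String prefix = normalized.equals("/") ? "/" : normalized + "/";
    if (dst.equals(normalized) || dst.startsWith(prefix)) {
      throw new IllegalArgumentException(
          "Cannot copy " + src + " to its subdirectory " + dst);
    }
  }
}
{noformat}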

> Validation of a path ended with a '/'
> -
>
> Key: HDFS-9123
> URL: https://issues.apache.org/jira/browse/HDFS-9123
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: fs
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
>Priority: Minor
> Attachments: HDFS-9123.001.patch
>
>
> HDFS forbids copying from a directory to its subdirectory (e.g. hdfs dfs -cp 
> /abc /abc/xyz), as otherwise it could cause infinite copying (/abc/xyz/xyz, 
> /abc/xyz/xyz/xyz, /abc/xyz/xyz/xyz/xyz, ... etc.).
> However, if the source path ends with a '/' path separator, the existing 
> validation for sub-directories fails. For example, copying from / to /abc 
> would cause infinite copying until the disk space is filled up.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9108) InputStreamImpl::ReadBlockContinuation stores wrong pointers of buffers

2015-09-22 Thread Haohui Mai (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9108?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14903649#comment-14903649
 ] 

Haohui Mai commented on HDFS-9108:
--

I used your reproducer and called IoService::Run() directly. Your explanation 
makes sense.

> InputStreamImpl::ReadBlockContinuation stores wrong pointers of buffers
> ---
>
> Key: HDFS-9108
> URL: https://issues.apache.org/jira/browse/HDFS-9108
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
> Environment: Ubuntu x86_64, gcc 4.8.2
>Reporter: James Clampffer
>Assignee: Haohui Mai
>Priority: Blocker
> Attachments: 9108-async-repro.patch, 9108-async-repro.patch1, 
> HDFS-9108.000.patch
>
>
> Somewhere between InputStream->PositionRead and the asio code the pointer to 
> the destination buffer gets lost.  PositionRead will correctly return the 
> number of bytes read but the buffer won't be filled.
> This only seems to affect the remote_block_reader; RPC calls are working.
> Valgrind error:
> Syscall param recvmsg(msg.msg_iov) points to uninitialised byte(s)
> msg.msg_iov[0] should equal the buffer pointer passed to PositionRead
> Hit when using a promise to make the async call block until completion. 
> {noformat}
> char buf[50];
> size_t readCount = 0;
> auto stat = std::make_shared<std::promise<Status>>();
> std::future<Status> future(stat->get_future());
> // the handler publishes the status and byte count once the async read ends
> auto h = [stat, &readCount, buf](const Status &s, size_t bytes) {
>   stat->set_value(s);
>   readCount = bytes;
> };
> inputStream->PositionRead(buf, 50, 0, h);
> // wait for the async call to finish
> future.get();
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

