[jira] [Commented] (HDFS-7929) inotify unable fetch pre-upgrade edit log segments once upgrade starts

2015-05-26 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7929?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14560184#comment-14560184
 ] 

Colin Patrick McCabe commented on HDFS-7929:


The time needed to copy over the edit logs can cause performance problems in 
some clusters.  HDFS-8480 will fix this by using hardlinking instead.
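For illustration, a minimal sketch of the difference using standard 
java.nio.file calls (not the HDFS-8480 patch; the paths are placeholders): a 
hard link publishes an existing segment at a new path without copying its bytes.
{code}
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

public class LinkVsCopy {
  public static void main(String[] args) throws IOException {
    Path segment = Paths.get(args[0]);  // an existing edit log segment
    Path target = Paths.get(args[1]);   // the path readers expect to see
    // O(1) metadata operation, independent of segment size:
    Files.createLink(target, segment);
    // The copy-based equivalent, which is what can be slow for large logs:
    // Files.copy(segment, target);
  }
}
{code}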

 inotify unable fetch pre-upgrade edit log segments once upgrade starts
 --

 Key: HDFS-7929
 URL: https://issues.apache.org/jira/browse/HDFS-7929
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Zhe Zhang
Assignee: Zhe Zhang
 Fix For: 2.7.0

 Attachments: HDFS-7929-000.patch, HDFS-7929-001.patch, 
 HDFS-7929-002.patch, HDFS-7929-003.patch


 inotify is often used to periodically poll HDFS events. However, once an HDFS 
 upgrade has started, edit logs are moved to /previous on the NN, which is not 
 accessible. Moreover, once the upgrade is finalized, /previous is currently 
 lost forever.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8479) Erasure coding: fix striping related logic in FSDirWriteFileOp to sync with HDFS-8421

2015-05-26 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8479?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14560236#comment-14560236
 ] 

Hadoop QA commented on HDFS-8479:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | pre-patch |  14m 34s | Pre-patch HDFS-7285 compilation 
is healthy. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:green}+1{color} | tests included |   0m  0s | The patch appears to 
include 1 new or modified test files. |
| {color:green}+1{color} | javac |   7m 28s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |   9m 36s | There were no new javadoc 
warning messages. |
| {color:red}-1{color} | release audit |   0m 15s | The applied patch generated 
1 release audit warnings. |
| {color:green}+1{color} | checkstyle |   0m 37s | There were no new checkstyle 
issues. |
| {color:green}+1{color} | whitespace |   0m  0s | The patch has no lines that 
end in whitespace. |
| {color:green}+1{color} | install |   1m 36s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 32s | The patch built with 
eclipse:eclipse. |
| {color:red}-1{color} | findbugs |   3m 16s | The patch appears to introduce 1 
new Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | native |   3m 14s | Pre-build of native portion |
| {color:red}-1{color} | hdfs tests | 172m 22s | Tests failed in hadoop-hdfs. |
| | | 213m 38s | |
\\
\\
|| Reason || Tests ||
| FindBugs | module:hadoop-hdfs |
|  |  Inconsistent synchronization of 
org.apache.hadoop.hdfs.DFSOutputStream.streamer; locked 88% of time  
Unsynchronized access at DFSOutputStream.java:88% of time  Unsynchronized 
access at DFSOutputStream.java:[line 146] |
| Failed unit tests | hadoop.hdfs.TestEncryptedTransfer |
|   | hadoop.hdfs.server.namenode.TestFileTruncate |
|   | hadoop.hdfs.TestRecoverStripedFile |
|   | hadoop.hdfs.TestDFSStripedInputStream |
|   | hadoop.hdfs.server.namenode.TestAuditLogs |
|   | hadoop.hdfs.server.blockmanagement.TestBlockInfo |
|   | hadoop.hdfs.server.blockmanagement.TestBlockTokenWithDFS |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12735404/HDFS-8479-HDFS-7285.0.patch
 |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | HDFS-7285 / c9e0268 |
| Release Audit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/11132/artifact/patchprocess/patchReleaseAuditProblems.txt
 |
| Findbugs warnings | 
https://builds.apache.org/job/PreCommit-HDFS-Build/11132/artifact/patchprocess/newPatchFindbugsWarningshadoop-hdfs.html
 |
| hadoop-hdfs test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/11132/artifact/patchprocess/testrun_hadoop-hdfs.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/11132/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf906.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/11132/console |


This message was automatically generated.

 Erasure coding: fix striping related logic in FSDirWriteFileOp to sync with 
 HDFS-8421
 -

 Key: HDFS-8479
 URL: https://issues.apache.org/jira/browse/HDFS-8479
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Zhe Zhang
Assignee: Zhe Zhang
 Fix For: HDFS-7285

 Attachments: HDFS-8479-HDFS-7285.0.patch






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Work started] (HDFS-8482) Merge BlockInfo from HDFS-7285 branch to trunk

2015-05-26 Thread Zhe Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8482?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HDFS-8482 started by Zhe Zhang.
---
 Merge BlockInfo from HDFS-7285 branch to trunk
 --

 Key: HDFS-8482
 URL: https://issues.apache.org/jira/browse/HDFS-8482
 Project: Hadoop HDFS
  Issue Type: New Feature
Affects Versions: 2.7.0
Reporter: Zhe Zhang
Assignee: Zhe Zhang

 Per offline discussion with [~andrew.wang], we should probably shrink the 
 size of the consolidated HDFS-7285 patch by merging some mechanical changes 
 that are unrelated to EC-specific logic to trunk first. Those include 
 renames, refactors for subclassing purposes, and so forth. This JIRA 
 specifically aims to merge {{BlockInfo}} back into trunk.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8453) Erasure coding: properly assign start offset for internal blocks in a block group

2015-05-26 Thread Walter Su (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8453?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14560258#comment-14560258
 ] 

Walter Su commented on HDFS-8453:
-

I didn't see any difference between {{refreshLocatedBlock}} and the old 
version's {{getBlockAt}}.
This is the old version:
{code}
protected LocatedBlock getBlockAt(long blkStartOffset) 
{code}
This is the new version:
{code}
protected LocatedBlock refreshLocatedBlock(LocatedBlock block) {
  LocatedBlock lb = getBlockGroupAt(block.getStartOffset());
{code}
You don't pass blkStartOffset; you get blkStartOffset from inside, so there is 
no difference.
This line is what really matters; it resolves the issue:
{code}
+  bg.getStartOffset(), bg.isCorrupt(), null);
{code}
The solution is to make it meaningless.

 Erasure coding: properly assign start offset for internal blocks in a block 
 group
 -

 Key: HDFS-8453
 URL: https://issues.apache.org/jira/browse/HDFS-8453
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Zhe Zhang
Assignee: Zhe Zhang
 Attachments: HDFS-8453-HDFS-7285.00.patch


 {code}
   void actualGetFromOneDataNode(final DNAddrPair datanode,
 ...
   LocatedBlock block = getBlockAt(blockStartOffset);
 ...
   fetchBlockAt(block.getStartOffset());
 {code}
 The {{blockStartOffset}} here comes from an internal block. For parity blocks, 
 the offset will overlap with the next block group, and we may end up fetching 
 the wrong block. So we have to assign a meaningful start offset to the 
 internal blocks in a block group, especially the parity blocks.
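 For illustration, a rough sketch of the overlap with hypothetical numbers (an 
 RS-6-3 layout and 128 MB internal blocks are assumed; this is not taken from 
 the patch):
 {code}
 public class ParityOffsetOverlap {
   public static void main(String[] args) {
     long internalBlockSize = 128L << 20; // hypothetical internal block size
     int dataBlocks = 6;                  // RS-6-3 layout assumed
     long groupStart = 0;                 // start offset of this block group

     // The data blocks cover [groupStart, nextGroupStart).
     long nextGroupStart = groupStart + dataBlocks * internalBlockSize;

     // A parity block has no natural file offset; naively giving it the
     // "next" slot puts it at or beyond nextGroupStart, so a lookup by that
     // offset resolves to the following block group, i.e. the wrong block.
     long naiveParityOffset = nextGroupStart;
     System.out.println("overlaps next group: "
         + (naiveParityOffset >= nextGroupStart));
   }
 }
 {code}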



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8481) Erasure coding: remove workarounds for codec calculation

2015-05-26 Thread Kai Zheng (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8481?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14560267#comment-14560267
 ] 

Kai Zheng commented on HDFS-8481:
-

Thanks Zhe for the follow-on. Would this take care of both the client and the 
datanode? [~hitliuyi], do we have an existing related issue on the datanode 
side to consider? Thanks.

 Erasure coding: remove workarounds for codec calculation
 

 Key: HDFS-8481
 URL: https://issues.apache.org/jira/browse/HDFS-8481
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Zhe Zhang
Assignee: Zhe Zhang

 After HADOOP-11847 and related fixes, we should be able to properly calculate 
 decoded contents.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8429) The DomainSocketWatcher thread should not block other threads if it dies

2015-05-26 Thread zhouyingchao (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8429?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14560303#comment-14560303
 ] 

zhouyingchao commented on HDFS-8429:


Colin, thank you for pointing out this issue.  I've changed and uploaded the 
patch accordingly.

 The DomainSocketWatcher thread should not block other threads if it dies
 

 Key: HDFS-8429
 URL: https://issues.apache.org/jira/browse/HDFS-8429
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.6.0
Reporter: zhouyingchao
Assignee: zhouyingchao
 Attachments: HDFS-8429-001.patch, HDFS-8429-002.patch, 
 HDFS-8429-003.patch


 In our cluster, an application is hung when doing a short circuit read of 
 local hdfs block. By looking into the log, we found the DataNode's 
 DomainSocketWatcher.watcherThread has exited with following log:
 {code}
 ERROR org.apache.hadoop.net.unix.DomainSocketWatcher: 
 Thread[Thread-25,5,main] terminating on unexpected exception
 java.lang.NullPointerException
 at 
 org.apache.hadoop.net.unix.DomainSocketWatcher$2.run(DomainSocketWatcher.java:463)
 at java.lang.Thread.run(Thread.java:662)
 {code}
 Line 463 is the following code snippet:
 {code}
  try {
    for (int fd : fdSet.getAndClearReadableFds()) {
      sendCallbackAndRemove("getAndClearReadableFds", entries, fdSet, fd);
    }
 {code}
 getAndClearReadableFds is a native method which mallocs an int array. 
 Since our memory is very tight, it looks like the malloc failed and a NULL 
 pointer was returned.
 The bad thing is that other threads were then blocked with stacks like this:
 {code}
 DataXceiver for client 
 unix:/home/work/app/hdfs/c3prc-micloud/datanode/dn_socket [Waiting for 
 operation #1] daemon prio=10 tid=0x7f0c9c086d90 nid=0x8fc3 waiting on 
 condition [0x7f09b9856000]
java.lang.Thread.State: WAITING (parking)
 at sun.misc.Unsafe.park(Native Method)
 - parking to wait for  0x0007b0174808 (a 
 java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject)
 at java.util.concurrent.locks.LockSupport.park(LockSupport.java:156)
 at 
 java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:1987)
 at 
 org.apache.hadoop.net.unix.DomainSocketWatcher.add(DomainSocketWatcher.java:323)
 at 
 org.apache.hadoop.hdfs.server.datanode.ShortCircuitRegistry.createNewMemorySegment(ShortCircuitRegistry.java:322)
 at 
 org.apache.hadoop.hdfs.server.datanode.DataXceiver.requestShortCircuitShm(DataXceiver.java:403)
 at 
 org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opRequestShortCircuitShm(Receiver.java:214)
 at 
 org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:95)
 at 
 org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:235)
 at java.lang.Thread.run(Thread.java:662)
 {code}
 IMO, we should exit the DN so that users can know that something went wrong 
 and fix it.
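 A simplified sketch of that idea (illustrative only, not the attached patch): 
 catch unexpected Throwables in the watcher loop and terminate the process so 
 the failure is visible, instead of leaving callers blocked.
 {code}
 public class FailFastWatcher implements Runnable {
   @Override
   public void run() {
     try {
       while (true) {
         pollOnce();            // stand-in for the fdSet polling loop
       }
     } catch (Throwable t) {
       t.printStackTrace();     // surface the root cause in the logs
       System.exit(1);          // fail fast instead of hanging other threads
     }
   }

   private void pollOnce() throws Exception {
     // placeholder for fdSet.getAndClearReadableFds() + callback handling
     Thread.sleep(100);
   }
 }
 {code}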



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8481) Erasure coding: remove workarounds for codec calculation

2015-05-26 Thread Zhe Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8481?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14560318#comment-14560318
 ] 

Zhe Zhang commented on HDFS-8481:
-

Yes this aims to take care of both client and DN sides.

 Erasure coding: remove workarounds for codec calculation
 

 Key: HDFS-8481
 URL: https://issues.apache.org/jira/browse/HDFS-8481
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Zhe Zhang
Assignee: Zhe Zhang

 After HADOOP-11847 and related fixes, we should be able to properly calculate 
 decoded contents.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8467) [HDFS-Quota]Quota is getting updated after storage policy is modified even before mover command is executed.

2015-05-26 Thread Xiaoyu Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8467?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HDFS-8467:
-
Labels: QBST  (was: )

 [HDFS-Quota]Quota is getting updated after storage policy is modified even 
 before mover command is executed.
 

 Key: HDFS-8467
 URL: https://issues.apache.org/jira/browse/HDFS-8467
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Jagadesh Kiran N
Assignee: surendra singh lilhore
  Labels: QBST

 a. create a directory 
 {code}
 ./hdfs dfs -mkdir /d1
 {code}
 b. Set storage policy HOT on /d1
 {code}
 ./hdfs storagepolicies -setStoragePolicy -path /d1 -policy HOT
 {code}
 c. Set space quota to disk on /d1
 {code}
   ./hdfs dfsadmin -setSpaceQuota 1 -storageType DISK /d1
 {code}
 {code}
 ./hdfs dfs -count -v -q -h -t /d1
 DISK_QUOTA  REM_DISK_QUOTA  SSD_QUOTA  REM_SSD_QUOTA  ARCHIVE_QUOTA  REM_ARCHIVE_QUOTA  PATHNAME
 9.8 K       9.8 K           none       inf            none           inf                /d1
 {code}
 d. Insert 2 files, each of 1000 B
 {code}
 ./hdfs dfs -count -v -q -h -t /d1
 DISK_QUOTA  REM_DISK_QUOTA  SSD_QUOTA  REM_SSD_QUOTA  ARCHIVE_QUOTA  REM_ARCHIVE_QUOTA  PATHNAME
 9.8 K       3.9 K           none       inf            none           inf                /d1
 {code}
 e. Set ARCHIVE quota on /d1
 {code}
 ./hdfs dfsadmin -setSpaceQuota 1 -storageType ARCHIVE /d1
 ./hdfs dfs -count -v -q -h -t /d1
 DISK_QUOTA  REM_DISK_QUOTA  SSD_QUOTA  REM_SSD_QUOTA  ARCHIVE_QUOTA  REM_ARCHIVE_QUOTA  PATHNAME
 9.8 K       3.9 K           none       inf            9.8 K          9.8 K              /d1
 {code}
 f. Change the storage policy to COLD
 {code}
 ./hdfs storagepolicies -setStoragePolicy -path /d1 -policy COLD
 {code}
 g. Check the REM_ARCHIVE_QUOTA value
 {code}
 ./hdfs dfs -count -v -q -h -t /d1
 DISK_QUOTA  REM_DISK_QUOTA  SSD_QUOTA  REM_SSD_QUOTA  ARCHIVE_QUOTA  REM_ARCHIVE_QUOTA  PATHNAME
 9.8 K       9.8 K           none       inf            9.8 K          3.9 K              /d1
 {code}
 Here, even though the 'Mover' command has not been run, REM_ARCHIVE_QUOTA is 
 reduced and REM_DISK_QUOTA is increased.
 Expected: quota values should change only after the Mover succeeds.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (HDFS-8465) Mover is success even when space exceeds storage quota.

2015-05-26 Thread Xiaoyu Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8465?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao resolved HDFS-8465.
--
Resolution: Not A Problem

Thanks [~szetszwo] for providing the context information about Mover. 
[~archanat], I'm resolving this issue as by-design behavior. Let me know if you 
think differently.

 Mover is success even when space exceeds storage quota.
 ---

 Key: HDFS-8465
 URL: https://issues.apache.org/jira/browse/HDFS-8465
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: balancer & mover, namenode
Affects Versions: 2.7.0
Reporter: Archana T
Assignee: surendra singh lilhore
  Labels: QBST

 *Steps :*
 1. Create directory /dir 
 2. Set its storage policy to HOT --
 hdfs storagepolicies -setStoragePolicy -path /dir -policy HOT
 3. Insert files of total size 10,000B  into /dir.
 4. Set above path /dir ARCHIVE type quota to 5,000B --
 hdfs dfsadmin -setSpaceQuota 5000 -storageType ARCHIVE /dir
 {code}
 hdfs dfs -count -v -q -h -t /dir
 DISK_QUOTA  REM_DISK_QUOTA  SSD_QUOTA  REM_SSD_QUOTA  ARCHIVE_QUOTA  REM_ARCHIVE_QUOTA  PATHNAME
 none        inf             none       inf            4.9 K          4.9 K              /dir
 {code}
 5. Now change the policy of '/dir' to COLD
 6. Execute the Mover command
 *Observations:*
 1. Mover successfully moves all 10,000B to the ARCHIVE data path.
 2. The count command displays a negative value, '-59.4 K':
 {code}
 hdfs dfs -count -v -q -h -t /dir
 DISK_QUOTA  REM_DISK_QUOTA  SSD_QUOTA  REM_SSD_QUOTA  ARCHIVE_QUOTA  REM_ARCHIVE_QUOTA  PATHNAME
 none        inf             none       inf            4.9 K          -59.4 K            /dir
 {code}
 *Expected:*
 Mover should not succeed, as the ARCHIVE quota is only 5,000B.
 A negative value should not be displayed in the quota output.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8465) Mover is success even when space exceeds storage quota.

2015-05-26 Thread Xiaoyu Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8465?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HDFS-8465:
-
Labels: QBST  (was: )

 Mover is success even when space exceeds storage quota.
 ---

 Key: HDFS-8465
 URL: https://issues.apache.org/jira/browse/HDFS-8465
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: balancer & mover, namenode
Affects Versions: 2.7.0
Reporter: Archana T
Assignee: surendra singh lilhore
  Labels: QBST

 *Steps :*
 1. Create directory /dir 
 2. Set its storage policy to HOT --
 hdfs storagepolicies -setStoragePolicy -path /dir -policy HOT
 3. Insert files of total size 10,000B  into /dir.
 4. Set above path /dir ARCHIVE type quota to 5,000B --
 hdfs dfsadmin -setSpaceQuota 5000 -storageType ARCHIVE /dir
 {code}
 hdfs dfs -count -v -q -h -t /dir
 DISK_QUOTA  REM_DISK_QUOTA  SSD_QUOTA  REM_SSD_QUOTA  ARCHIVE_QUOTA  REM_ARCHIVE_QUOTA  PATHNAME
 none        inf             none       inf            4.9 K          4.9 K              /dir
 {code}
 5. Now change the policy of '/dir' to COLD
 6. Execute the Mover command
 *Observations:*
 1. Mover successfully moves all 10,000B to the ARCHIVE data path.
 2. The count command displays a negative value, '-59.4 K':
 {code}
 hdfs dfs -count -v -q -h -t /dir
 DISK_QUOTA  REM_DISK_QUOTA  SSD_QUOTA  REM_SSD_QUOTA  ARCHIVE_QUOTA  REM_ARCHIVE_QUOTA  PATHNAME
 none        inf             none       inf            4.9 K          -59.4 K            /dir
 {code}
 *Expected:*
 Mover should not succeed, as the ARCHIVE quota is only 5,000B.
 A negative value should not be displayed in the quota output.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7609) startup used too much time to load edits

2015-05-26 Thread Ming Ma (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7609?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ming Ma updated HDFS-7609:
--
Attachment: HDFS-7609-2.patch

[~jingzhao] you are right. It is possible that the NN starts the transition 
right after the retry cache check. Here is the updated patch to cover that 
scenario. Thanks. BTW, any ideas why the current implementation sets the NN to 
active at the beginning of the transition instead of at the end?

 startup used too much time to load edits
 

 Key: HDFS-7609
 URL: https://issues.apache.org/jira/browse/HDFS-7609
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: namenode
Affects Versions: 2.2.0
Reporter: Carrey Zhan
Assignee: Ming Ma
  Labels: BB2015-05-RFC
 Attachments: HDFS-7609-2.patch, 
 HDFS-7609-CreateEditsLogWithRPCIDs.patch, HDFS-7609.patch, 
 recovery_do_not_use_retrycache.patch


 One day my namenode crashed because two journal nodes timed out at the same 
 time under very high load, leaving behind about 100 million transactions in 
 the edit log. (I still have no idea why they were not rolled into the fsimage.)
 I tried to restart the namenode, but it showed that almost 20 hours would be 
 needed to finish, and it was loading fsedits most of the time. I also tried to 
 restart the namenode in recovery mode; the loading speed was no different.
 I looked into the stack trace and judged that it was caused by the retry 
 cache. So I set dfs.namenode.enable.retrycache to false, and the restart 
 process finished in half an hour.
 I think the retry cache is useless during startup, at least during the 
 recovery process.
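 For reference, the workaround described above corresponds to the following 
 hdfs-site.xml entry (a hedged example; verify the property name and default 
 against your Hadoop version before relying on it):
 {code}
 <property>
   <name>dfs.namenode.enable.retrycache</name>
   <value>false</value>
   <description>
     Workaround from this report: skip rebuilding the NameNode retry cache
     from the edit log during startup. The default is assumed to be true.
   </description>
 </property>
 {code}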



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8482) Merge BlockInfo from HDFS-7285 branch to trunk

2015-05-26 Thread Zhe Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8482?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhe Zhang updated HDFS-8482:

Affects Version/s: 2.7.0

 Merge BlockInfo from HDFS-7285 branch to trunk
 --

 Key: HDFS-8482
 URL: https://issues.apache.org/jira/browse/HDFS-8482
 Project: Hadoop HDFS
  Issue Type: New Feature
Affects Versions: 2.7.0
Reporter: Zhe Zhang
Assignee: Zhe Zhang

 Per offline discussion with [~andrew.wang], we should probably shrink the 
 size of the consolidated HDFS-7285 patch by merging some mechanical changes 
 that are unrelated to EC-specific logic to trunk first. Those include 
 renames, refactors for subclassing purposes, and so forth. This JIRA 
 specifically aims to merge {{BlockInfo}} back into trunk.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8306) Generate ACL and Xattr outputs in OIV XML outputs

2015-05-26 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8306?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14560315#comment-14560315
 ] 

Hadoop QA commented on HDFS-8306:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | pre-patch |  15m 15s | Pre-patch trunk compilation is 
healthy. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:green}+1{color} | tests included |   0m  0s | The patch appears to 
include 1 new or modified test files. |
| {color:green}+1{color} | javac |   7m 43s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |   9m 48s | There were no new javadoc 
warning messages. |
| {color:green}+1{color} | release audit |   0m 24s | The applied patch does 
not increase the total number of release audit warnings. |
| {color:green}+1{color} | checkstyle |   2m 16s | There were no new checkstyle 
issues. |
| {color:green}+1{color} | whitespace |   0m  4s | The patch has no lines that 
end in whitespace. |
| {color:green}+1{color} | install |   1m 36s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 34s | The patch built with 
eclipse:eclipse. |
| {color:green}+1{color} | findbugs |   3m  4s | The patch does not introduce 
any new Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | native |   3m 23s | Pre-build of native portion |
| {color:red}-1{color} | hdfs tests | 161m 28s | Tests failed in hadoop-hdfs. |
| | | 205m 40s | |
\\
\\
|| Reason || Tests ||
| Failed unit tests | 
hadoop.hdfs.tools.offlineImageViewer.TestOfflineImageViewer |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12735440/HDFS-8306.debug1.patch 
|
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / cdbd66b |
| hadoop-hdfs test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/11133/artifact/patchprocess/testrun_hadoop-hdfs.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/11133/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf904.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/11133/console |


This message was automatically generated.

 Generate ACL and Xattr outputs in OIV XML outputs
 -

 Key: HDFS-8306
 URL: https://issues.apache.org/jira/browse/HDFS-8306
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: namenode
Affects Versions: 2.7.0
Reporter: Lei (Eddy) Xu
Assignee: Lei (Eddy) Xu
Priority: Minor
  Labels: BB2015-05-TBR
 Attachments: HDFS-8306.000.patch, HDFS-8306.001.patch, 
 HDFS-8306.002.patch, HDFS-8306.003.patch, HDFS-8306.004.patch, 
 HDFS-8306.005.patch, HDFS-8306.debug0.patch, HDFS-8306.debug1.patch


 Currently, in the {{hdfs oiv}} XML output, not all fields of the fsimage are 
 output. This makes inspecting the {{fsimage}} via the XML output less 
 practical, and it also prevents recovering an fsimage from the XML file.
 This JIRA adds ACLs and XAttrs to the XML output as the first step toward the 
 goal described in HDFS-8061.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8062) Remove hard-coded values in favor of EC schema

2015-05-26 Thread Kai Sasaki (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8062?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14560150#comment-14560150
 ] 

Kai Sasaki commented on HDFS-8062:
--

[~drankye] Thank you for reviewing!
Because HDFS-7285 is often force-updated, it has always been a bit of tough 
work to rebase and recheck, so I want to update the patch all at once. Are 
there any other points we have to fix in the current patch?

 Remove hard-coded values in favor of EC schema
 --

 Key: HDFS-8062
 URL: https://issues.apache.org/jira/browse/HDFS-8062
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Kai Zheng
Assignee: Kai Sasaki
 Attachments: HDFS-8062-HDFS-7285-07.patch, 
 HDFS-8062-HDFS-7285-08.patch, HDFS-8062.1.patch, HDFS-8062.2.patch, 
 HDFS-8062.3.patch, HDFS-8062.4.patch, HDFS-8062.5.patch, HDFS-8062.6.patch


 Related issues about EC schemas on the NameNode side:
 HDFS-7859 changes the fsimage and edit log in the NameNode to persist EC 
 schemas;
 HDFS-7866 manages EC schemas in the NameNode: loading, and syncing between the 
 persisted ones in the image and the predefined ones in XML.
 This issue revisits all the places in the NameNode that use hard-coded values 
 in favor of {{ECSchema}}.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (HDFS-8467) [HDFS-Quota]Quota is getting updated after storage policy is modified even before mover command is executed.

2015-05-26 Thread Xiaoyu Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8467?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao resolved HDFS-8467.
--
Resolution: Not A Problem

I'm resolving this issue as by-design behavior. Let me know if you think 
differently.

 [HDFS-Quota]Quota is getting updated after storage policy is modified even 
 before mover command is executed.
 

 Key: HDFS-8467
 URL: https://issues.apache.org/jira/browse/HDFS-8467
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Jagadesh Kiran N
Assignee: surendra singh lilhore
  Labels: QBST

 a. create a directory 
 {code}
 ./hdfs dfs -mkdir /d1
 {code}
 b. Set storage policy HOT on /d1
 {code}
 ./hdfs storagepolicies -setStoragePolicy -path /d1 -policy HOT
 {code}
 c. Set space quota to disk on /d1
 {code}
   ./hdfs dfsadmin -setSpaceQuota 1 -storageType DISK /d1
 {code}
 {code}
 ./hdfs dfs -count -v -q -h -t /d1
 DISK_QUOTA  REM_DISK_QUOTA  SSD_QUOTA  REM_SSD_QUOTA  ARCHIVE_QUOTA  REM_ARCHIVE_QUOTA  PATHNAME
 9.8 K       9.8 K           none       inf            none           inf                /d1
 {code}
 d. Insert 2 files, each of 1000 B
 {code}
 ./hdfs dfs -count -v -q -h -t /d1
 DISK_QUOTA  REM_DISK_QUOTA  SSD_QUOTA  REM_SSD_QUOTA  ARCHIVE_QUOTA  REM_ARCHIVE_QUOTA  PATHNAME
 9.8 K       3.9 K           none       inf            none           inf                /d1
 {code}
 e. Set ARCHIVE quota on /d1
 {code}
 ./hdfs dfsadmin -setSpaceQuota 1 -storageType ARCHIVE /d1
 ./hdfs dfs -count -v -q -h -t /d1
 DISK_QUOTA  REM_DISK_QUOTA  SSD_QUOTA  REM_SSD_QUOTA  ARCHIVE_QUOTA  REM_ARCHIVE_QUOTA  PATHNAME
 9.8 K       3.9 K           none       inf            9.8 K          9.8 K              /d1
 {code}
 f. Change the storage policy to COLD
 {code}
 ./hdfs storagepolicies -setStoragePolicy -path /d1 -policy COLD
 {code}
 g. Check the REM_ARCHIVE_QUOTA value
 {code}
 ./hdfs dfs -count -v -q -h -t /d1
 DISK_QUOTA  REM_DISK_QUOTA  SSD_QUOTA  REM_SSD_QUOTA  ARCHIVE_QUOTA  REM_ARCHIVE_QUOTA  PATHNAME
 9.8 K       9.8 K           none       inf            9.8 K          3.9 K              /d1
 {code}
 Here, even though the 'Mover' command has not been run, REM_ARCHIVE_QUOTA is 
 reduced and REM_DISK_QUOTA is increased.
 Expected: quota values should change only after the Mover succeeds.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HDFS-8482) Merge BlockInfo from HDFS-7285 branch to trunk

2015-05-26 Thread Zhe Zhang (JIRA)
Zhe Zhang created HDFS-8482:
---

 Summary: Merge BlockInfo from HDFS-7285 branch to trunk
 Key: HDFS-8482
 URL: https://issues.apache.org/jira/browse/HDFS-8482
 Project: Hadoop HDFS
  Issue Type: New Feature
Reporter: Zhe Zhang
Assignee: Zhe Zhang


Per offline discussion with [~andrew.wang], we should probably shrink the size 
of the consolidated HDFS-7285 patch by merging some mechanical changes that are 
unrelated to EC-specific logic to trunk first. Those include renames, refactors 
for subclassing purposes, and so forth. This JIRA specifically aims to merge 
{{BlockInfo}} back into trunk.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8407) libhdfs hdfsListDirectory() API has different behavior than documentation

2015-05-26 Thread Masatake Iwasaki (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8407?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14560270#comment-14560270
 ] 

Masatake Iwasaki commented on HDFS-8407:


The test failures do not seem to be related to the patch. TestBalancer and 
TestEditLog succeed in my local environment.

 libhdfs hdfsListDirectory() API has different behavior than documentation
 -

 Key: HDFS-8407
 URL: https://issues.apache.org/jira/browse/HDFS-8407
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: HDFS
Reporter: Juan Yu
Assignee: Masatake Iwasaki
 Attachments: HDFS-8407.001.patch, HDFS-8407.002.patch, 
 HDFS-8407.003.patch


 The documentation says it returns NULL on error, but it could also return 
 NULL when the directory is empty.
 /** 
  * hdfsListDirectory - Get list of files/directories for a given
  * directory-path. hdfsFreeFileInfo should be called to deallocate 
 memory. 
  * @param fs The configured filesystem handle.
  * @param path The path of the directory. 
  * @param numEntries Set to the number of files/directories in path.
  * @return Returns a dynamically-allocated array of hdfsFileInfo
  * objects; NULL on error.
  */
 {code}
 hdfsFileInfo *pathList = NULL; 
 ...
 //Figure out the number of entries in that directory
 jPathListSize = (*env)->GetArrayLength(env, jPathList);
 if (jPathListSize == 0) {
 ret = 0;
 goto done;
 }
 ...
 if (ret) {
 hdfsFreeFileInfo(pathList, jPathListSize);
 errno = ret;
 return NULL;
 }
 *numEntries = jPathListSize;
 return pathList;
 {code}
 Either change the implementation to match the doc, or fix the doc to match 
 the implementation.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8429) The DomainSocketWatcher thread should not block other threads if it dies

2015-05-26 Thread zhouyingchao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8429?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

zhouyingchao updated HDFS-8429:
---
Attachment: HDFS-8429-003.patch

Tested cases include TestParallelShortCircuitLegacyRead, 
TestParallelShortCircuitRead, TestParallelShortCircuitReadNoChecksum, 
TestParallelShortCircuitReadUnCached, TestShortCircuitCache, 
TestShortCircuitLocalRead, TestShortCircuitShm, TemporarySocketDirectory, 
TestDomainSocket, TestDomainSocketWatcher

 The DomainSocketWatcher thread should not block other threads if it dies
 

 Key: HDFS-8429
 URL: https://issues.apache.org/jira/browse/HDFS-8429
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.6.0
Reporter: zhouyingchao
Assignee: zhouyingchao
 Attachments: HDFS-8429-001.patch, HDFS-8429-002.patch, 
 HDFS-8429-003.patch


 In our cluster, an application is hung when doing a short circuit read of 
 local hdfs block. By looking into the log, we found the DataNode's 
 DomainSocketWatcher.watcherThread has exited with following log:
 {code}
 ERROR org.apache.hadoop.net.unix.DomainSocketWatcher: 
 Thread[Thread-25,5,main] terminating on unexpected exception
 java.lang.NullPointerException
 at 
 org.apache.hadoop.net.unix.DomainSocketWatcher$2.run(DomainSocketWatcher.java:463)
 at java.lang.Thread.run(Thread.java:662)
 {code}
 Line 463 is the following code snippet:
 {code}
  try {
    for (int fd : fdSet.getAndClearReadableFds()) {
      sendCallbackAndRemove("getAndClearReadableFds", entries, fdSet, fd);
    }
 {code}
 getAndClearReadableFds is a native method which mallocs an int array. 
 Since our memory is very tight, it looks like the malloc failed and a NULL 
 pointer was returned.
 The bad thing is that other threads were then blocked with stacks like this:
 {code}
 DataXceiver for client 
 unix:/home/work/app/hdfs/c3prc-micloud/datanode/dn_socket [Waiting for 
 operation #1] daemon prio=10 tid=0x7f0c9c086d90 nid=0x8fc3 waiting on 
 condition [0x7f09b9856000]
java.lang.Thread.State: WAITING (parking)
 at sun.misc.Unsafe.park(Native Method)
 - parking to wait for  0x0007b0174808 (a 
 java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject)
 at java.util.concurrent.locks.LockSupport.park(LockSupport.java:156)
 at 
 java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:1987)
 at 
 org.apache.hadoop.net.unix.DomainSocketWatcher.add(DomainSocketWatcher.java:323)
 at 
 org.apache.hadoop.hdfs.server.datanode.ShortCircuitRegistry.createNewMemorySegment(ShortCircuitRegistry.java:322)
 at 
 org.apache.hadoop.hdfs.server.datanode.DataXceiver.requestShortCircuitShm(DataXceiver.java:403)
 at 
 org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opRequestShortCircuitShm(Receiver.java:214)
 at 
 org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:95)
 at 
 org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:235)
 at java.lang.Thread.run(Thread.java:662)
 {code}
 IMO, we should exit the DN so that users can know that something went wrong 
 and fix it.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7609) startup used too much time to load edits

2015-05-26 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7609?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14560362#comment-14560362
 ] 

Hadoop QA commented on HDFS-7609:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | pre-patch |  15m  1s | Pre-patch trunk compilation is 
healthy. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:green}+1{color} | tests included |   0m  0s | The patch appears to 
include 2 new or modified test files. |
| {color:green}+1{color} | javac |   7m 40s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |   9m 46s | There were no new javadoc 
warning messages. |
| {color:green}+1{color} | release audit |   0m 23s | The applied patch does 
not increase the total number of release audit warnings. |
| {color:green}+1{color} | checkstyle |   2m 41s | There were no new checkstyle 
issues. |
| {color:green}+1{color} | whitespace |   0m  1s | The patch has no lines that 
end in whitespace. |
| {color:green}+1{color} | install |   1m 35s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 32s | The patch built with 
eclipse:eclipse. |
| {color:green}+1{color} | findbugs |   4m 44s | The patch does not introduce 
any new Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | common tests |  23m 29s | Tests passed in 
hadoop-common. |
| {color:red}-1{color} | hdfs tests | 164m 12s | Tests failed in hadoop-hdfs. |
| | | 230m  8s | |
\\
\\
|| Reason || Tests ||
| Failed unit tests | hadoop.hdfs.server.namenode.TestStartup |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12735477/HDFS-7609-2.patch |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / cdbd66b |
| hadoop-common test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/11135/artifact/patchprocess/testrun_hadoop-common.txt
 |
| hadoop-hdfs test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/11135/artifact/patchprocess/testrun_hadoop-hdfs.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/11135/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf903.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/11135/console |


This message was automatically generated.

 startup used too much time to load edits
 

 Key: HDFS-7609
 URL: https://issues.apache.org/jira/browse/HDFS-7609
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: namenode
Affects Versions: 2.2.0
Reporter: Carrey Zhan
Assignee: Ming Ma
  Labels: BB2015-05-RFC
 Attachments: HDFS-7609-2.patch, 
 HDFS-7609-CreateEditsLogWithRPCIDs.patch, HDFS-7609.patch, 
 recovery_do_not_use_retrycache.patch


 One day my namenode crashed because two journal nodes timed out at the same 
 time under very high load, leaving behind about 100 million transactions in 
 the edit log. (I still have no idea why they were not rolled into the fsimage.)
 I tried to restart the namenode, but it showed that almost 20 hours would be 
 needed to finish, and it was loading fsedits most of the time. I also tried to 
 restart the namenode in recovery mode; the loading speed was no different.
 I looked into the stack trace and judged that it was caused by the retry 
 cache. So I set dfs.namenode.enable.retrycache to false, and the restart 
 process finished in half an hour.
 I think the retry cache is useless during startup, at least during the 
 recovery process.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8409) HDFS client RPC call throws java.lang.IllegalStateException

2015-05-26 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8409?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14560345#comment-14560345
 ] 

Hadoop QA commented on HDFS-8409:
-

\\
\\
| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | pre-patch |  14m 49s | Pre-patch trunk compilation is 
healthy. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:green}+1{color} | tests included |   0m  0s | The patch appears to 
include 1 new or modified test files. |
| {color:green}+1{color} | javac |   7m 28s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |   9m 39s | There were no new javadoc 
warning messages. |
| {color:green}+1{color} | release audit |   0m 23s | The applied patch does 
not increase the total number of release audit warnings. |
| {color:green}+1{color} | checkstyle |   3m 22s | There were no new checkstyle 
issues. |
| {color:green}+1{color} | whitespace |   0m  0s | The patch has no lines that 
end in whitespace. |
| {color:green}+1{color} | install |   1m 35s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 32s | The patch built with 
eclipse:eclipse. |
| {color:green}+1{color} | findbugs |   4m 45s | The patch does not introduce 
any new Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | common tests |  23m  4s | Tests passed in 
hadoop-common. |
| {color:green}+1{color} | hdfs tests | 162m 40s | Tests passed in hadoop-hdfs. 
|
| | | 228m 21s | |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12735456/HDFS-8409.003.patch |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / cdbd66b |
| hadoop-common test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/11134/artifact/patchprocess/testrun_hadoop-common.txt
 |
| hadoop-hdfs test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/11134/artifact/patchprocess/testrun_hadoop-hdfs.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/11134/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf902.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/11134/console |


This message was automatically generated.

 HDFS client RPC call throws java.lang.IllegalStateException
 -

 Key: HDFS-8409
 URL: https://issues.apache.org/jira/browse/HDFS-8409
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: hdfs-client
Reporter: Juan Yu
Assignee: Juan Yu
 Attachments: HDFS-8409.001.patch, HDFS-8409.002.patch, 
 HDFS-8409.003.patch


 When an HDFS client RPC call needs to retry, it sometimes throws 
 java.lang.IllegalStateException; the retry is aborted and the client call 
 fails.
 {code}
 Caused by: java.lang.IllegalStateException
   at 
 com.google.common.base.Preconditions.checkState(Preconditions.java:129)
   at org.apache.hadoop.ipc.Client.setCallIdAndRetryCount(Client.java:116)
   at 
 org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:99)
   at com.sun.proxy.$Proxy16.getFileInfo(Unknown Source)
   at org.apache.hadoop.hdfs.DFSClient.getFileInfo(DFSClient.java:1912)
   at 
 org.apache.hadoop.hdfs.DistributedFileSystem$17.doCall(DistributedFileSystem.java:1089)
   at 
 org.apache.hadoop.hdfs.DistributedFileSystem$17.doCall(DistributedFileSystem.java:1085)
   at 
 org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
   at 
 org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1085)
   at org.apache.hadoop.fs.FileSystem.exists(FileSystem.java:1400)
 {code}
 Here is the check that throws the exception:
 {code}
   public static void setCallIdAndRetryCount(int cid, int rc) {
   ...
   Preconditions.checkState(callId.get() == null);
   }
 {code}
 The RetryInvocationHandler calls it with a non-null callId, which causes the 
 exception.
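 A simplified illustration of why the precondition trips (not the Hadoop 
 ipc.Client source; the ThreadLocal below is a stand-in for the real field):
 {code}
 import com.google.common.base.Preconditions;

 public class CallIdExample {
   // Stand-in for Client's thread-local call id.
   private static final ThreadLocal<Integer> callId = new ThreadLocal<>();

   static void setCallIdAndRetryCount(int cid, int rc) {
     // Fails if a previous call id is still set on this thread, which is
     // the IllegalStateException reported above.
     Preconditions.checkState(callId.get() == null);
     callId.set(cid);
   }

   public static void main(String[] args) {
     setCallIdAndRetryCount(1, 0); // first attempt: passes
     setCallIdAndRetryCount(2, 0); // retry without clearing: throws
   }
 }
 {code}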



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8475) Exception in createBlockOutputStream java.io.EOFException: Premature EOF: no length prefix available

2015-05-26 Thread nijel (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8475?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14560374#comment-14560374
 ] 

nijel commented on HDFS-8475:
-

Hi [~Vinod08],
It looks like only one valid datanode was assigned for this block and it 
failed, so the write will fail.
bq. There are 1 datanode(s) running and 1 node(s) are excluded in this 
operation.

What are you suspecting as the issue here?

 Exception in createBlockOutputStream java.io.EOFException: Premature EOF: no 
 length prefix available
 

 Key: HDFS-8475
 URL: https://issues.apache.org/jira/browse/HDFS-8475
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.2.0
Reporter: Vinod Valecha
Priority: Blocker

 Scenario:
 =
 write a file
 corrupt block manually
 Exception stack trace- 
 2015-05-24 02:31:55.291 INFO [T-33716795] 
 [org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer] Exception in 
 createBlockOutputStream
 java.io.EOFException: Premature EOF: no length prefix available
 at 
 org.apache.hadoop.hdfs.protocolPB.PBHelper.vintPrefixed(PBHelper.java:1492)
 at 
 org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.createBlockOutputStream(DFSOutputStream.java:1155)
 at 
 org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:1088)
 at 
 org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:514)
 [5/24/15 2:31:55:291 UTC] 02027a3b DFSClient I 
 org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer createBlockOutputStream 
 Exception in createBlockOutputStream
  java.io.EOFException: Premature EOF: no 
 length prefix available
 at 
 org.apache.hadoop.hdfs.protocolPB.PBHelper.vintPrefixed(PBHelper.java:1492)
 at 
 org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.createBlockOutputStream(DFSOutputStream.java:1155)
 at 
 org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:1088)
 at 
 org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:514)
 2015-05-24 02:31:55.291 INFO [T-33716795] 
 [org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer] Abandoning 
 BP-176676314-10.108.106.59-1402620296713:blk_1404621403_330880579
 [5/24/15 2:31:55:291 UTC] 02027a3b DFSClient I 
 org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer nextBlockOutputStream 
 Abandoning BP-176676314-10.108.106.59-1402620296713:blk_1404621403_330880579
 2015-05-24 02:31:55.299 INFO [T-33716795] 
 [org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer] Excluding datanode 
 10.108.106.59:50010
 [5/24/15 2:31:55:299 UTC] 02027a3b DFSClient I 
 org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer nextBlockOutputStream 
 Excluding datanode 10.108.106.59:50010
 2015-05-24 02:31:55.300 WARNING [T-33716795] 
 [org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer] DataStreamer Exception
 org.apache.hadoop.ipc.RemoteException(java.io.IOException): File 
 /var/db/opera/files/B4889CCDA75F9751DDBB488E5AAB433E/BE4DAEF290B7136ED6EF3D4B157441A2/BE4DAEF290B7136ED6EF3D4B157441A2-4.pag
  could only be replicated to 0 nodes instead of minReplication (=1).  There 
 are 1 datanode(s) running and 1 node(s) are excluded in this operation.
 at 
 org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget(BlockManager.java:1384)
 at 
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:2477)
 at 
 org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:555)
 [5/24/15 2:31:55:300 UTC] 02027a3b DFSClient W 
 org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer run DataStreamer Exception
  
 org.apache.hadoop.ipc.RemoteException(java.io.IOException): File 
 /var/db/opera/files/B4889CCDA75F9751DDBB488E5AAB433E/BE4DAEF290B7136ED6EF3D4B157441A2/BE4DAEF290B7136ED6EF3D4B157441A2-4.pag
  could only be replicated to 0 nodes instead of minReplication (=1).  There 
 are 1 datanode(s) running and 1 node(s) are excluded in this operation.
 at 
 org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget(BlockManager.java:1384)
 at 
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:2477)
 at 
 org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:555)
 at 
 org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:387)
 at 
 org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:59582)
 
  

[jira] [Created] (HDFS-8483) Erasure coding: test DataNode reporting bad/corrupted blocks which belongs to a striped block.

2015-05-26 Thread Takanobu Asanuma (JIRA)
Takanobu Asanuma created HDFS-8483:
--

 Summary: Erasure coding: test DataNode reporting bad/corrupted 
blocks which belongs to a striped block.
 Key: HDFS-8483
 URL: https://issues.apache.org/jira/browse/HDFS-8483
 Project: Hadoop HDFS
  Issue Type: Test
  Components: namenode
Reporter: Takanobu Asanuma
Assignee: Takanobu Asanuma
 Fix For: HDFS-7285


We can mimic one/several DataNode(s) reporting bad block(s) (which belong to a 
striped block) to the NameNode (through the DatanodeProtocol#reportBadBlocks 
call), and check if the recovery/invalidation work can be correctly scheduled.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8483) Erasure coding: test DataNode reporting bad/corrupted blocks which belongs to a striped block.

2015-05-26 Thread Takanobu Asanuma (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8483?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Takanobu Asanuma updated HDFS-8483:
---
Issue Type: Sub-task  (was: Test)
Parent: HDFS-7285

 Erasure coding: test DataNode reporting bad/corrupted blocks which belongs to 
 a striped block.
 --

 Key: HDFS-8483
 URL: https://issues.apache.org/jira/browse/HDFS-8483
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: namenode
Reporter: Takanobu Asanuma
Assignee: Takanobu Asanuma
 Fix For: HDFS-7285


 We can mimic one/several DataNode(s) reporting bad block(s) (which belong to 
 a striped block) to the NameNode (through the 
 DatanodeProtocol#reportBadBlocks call), and check if the 
 recovery/invalidation work can be correctly scheduled.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8429) The DomainSocketWatcher thread should not block other threads if it dies

2015-05-26 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8429?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14560341#comment-14560341
 ] 

Hadoop QA commented on HDFS-8429:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | pre-patch |  15m  4s | Pre-patch trunk compilation is 
healthy. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:green}+1{color} | tests included |   0m  0s | The patch appears to 
include 1 new or modified test files. |
| {color:green}+1{color} | javac |   7m 42s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |   9m 43s | There were no new javadoc 
warning messages. |
| {color:green}+1{color} | release audit |   0m 23s | The applied patch does 
not increase the total number of release audit warnings. |
| {color:red}-1{color} | checkstyle |   1m  6s | The applied patch generated  1 
new checkstyle issues (total was 19, now 20). |
| {color:green}+1{color} | whitespace |   0m  0s | The patch has no lines that 
end in whitespace. |
| {color:green}+1{color} | install |   1m 36s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 32s | The patch built with 
eclipse:eclipse. |
| {color:green}+1{color} | findbugs |   1m 42s | The patch does not introduce 
any new Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | common tests |  22m 48s | Tests passed in 
hadoop-common. |
| | |  60m 41s | |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12735500/HDFS-8429-003.patch |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / cdbd66b |
| checkstyle |  
https://builds.apache.org/job/PreCommit-HDFS-Build/11136/artifact/patchprocess/diffcheckstylehadoop-common.txt
 |
| hadoop-common test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/11136/artifact/patchprocess/testrun_hadoop-common.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/11136/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf904.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/11136/console |


This message was automatically generated.

 The DomainSocketWatcher thread should not block other threads if it dies
 

 Key: HDFS-8429
 URL: https://issues.apache.org/jira/browse/HDFS-8429
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.6.0
Reporter: zhouyingchao
Assignee: zhouyingchao
 Attachments: HDFS-8429-001.patch, HDFS-8429-002.patch, 
 HDFS-8429-003.patch


 In our cluster, an application is hung when doing a short circuit read of 
 local hdfs block. By looking into the log, we found the DataNode's 
 DomainSocketWatcher.watcherThread has exited with following log:
 {code}
 ERROR org.apache.hadoop.net.unix.DomainSocketWatcher: 
 Thread[Thread-25,5,main] terminating on unexpected exception
 java.lang.NullPointerException
 at 
 org.apache.hadoop.net.unix.DomainSocketWatcher$2.run(DomainSocketWatcher.java:463)
 at java.lang.Thread.run(Thread.java:662)
 {code}
 Line 463 is the following code snippet:
 {code}
  try {
    for (int fd : fdSet.getAndClearReadableFds()) {
      sendCallbackAndRemove("getAndClearReadableFds", entries, fdSet, fd);
    }
 {code}
 getAndClearReadableFds is a native method which mallocs an int array. 
 Since our memory is very tight, it looks like the malloc failed and a NULL 
 pointer was returned.
 The bad thing is that other threads were then blocked with stacks like this:
 {code}
 DataXceiver for client 
 unix:/home/work/app/hdfs/c3prc-micloud/datanode/dn_socket [Waiting for 
 operation #1] daemon prio=10 tid=0x7f0c9c086d90 nid=0x8fc3 waiting on 
 condition [0x7f09b9856000]
java.lang.Thread.State: WAITING (parking)
 at sun.misc.Unsafe.park(Native Method)
 - parking to wait for  0x0007b0174808 (a 
 java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject)
 at java.util.concurrent.locks.LockSupport.park(LockSupport.java:156)
 at 
 java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:1987)
 at 
 org.apache.hadoop.net.unix.DomainSocketWatcher.add(DomainSocketWatcher.java:323)
 at 
 org.apache.hadoop.hdfs.server.datanode.ShortCircuitRegistry.createNewMemorySegment(ShortCircuitRegistry.java:322)
 at 
 org.apache.hadoop.hdfs.server.datanode.DataXceiver.requestShortCircuitShm(DataXceiver.java:403)
 at 
 

[jira] [Commented] (HDFS-8475) Exception in createBlockOutputStream java.io.EOFException: Premature EOF: no length prefix available

2015-05-26 Thread Vinod Valecha (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8475?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14560451#comment-14560451
 ] 

Vinod Valecha commented on HDFS-8475:
-

Hi nijel,

Thanks for replying. Can you please tell me how many valid datanodes should be 
assigned for it to succeed?
There was a similar issue with PipelineForAppendOrRecovery; it was fixed in 
https://issues.apache.org/jira/browse/HDFS-3384

Thanks.

 Exception in createBlockOutputStream java.io.EOFException: Premature EOF: no 
 length prefix available
 

 Key: HDFS-8475
 URL: https://issues.apache.org/jira/browse/HDFS-8475
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.2.0
Reporter: Vinod Valecha
Priority: Blocker

 Scenario:
 =
 write a file
 corrupt block manually
 Exception stack trace- 
 2015-05-24 02:31:55.291 INFO [T-33716795] 
 [org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer] Exception in 
 createBlockOutputStream
 java.io.EOFException: Premature EOF: no length prefix available
 at 
 org.apache.hadoop.hdfs.protocolPB.PBHelper.vintPrefixed(PBHelper.java:1492)
 at 
 org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.createBlockOutputStream(DFSOutputStream.java:1155)
 at 
 org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:1088)
 at 
 org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:514)
 [5/24/15 2:31:55:291 UTC] 02027a3b DFSClient I 
 org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer createBlockOutputStream 
 Exception in createBlockOutputStream
  java.io.EOFException: Premature EOF: no 
 length prefix available
 at 
 org.apache.hadoop.hdfs.protocolPB.PBHelper.vintPrefixed(PBHelper.java:1492)
 at 
 org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.createBlockOutputStream(DFSOutputStream.java:1155)
 at 
 org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:1088)
 at 
 org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:514)
 2015-05-24 02:31:55.291 INFO [T-33716795] 
 [org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer] Abandoning 
 BP-176676314-10.108.106.59-1402620296713:blk_1404621403_330880579
 [5/24/15 2:31:55:291 UTC] 02027a3b DFSClient I 
 org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer nextBlockOutputStream 
 Abandoning BP-176676314-10.108.106.59-1402620296713:blk_1404621403_330880579
 2015-05-24 02:31:55.299 INFO [T-33716795] 
 [org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer] Excluding datanode 
 10.108.106.59:50010
 [5/24/15 2:31:55:299 UTC] 02027a3b DFSClient I 
 org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer nextBlockOutputStream 
 Excluding datanode 10.108.106.59:50010
 2015-05-24 02:31:55.300 WARNING [T-33716795] 
 [org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer] DataStreamer Exception
 org.apache.hadoop.ipc.RemoteException(java.io.IOException): File 
 /var/db/opera/files/B4889CCDA75F9751DDBB488E5AAB433E/BE4DAEF290B7136ED6EF3D4B157441A2/BE4DAEF290B7136ED6EF3D4B157441A2-4.pag
  could only be replicated to 0 nodes instead of minReplication (=1).  There 
 are 1 datanode(s) running and 1 node(s) are excluded in this operation.
 at 
 org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget(BlockManager.java:1384)
 at 
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:2477)
 at 
 org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:555)
 [5/24/15 2:31:55:300 UTC] 02027a3b DFSClient W 
 org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer run DataStreamer Exception
  
 org.apache.hadoop.ipc.RemoteException(java.io.IOException): File 
 /var/db/opera/files/B4889CCDA75F9751DDBB488E5AAB433E/BE4DAEF290B7136ED6EF3D4B157441A2/BE4DAEF290B7136ED6EF3D4B157441A2-4.pag
  could only be replicated to 0 nodes instead of minReplication (=1).  There 
 are 1 datanode(s) running and 1 node(s) are excluded in this operation.
 at 
 org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget(BlockManager.java:1384)
 at 
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:2477)
 at 
 org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:555)
 at 
 org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:387)
 at 
 

[jira] [Commented] (HDFS-8465) Mover is success even when space exceeds storage quota.

2015-05-26 Thread surendra singh lilhore (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8465?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14558739#comment-14558739
 ] 

surendra singh lilhore commented on HDFS-8465:
--

Thanks [~xyao] and [~szetszwo] for the comments.

bq. Issue 1 - This is an interesting issue. Mover should not move blocks when 
storage types quota is exceeded.

So what is the conclusion? Do we need to fix Issue 1?

 Mover is success even when space exceeds storage quota.
 ---

 Key: HDFS-8465
 URL: https://issues.apache.org/jira/browse/HDFS-8465
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: balancer & mover, namenode
Affects Versions: 2.7.0
Reporter: Archana T
Assignee: surendra singh lilhore

 *Steps :*
 1. Create directory /dir 
 2. Set its storage policy to HOT --
 hdfs storagepolicies -setStoragePolicy -path /dir -policy HOT
 3. Insert files of total size 10,000B  into /dir.
 4. Set above path /dir ARCHIVE type quota to 5,000B --
 hdfs dfsadmin -setSpaceQuota 5000 -storageType ARCHIVE /dir
 {code}
 hdfs dfs -count -v -q -h -t /dir
    DISK_QUOTA  REM_DISK_QUOTA  SSD_QUOTA  REM_SSD_QUOTA  ARCHIVE_QUOTA  REM_ARCHIVE_QUOTA  PATHNAME
          none             inf       none            inf          4.9 K              4.9 K  /dir
 {code}
 5. Now change policy of '/dir' to COLD
 6. Execute Mover command
 *Observations:*
 1. Mover is successful moving all 10,000B to ARCHIVE datapath.
 2. Count command displays negative value '-59.4K'--
 {code}
 hdfs dfs -count -v -q -h -t /dir
    DISK_QUOTA  REM_DISK_QUOTA  SSD_QUOTA  REM_SSD_QUOTA  ARCHIVE_QUOTA  REM_ARCHIVE_QUOTA  PATHNAME
          none             inf       none            inf          4.9 K            -59.4 K  /dir
 {code}
 *Expected:*
 The Mover should not succeed, as the ARCHIVE quota is only 5,000B.
 A negative value should not be displayed in the quota output.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8399) Erasure Coding: BlockManager is unnecessarily computing recovery work for the deleted blocks

2015-05-26 Thread Yi Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8399?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14558732#comment-14558732
 ] 

Yi Liu commented on HDFS-8399:
--

Hi [~rakeshr], I just noticed your comments, sorry for the late response. I will 
take a look at it, thanks.

 Erasure Coding: BlockManager is unnecessarily computing recovery work for the 
 deleted blocks
 

 Key: HDFS-8399
 URL: https://issues.apache.org/jira/browse/HDFS-8399
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Rakesh R
Assignee: Rakesh R
 Attachments: HDFS-8399-HDFS-7285-00.patch


 The following exception occurred in the {{ReplicationMonitor}}. As per the 
 initial analysis, the exception is thrown for the blocks of a deleted file.
 {code}
 2015-05-14 14:14:40,485 FATAL util.ExitUtil (ExitUtil.java:terminate(127)) - 
 Terminate called
 org.apache.hadoop.util.ExitUtil$ExitException: java.lang.AssertionError: 
 Absolute path required
   at 
 org.apache.hadoop.hdfs.server.namenode.INode.getPathNames(INode.java:744)
   at 
 org.apache.hadoop.hdfs.server.namenode.INode.getPathComponents(INode.java:723)
   at 
 org.apache.hadoop.hdfs.server.namenode.FSDirectory.getINodesInPath(FSDirectory.java:1655)
   at 
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getECSchemaForPath(FSNamesystem.java:8435)
   at 
 org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.computeRecoveryWorkForBlocks(BlockManager.java:1572)
   at 
 org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.computeBlockRecoveryWork(BlockManager.java:1402)
   at 
 org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.computeDatanodeWork(BlockManager.java:3894)
   at 
 org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor.run(BlockManager.java:3846)
   at java.lang.Thread.run(Thread.java:722)
   at org.apache.hadoop.util.ExitUtil.terminate(ExitUtil.java:126)
   at org.apache.hadoop.util.ExitUtil.terminate(ExitUtil.java:170)
   at 
 org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor.run(BlockManager.java:3865)
   at java.lang.Thread.run(Thread.java:722)
 Exception in thread 
 org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor@1255079
  org.apache.hadoop.util.ExitUtil$ExitException: java.lang.AssertionError: 
 Absolute path required
   at 
 org.apache.hadoop.hdfs.server.namenode.INode.getPathNames(INode.java:744)
   at 
 org.apache.hadoop.hdfs.server.namenode.INode.getPathComponents(INode.java:723)
   at 
 org.apache.hadoop.hdfs.server.namenode.FSDirectory.getINodesInPath(FSDirectory.java:1655)
   at 
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getECSchemaForPath(FSNamesystem.java:8435)
   at 
 org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.computeRecoveryWorkForBlocks(BlockManager.java:1572)
   at 
 org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.computeBlockRecoveryWork(BlockManager.java:1402)
   at 
 org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.computeDatanodeWork(BlockManager.java:3894)
   at 
 org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor.run(BlockManager.java:3846)
   at java.lang.Thread.run(Thread.java:722)
   at org.apache.hadoop.util.ExitUtil.terminate(ExitUtil.java:126)
   at org.apache.hadoop.util.ExitUtil.terminate(ExitUtil.java:170)
   at 
 org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor.run(BlockManager.java:3865)
   at java.lang.Thread.run(Thread.java:722)
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8453) Erasure coding: properly assign start offset for internal blocks in a block group

2015-05-26 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8453?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14558786#comment-14558786
 ] 

Hadoop QA commented on HDFS-8453:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | pre-patch |  14m 55s | Pre-patch HDFS-7285 compilation 
is healthy. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:green}+1{color} | tests included |   0m  0s | The patch appears to 
include 1 new or modified test files. |
| {color:green}+1{color} | javac |   7m 32s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |   9m 39s | There were no new javadoc 
warning messages. |
| {color:red}-1{color} | release audit |   0m 15s | The applied patch generated 
1 release audit warnings. |
| {color:green}+1{color} | checkstyle |   1m  4s | There were no new checkstyle 
issues. |
| {color:green}+1{color} | whitespace |   0m  0s | The patch has no lines that 
end in whitespace. |
| {color:green}+1{color} | install |   1m 39s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 33s | The patch built with 
eclipse:eclipse. |
| {color:red}-1{color} | findbugs |   3m 57s | The patch appears to introduce 2 
new Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | native |   3m 15s | Pre-build of native portion |
| {color:red}-1{color} | hdfs tests | 169m 45s | Tests failed in hadoop-hdfs. |
| {color:green}+1{color} | hdfs tests |   0m 17s | Tests passed in 
hadoop-hdfs-client. |
| | | 212m 56s | |
\\
\\
|| Reason || Tests ||
| FindBugs | module:hadoop-hdfs-client |
|  |  org.apache.hadoop.hdfs.protocol.LocatedStripedBlock.getBlockIndices() may 
expose internal representation by returning LocatedStripedBlock.blockIndices  
At LocatedStripedBlock.java:by returning LocatedStripedBlock.blockIndices  At 
LocatedStripedBlock.java:[line 63] |
| FindBugs | module:hadoop-hdfs |
|  |  Inconsistent synchronization of 
org.apache.hadoop.hdfs.DFSOutputStream.streamer; locked 88% of time  
Unsynchronized access at DFSOutputStream.java:88% of time  Unsynchronized 
access at DFSOutputStream.java:[line 146] |
| Failed unit tests | hadoop.hdfs.TestEncryptedTransfer |
|   | hadoop.hdfs.server.namenode.TestAuditLogs |
|   | hadoop.hdfs.server.blockmanagement.TestBlockInfo |
|   | hadoop.hdfs.server.namenode.TestFileTruncate |
|   | hadoop.hdfs.server.blockmanagement.TestBlockTokenWithDFS |
|   | hadoop.hdfs.util.TestStripedBlockUtil |
|   | hadoop.hdfs.server.blockmanagement.TestReplicationPolicy |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12735246/HDFS-8453-HDFS-7285.00.patch
 |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | HDFS-7285 / f56e192 |
| Release Audit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/11126/artifact/patchprocess/patchReleaseAuditProblems.txt
 |
| Findbugs warnings | 
https://builds.apache.org/job/PreCommit-HDFS-Build/11126/artifact/patchprocess/newPatchFindbugsWarningshadoop-hdfs-client.html
 |
| Findbugs warnings | 
https://builds.apache.org/job/PreCommit-HDFS-Build/11126/artifact/patchprocess/newPatchFindbugsWarningshadoop-hdfs.html
 |
| hadoop-hdfs test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/11126/artifact/patchprocess/testrun_hadoop-hdfs.txt
 |
| hadoop-hdfs-client test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/11126/artifact/patchprocess/testrun_hadoop-hdfs-client.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/11126/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf903.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/11126/console |


This message was automatically generated.

 Erasure coding: properly assign start offset for internal blocks in a block 
 group
 -

 Key: HDFS-8453
 URL: https://issues.apache.org/jira/browse/HDFS-8453
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Zhe Zhang
Assignee: Zhe Zhang
 Attachments: HDFS-8453-HDFS-7285.00.patch






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (HDFS-4977) Change Checkpoint Size of web ui of SecondaryNameNode

2015-05-26 Thread kanaka kumar avvaru (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4977?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

kanaka kumar avvaru reassigned HDFS-4977:
-

Assignee: kanaka kumar avvaru

 Change Checkpoint Size of web ui of SecondaryNameNode
 ---

 Key: HDFS-4977
 URL: https://issues.apache.org/jira/browse/HDFS-4977
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Affects Versions: 3.0.0, 2.0.4-alpha
Reporter: Shinichi Yamashita
Assignee: kanaka kumar avvaru
Priority: Minor
  Labels: BB2015-05-TBR, newbie
 Attachments: HDFS-4977-2.patch, HDFS-4977.patch, HDFS-4977.patch


 The checkpoint of SecondaryNameNode after 2.0 is carried out by 
 dfs.namenode.checkpoint.period and dfs.namenode.checkpoint.txns.
 Because Checkpoint Size is still displayed in status.jsp of SecondaryNameNode, 
 it should be modified accordingly.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-5931) Potential bugs and improvements for exception handlers

2015-05-26 Thread kanaka kumar avvaru (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5931?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

kanaka kumar avvaru updated HDFS-5931:
--
Status: Open  (was: Patch Available)

Will update the patch for the QA-reported errors. [~d.yuan], please feel free to 
assign this JIRA to yourself if you want to work on it.

 Potential bugs and improvements for exception handlers
 --

 Key: HDFS-5931
 URL: https://issues.apache.org/jira/browse/HDFS-5931
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: datanode, namenode
Affects Versions: 2.2.0
Reporter: Ding Yuan
Assignee: kanaka kumar avvaru
  Labels: BB2015-05-TBR
 Attachments: hdfs-5931-v2.patch, hdfs-5931-v3.patch, hdfs-5931.patch


 This is to report some improvements and potential bug fixes to some error 
 handling code. Also attaching a patch for review.
 Details in the first comment.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-4977) Change Checkpoint Size of web ui of SecondaryNameNode

2015-05-26 Thread kanaka kumar avvaru (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4977?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14558877#comment-14558877
 ] 

kanaka kumar avvaru commented on HDFS-4977:
---

[~sinchii], I am planning to handle this JIRA based on its current applicability. 
Please feel free to assign it to yourself if you want to work on it.

 Change Checkpoint Size of web ui of SecondaryNameNode
 ---

 Key: HDFS-4977
 URL: https://issues.apache.org/jira/browse/HDFS-4977
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Affects Versions: 3.0.0, 2.0.4-alpha
Reporter: Shinichi Yamashita
Assignee: kanaka kumar avvaru
Priority: Minor
  Labels: BB2015-05-TBR, newbie
 Attachments: HDFS-4977-2.patch, HDFS-4977.patch, HDFS-4977.patch


 The checkpoint of SecondaryNameNode after 2.0 is carried out by 
 dfs.namenode.checkpoint.period and dfs.namenode.checkpoint.txns.
 Because Checkpoint Size is still displayed in status.jsp of SecondaryNameNode, 
 it should be modified accordingly.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (HDFS-5931) Potential bugs and improvements for exception handlers

2015-05-26 Thread kanaka kumar avvaru (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5931?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

kanaka kumar avvaru reassigned HDFS-5931:
-

Assignee: kanaka kumar avvaru

 Potential bugs and improvements for exception handlers
 --

 Key: HDFS-5931
 URL: https://issues.apache.org/jira/browse/HDFS-5931
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: datanode, namenode
Affects Versions: 2.2.0
Reporter: Ding Yuan
Assignee: kanaka kumar avvaru
  Labels: BB2015-05-TBR
 Attachments: hdfs-5931-v2.patch, hdfs-5931-v3.patch, hdfs-5931.patch


 This is to report some improvements and potential bug fixes to some error 
 handling code. Also attaching a patch for review.
 Details in the first comment.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8475) Exception in createBlockOutputStream java.io.EOFException: Premature EOF: no length prefix available

2015-05-26 Thread Vinod Valecha (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8475?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14558897#comment-14558897
 ] 

Vinod Valecha commented on HDFS-8475:
-

Hi Team,

Any idea when this will be assigned to a developer for a fix?
Thanks.

 Exception in createBlockOutputStream java.io.EOFException: Premature EOF: no 
 length prefix available
 

 Key: HDFS-8475
 URL: https://issues.apache.org/jira/browse/HDFS-8475
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.2.0
Reporter: Vinod Valecha
Priority: Blocker

 Scenario:
 =
 write a file
 corrupt block manually
 Exception stack trace- 
 2015-05-24 02:31:55.291 INFO [T-33716795] 
 [org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer] Exception in 
 createBlockOutputStream
 java.io.EOFException: Premature EOF: no length prefix available
 at 
 org.apache.hadoop.hdfs.protocolPB.PBHelper.vintPrefixed(PBHelper.java:1492)
 at 
 org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.createBlockOutputStream(DFSOutputStream.java:1155)
 at 
 org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:1088)
 at 
 org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:514)
 [5/24/15 2:31:55:291 UTC] 02027a3b DFSClient I 
 org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer createBlockOutputStream 
 Exception in createBlockOutputStream
  java.io.EOFException: Premature EOF: no 
 length prefix available
 at 
 org.apache.hadoop.hdfs.protocolPB.PBHelper.vintPrefixed(PBHelper.java:1492)
 at 
 org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.createBlockOutputStream(DFSOutputStream.java:1155)
 at 
 org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:1088)
 at 
 org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:514)
 2015-05-24 02:31:55.291 INFO [T-33716795] 
 [org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer] Abandoning 
 BP-176676314-10.108.106.59-1402620296713:blk_1404621403_330880579
 [5/24/15 2:31:55:291 UTC] 02027a3b DFSClient I 
 org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer nextBlockOutputStream 
 Abandoning BP-176676314-10.108.106.59-1402620296713:blk_1404621403_330880579
 2015-05-24 02:31:55.299 INFO [T-33716795] 
 [org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer] Excluding datanode 
 10.108.106.59:50010
 [5/24/15 2:31:55:299 UTC] 02027a3b DFSClient I 
 org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer nextBlockOutputStream 
 Excluding datanode 10.108.106.59:50010
 2015-05-24 02:31:55.300 WARNING [T-33716795] 
 [org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer] DataStreamer Exception
 org.apache.hadoop.ipc.RemoteException(java.io.IOException): File 
 /var/db/opera/files/B4889CCDA75F9751DDBB488E5AAB433E/BE4DAEF290B7136ED6EF3D4B157441A2/BE4DAEF290B7136ED6EF3D4B157441A2-4.pag
  could only be replicated to 0 nodes instead of minReplication (=1).  There 
 are 1 datanode(s) running and 1 node(s) are excluded in this operation.
 at 
 org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget(BlockManager.java:1384)
 at 
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:2477)
 at 
 org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:555)
 [5/24/15 2:31:55:300 UTC] 02027a3b DFSClient W 
 org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer run DataStreamer Exception
  
 org.apache.hadoop.ipc.RemoteException(java.io.IOException): File 
 /var/db/opera/files/B4889CCDA75F9751DDBB488E5AAB433E/BE4DAEF290B7136ED6EF3D4B157441A2/BE4DAEF290B7136ED6EF3D4B157441A2-4.pag
  could only be replicated to 0 nodes instead of minReplication (=1).  There 
 are 1 datanode(s) running and 1 node(s) are excluded in this operation.
 at 
 org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget(BlockManager.java:1384)
 at 
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:2477)
 at 
 org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:555)
 at 
 org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:387)
 at 
 org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:59582)
 
   
 2015-05-24 02:31:55.301 WARNING [T-880] [E-AA380B730CF751508DC9163BAC8E4D1D] 
 [job:B94FEC9411E2C8563C842833D78142CF] 
 

[jira] [Commented] (HDFS-5033) Bad error message for fs -put/copyFromLocal if user doesn't have permissions to read the source

2015-05-26 Thread Akira AJISAKA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5033?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14558854#comment-14558854
 ] 

Akira AJISAKA commented on HDFS-5033:
-

Thanks [~d4rr3ll] for taking this issue.
First comment: would you remove whitespace-only changes such as the following from the patch?
{code}
--- 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileUtil.java
+++ 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileUtil.java
@@ -65,7 +65,7 @@
 
   /**
* convert an array of FileStatus to an array of Path
-   * 
+   *
* @param stats
*  an array of FileStatus objects
* @return an array of paths corresponding to the input
{code}

 Bad error message for fs -put/copyFromLocal if user doesn't have permissions 
 to read the source
 ---

 Key: HDFS-5033
 URL: https://issues.apache.org/jira/browse/HDFS-5033
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 3.0.0, 2.0.3-alpha
Reporter: Karthik Kambatla
Assignee: Darrell Taylor
Priority: Minor
  Labels: newbie
 Attachments: HDFS-5033.001.patch


 fs -put/copyFromLocal shows a No such file or directory error when the user 
 doesn't have permissions to read the source file/directory. Saying 
 Permission Denied is more useful to the user.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (HDFS-8476) quota can't limit the file which put before setting the storage policy

2015-05-26 Thread kanaka kumar avvaru (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8476?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

kanaka kumar avvaru reassigned HDFS-8476:
-

Assignee: kanaka kumar avvaru

 quota can't limit the file which put before setting the storage policy
 --

 Key: HDFS-8476
 URL: https://issues.apache.org/jira/browse/HDFS-8476
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: HDFS
Affects Versions: 2.7.0
Reporter: tongshiquan
Assignee: kanaka kumar avvaru
Priority: Minor

 test steps:
 1. hdfs dfs -mkdir /HOT
 2. hdfs dfs -put 1G.txt /HOT/file1
 3. hdfs dfsadmin -setSpaceQuota 6442450944 -storageType DISK /HOT
 4. hdfs storagepolicies -setStoragePolicy -path /HOT -policy HOT
 5. hdfs dfs -put 1G.txt /HOT/file2
 6. hdfs dfs -put 1G.txt /HOT/file3
 7. hdfs dfs -count -q -h -v -t DISK /HOT
 In step 6 the file put should fail, because /HOT/file1 and /HOT/file2 have 
 already reached the directory /HOT space quota of 6GB (1G*3 replicas + 1G*3 
 replicas), but it succeeds, and in step 7 the count shows a remaining quota of -3GB.
 FYI, if the order of step 3 and step 4 is swapped, everything works as expected.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HDFS-8476) quota can't limit the file which put before setting the storage policy

2015-05-26 Thread tongshiquan (JIRA)
tongshiquan created HDFS-8476:
-

 Summary: quota can't limit the file which put before setting the 
storage policy
 Key: HDFS-8476
 URL: https://issues.apache.org/jira/browse/HDFS-8476
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: HDFS
Affects Versions: 2.7.0
Reporter: tongshiquan
Priority: Minor


1. hdfs dfs -mkdir /HOT
2. hdfs dfs -put 1G.txt /HOT/file1
3. hdfs dfsadmin -setSpaceQuota 6442450944 -storageType DISK /HOT
4. hdfs storagepolicies -setStoragePolicy -path /HOT -policy HOT
5. hdfs dfs -put 1G.txt /HOT/file2
6. hdfs dfs -put 1G.txt /HOT/file3
7. hdfs dfs -count -q -h -v -t DISK /HOT




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8476) quota can't limit the file which put before setting the storage policy

2015-05-26 Thread tongshiquan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8476?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

tongshiquan updated HDFS-8476:
--
Description: 
test steps:
1. hdfs dfs -mkdir /HOT
2. hdfs dfs -put 1G.txt /HOT/file1
3. hdfs dfsadmin -setSpaceQuota 6442450944 -storageType DISK /HOT
4. hdfs storagepolicies -setStoragePolicy -path /HOT -policy HOT
5. hdfs dfs -put 1G.txt /HOT/file2
6. hdfs dfs -put 1G.txt /HOT/file3
7. hdfs dfs -count -q -h -v -t DISK /HOT

In step 6 the file put should fail, because /HOT/file1 and /HOT/file2 have 
already reached the directory /HOT space quota of 6GB (1G*3 replicas + 1G*3 
replicas), but it succeeds, and in step 7 the count shows a remaining quota of -3GB.

FYI, if the order of step 3 and step 4 is swapped, everything works as expected.


  was:
1. hdfs dfs -mkdir /HOT
2. hdfs dfs -put 1G.txt /HOT/file1
3. hdfs dfsadmin -setSpaceQuota 6442450944 -storageType DISK /HOT
4. hdfs storagepolicies -setStoragePolicy -path /HOT -policy HOT
5. hdfs dfs -put 1G.txt /HOT/file2
6. hdfs dfs -put 1G.txt /HOT/file3
7. hdfs dfs -count -q -h -v -t DISK /HOT



 quota can't limit the file which put before setting the storage policy
 --

 Key: HDFS-8476
 URL: https://issues.apache.org/jira/browse/HDFS-8476
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: HDFS
Affects Versions: 2.7.0
Reporter: tongshiquan
Priority: Minor

 test steps:
 1. hdfs dfs -mkdir /HOT
 2. hdfs dfs -put 1G.txt /HOT/file1
 3. hdfs dfsadmin -setSpaceQuota 6442450944 -storageType DISK /HOT
 4. hdfs storagepolicies -setStoragePolicy -path /HOT -policy HOT
 5. hdfs dfs -put 1G.txt /HOT/file2
 6. hdfs dfs -put 1G.txt /HOT/file3
 7. hdfs dfs -count -q -h -v -t DISK /HOT
 In step 6 the file put should fail, because /HOT/file1 and /HOT/file2 have 
 already reached the directory /HOT space quota of 6GB (1G*3 replicas + 1G*3 
 replicas), but it succeeds, and in step 7 the count shows a remaining quota of -3GB.
 FYI, if the order of step 3 and step 4 is swapped, everything works as expected.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8443) Document dfs.namenode.service.handler.count in hdfs-site.xml

2015-05-26 Thread J.Andreina (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8443?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

J.Andreina updated HDFS-8443:
-
Attachment: HDFS-8443.3.patch

Thanks [~ajisakaa] for reviewing. 
Updated the patch as per the comments.
Please review.

 Document dfs.namenode.service.handler.count in hdfs-site.xml
 

 Key: HDFS-8443
 URL: https://issues.apache.org/jira/browse/HDFS-8443
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: documentation
Reporter: Akira AJISAKA
Assignee: J.Andreina
 Attachments: HDFS-8443.1.patch, HDFS-8443.2.patch, HDFS-8443.3.patch


 When dfs.namenode.servicerpc-address is configured, NameNode launches an 
 extra RPC server to handle requests from non-client nodes. 
 dfs.namenode.service.handler.count specifies the number of threads for the 
 server but the parameter is not documented anywhere.
 I found a mail for asking about the parameter. 
 http://mail-archives.apache.org/mod_mbox/hadoop-user/201505.mbox/%3CE0D5A619-BDEA-44D2-81EB-C32B8464133D%40gmail.com%3E



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8319) Erasure Coding: support decoding for stateful read

2015-05-26 Thread Walter Su (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8319?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14558978#comment-14558978
 ] 

Walter Su commented on HDFS-8319:
-

1.
{code}
int len = (int) (rangeEndInBlockGroup - rangeStartInBlockGroup + 1);
{code}
It should be {{long}}; for pread the value may be huge.

2.
{code}
cells[numCells - 1] = cells[0];
{code}
Can be removed.

3.
{code}
long cellStart = cell.idxInInternalBlk * cellSize + cell.offset;
{code}
The evaluated value is {{int}}, so the cast does nothing (see the overflow sketch after this list). It should be
{code}
long cellStart = 1L * cell.idxInInternalBlk * cellSize + cell.offset;
{code}

4.
{code}
for (AlignedStripe stripe : stripes) {
  // Parse group to get chosen DN location
  LocatedBlock[] blks = StripedBlockUtil.parseStripedBlockGroup(blockGroup,
cellSize, dataBlkNum, parityBlkNum);
{code}
should move it out of {{for}} scope.


5.
{code}
  if (alignedStripe.getSpanInBlock() == 0) {
return;
  }
{code}
Useless and should be removed; it will never be 0. Use an assertion instead to 
catch programming mistakes.

6.
{code}
  if (future != null) {
future.get();
{code}
We know it has already completed, so get() is useless.

7.
{code}
581   while (!futures.isEmpty()) {
...
593   if (r.state == StripingChunkReadResult.SUCCESSFUL) {
...
600   } else {
601 returnedChunk.state = StripingChunk.MISSING;
...
609 prepareDecodeInputs();
610 // TODO: close the failed block reader
611 for (int i = 0; i < alignedStripe.chunks.length; i++) {
612   StripingChunk chunk = alignedStripe.chunks[i];
613   Preconditions.checkNotNull(chunk);
614   if (chunk.state == StripingChunk.REQUESTED && i >= dataBlkNum) {
615 readChunk(service, blocks[i], i, corruptedBlockMap);
616   }
617 }
{code}
We already have a data chunk missing, so we should read a parity chunk; the 
{{i >= dataBlkNum}} check in line 614 needs to be removed.
A more serious problem is: if the 3rd future fails and the following futures 
are SUCCESSFUL, those futures save data to {{curStripeBuf}} and do not copy it 
again into {{decodeInputs}}, so {{decodeInputs}} loses valuable data which could 
be used to decode.
The root cause is that you only call {{prepareDecodeInputs}} *once*, and only copy 
{{curStripeBuf}} to {{decodeInputs}} *once*.

8.
{code}
  if (alignedStripe.missingChunksNum > 0) {
decode();
  } 
{code}
Need to copy the decoded result back.
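
To illustrate points 1 and 3, here is a minimal, self-contained Java sketch (the values are made up, not taken from the patch) showing how the multiplication overflows when evaluated in int arithmetic, and how a 1L factor forces the whole expression into long:
{code}
public class IntOverflowDemo {
  public static void main(String[] args) {
    int idxInInternalBlk = 40000;   // hypothetical cell index
    int cellSize = 64 * 1024;       // hypothetical 64 KB cell
    int offset = 0;

    // Evaluated entirely in int, then widened: the overflow has already happened.
    long wrong = idxInInternalBlk * cellSize + offset;

    // The 1L factor promotes the expression to long before multiplying.
    long right = 1L * idxInInternalBlk * cellSize + offset;

    System.out.println("wrong = " + wrong);  // negative, overflowed
    System.out.println("right = " + right);  // 2621440000
  }
}
{code}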

 Erasure Coding: support decoding for stateful read
 --

 Key: HDFS-8319
 URL: https://issues.apache.org/jira/browse/HDFS-8319
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Jing Zhao
Assignee: Jing Zhao
 Attachments: HDFS-8319.001.patch


 HDFS-7678 adds the decoding functionality for pread. This jira plans to add 
 decoding to stateful read.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8234) DistributedFileSystem and Globber should apply PathFilter early

2015-05-26 Thread J.Andreina (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8234?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

J.Andreina updated HDFS-8234:
-
Attachment: HDFS-8234.2.patch

Attached an updated patch fixing the checkstyle comments.
Please review.

 DistributedFileSystem and Globber should apply PathFilter early
 ---

 Key: HDFS-8234
 URL: https://issues.apache.org/jira/browse/HDFS-8234
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Rohini Palaniswamy
Assignee: J.Andreina
  Labels: newbie
 Attachments: HDFS-8234.1.patch, HDFS-8234.2.patch


 HDFS-985 added partial listing in listStatus to avoid listing entries of 
 large directory in one go. If listStatus(Path p, PathFilter f) call is made, 
 filter is applied after fetching all the entries resulting in a big list 
 being constructed on the client side. If the 
 DistributedFileSystem.listStatusInternal() applied the PathFilter it would be 
 more efficient. So DistributedFileSystem should override listStatus(Path f, 
 PathFilter filter) and apply PathFilter early. 
 Globber.java also applies filter after calling listStatus.  It should call 
 listStatus with the PathFilter.
 {code}
 FileStatus[] children = listStatus(candidate.getPath());
.
 for (FileStatus child : children) {
   // Set the child path based on the parent path.
   child.setPath(new Path(candidate.getPath(),
   child.getPath().getName()));
   if (globFilter.accept(child.getPath())) {
 newCandidates.add(child);
   }
 }
 {code}
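
As a hedged illustration of the general idea (this is not the change proposed here for DistributedFileSystem or Globber, just a caller-side sketch), a PathFilter can be applied while iterating a paged listing so unmatched entries are never collected into one big array, assuming FileSystem#listLocatedStatus is used for the listing:
{code}
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.LocatedFileStatus;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.PathFilter;
import org.apache.hadoop.fs.RemoteIterator;

/** Sketch only: apply the PathFilter while iterating, not after a full listStatus(). */
class EarlyFilterSketch {
  static List<FileStatus> listFiltered(FileSystem fs, Path dir, PathFilter filter)
      throws IOException {
    List<FileStatus> matched = new ArrayList<>();
    RemoteIterator<LocatedFileStatus> it = fs.listLocatedStatus(dir);
    while (it.hasNext()) {
      LocatedFileStatus status = it.next();
      if (filter.accept(status.getPath())) {
        matched.add(status);          // only matching entries are retained
      }
    }
    return matched;
  }
}
{code}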



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8385) Remove unused entities (variables,methods and configurations)

2015-05-26 Thread J.Andreina (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8385?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

J.Andreina updated HDFS-8385:
-
Attachment: HDFS-8385.2.patch

Attached an updated patch.
Please review.

 Remove unused entities (variables,methods and configurations) 
 --

 Key: HDFS-8385
 URL: https://issues.apache.org/jira/browse/HDFS-8385
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: J.Andreina
Assignee: J.Andreina
 Attachments: HDFS-8385.1.patch, HDFS-8385.2.patch


 Below are a few unused constants, methods and configurations which can be 
 removed.
 *DfsClientConf*
   public ChecksumOpt getDefaultChecksumOpt() {
 return defaultChecksumOpt;
   }
 *HdfsServerConstants*
   long LEASE_RECOVER_PERIOD = 10 * 1000; // in ms
   public static final String  DFS_STREAM_BUFFER_SIZE_KEY = 
 "dfs.stream-buffer-size";
   public static final int DFS_STREAM_BUFFER_SIZE_DEFAULT = 4096;
 *CommonConfigurationKeys*
   public static final String  FS_HOME_DIR_KEY = "fs.homeDir";
   public static final String  FS_HOME_DIR_DEFAULT = "/user";
   public static final String  IO_COMPRESSION_CODEC_LZO_BUFFERSIZE_KEY =
 "io.compression.codec.lzo.buffersize";
   public static final String IO_COMPRESSION_CODEC_SNAPPY_BUFFERSIZE_KEY =
   "io.compression.codec.snappy.buffersize";



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8319) Erasure Coding: support decoding for stateful read

2015-05-26 Thread Walter Su (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8319?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14559029#comment-14559029
 ] 

Walter Su commented on HDFS-8319:
-

9. Pread creates block readers in {{actualGetFromOneDataNode}}. It's a problem of 
the PositionStripeReader version of {{readChunk()}}: it creates too many readers.

10.
For the trunk branch's DFSInputStream:
stateful read may seek to the next block; it uses BlockSeekTo() to handle errors.
pread handles errors in {{actualGetFromOneDataNode}}; it uses getBlockRange() to 
locate the block.
The difference is that stateful read seeks to the next block while reading.

For the EC branch's StripedDFSInputStream:
we have no such difference in {{readChunk()}}. You have two versions of 
{{readChunk()}}; we only have to keep one of them.


 Erasure Coding: support decoding for stateful read
 --

 Key: HDFS-8319
 URL: https://issues.apache.org/jira/browse/HDFS-8319
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Jing Zhao
Assignee: Jing Zhao
 Attachments: HDFS-8319.001.patch


 HDFS-7678 adds the decoding functionality for pread. This jira plans to add 
 decoding to stateful read.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HDFS-8477) Missing ZKFC config keys can be added to hdfs-default.xml and HDFSHighAvailabilityWithQJM.md

2015-05-26 Thread kanaka kumar avvaru (JIRA)
kanaka kumar avvaru created HDFS-8477:
-

 Summary: Missing ZKFC config keys can be added to hdfs-default.xml 
and HDFSHighAvailabilityWithQJM.md
 Key: HDFS-8477
 URL: https://issues.apache.org/jira/browse/HDFS-8477
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: kanaka kumar avvaru
Assignee: kanaka kumar avvaru
Priority: Minor


1) {{dfs.ha.zkfc.port}} parameter details can be added to {{hdfs-default.xml}}

2) A {{dfs.ha.zkfc.port}} and {{dfs.ha.zkfc.nn.http.timeout.ms}} configuration 
sample can be added to the Configuring automatic failover section in 
{{HDFSHighAvailabilityWithQJM.md}}.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8385) Remove unused entities (variables,methods and configurations)

2015-05-26 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8385?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14559034#comment-14559034
 ] 

Hadoop QA commented on HDFS-8385:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:red}-1{color} | patch |   0m  0s | The patch command could not apply 
the patch during dryrun. |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12735308/HDFS-8385.2.patch |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / 9a3d617 |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/11129/console |


This message was automatically generated.

 Remove unused entities (variables,methods and configurations) 
 --

 Key: HDFS-8385
 URL: https://issues.apache.org/jira/browse/HDFS-8385
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: J.Andreina
Assignee: J.Andreina
 Attachments: HDFS-8385.1.patch, HDFS-8385.2.patch


 Below are a few unused constants, methods and configurations which can be 
 removed.
 *DfsClientConf*
   public ChecksumOpt getDefaultChecksumOpt() {
 return defaultChecksumOpt;
   }
 *HdfsServerConstants*
   long LEASE_RECOVER_PERIOD = 10 * 1000; // in ms
   public static final String  DFS_STREAM_BUFFER_SIZE_KEY = 
 "dfs.stream-buffer-size";
   public static final int DFS_STREAM_BUFFER_SIZE_DEFAULT = 4096;
 *CommonConfigurationKeys*
   public static final String  FS_HOME_DIR_KEY = "fs.homeDir";
   public static final String  FS_HOME_DIR_DEFAULT = "/user";
   public static final String  IO_COMPRESSION_CODEC_LZO_BUFFERSIZE_KEY =
 "io.compression.codec.lzo.buffersize";
   public static final String IO_COMPRESSION_CODEC_SNAPPY_BUFFERSIZE_KEY =
   "io.compression.codec.snappy.buffersize";



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8453) Erasure coding: properly assign start offset for internal blocks in a block group

2015-05-26 Thread Walter Su (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8453?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Walter Su updated HDFS-8453:

Description: 
{code}
  void actualGetFromOneDataNode(final DNAddrPair datanode,
...
  LocatedBlock block = getBlockAt(blockStartOffset);
...
  fetchBlockAt(block.getStartOffset());
{code}
The {{blockStartOffset}} here is from the inner block. For parity blocks, the 
offset will overlap with the next block group, and we may end up fetching the 
wrong block. So we have to assign a meaningful start offset for internal blocks 
in a block group, especially for parity blocks.

 Erasure coding: properly assign start offset for internal blocks in a block 
 group
 -

 Key: HDFS-8453
 URL: https://issues.apache.org/jira/browse/HDFS-8453
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Zhe Zhang
Assignee: Zhe Zhang
 Attachments: HDFS-8453-HDFS-7285.00.patch


 {code}
   void actualGetFromOneDataNode(final DNAddrPair datanode,
 ...
   LocatedBlock block = getBlockAt(blockStartOffset);
 ...
   fetchBlockAt(block.getStartOffset());
 {code}
 The {{blockStartOffset}} here is from the inner block. For parity blocks, the 
 offset will overlap with the next block group, and we may end up fetching the 
 wrong block. So we have to assign a meaningful start offset for internal blocks 
 in a block group, especially for parity blocks.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8453) Erasure coding: properly assign start offset for internal blocks in a block group

2015-05-26 Thread Walter Su (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8453?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14559096#comment-14559096
 ] 

Walter Su commented on HDFS-8453:
-

{code}
+  bg.getStartOffset(), bg.isCorrupt(), null);
{code}
The 00 patch assigns {{bg.getStartOffset()}} to the offsets of all inner blocks. I 
think that will be enough to solve the problem.

{{refreshLocatedBlock}} looks good to me. It handles {{getBlockAt}}, but you 
have to handle {{fetchBlockAt}} too.

bq. My current plan is to keep using bg.getStartOffset() + idxInBlockGroup * 
cellSize as the start offset for data blocks. For parity blocks, use -1 * 
(bg.getStartOffset() + idxInBlockGroup * cellSize).
If you are going to do that, you have to change 
{{DFSStripedInputStream.refreshLocatedBlock()}} to deal with negative offsets so 
it won't seek to the wrong block. Is that your plan for the next patch? I'm +1 for 
this idea.
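
For illustration, a minimal sketch of the offset assignment described in the quoted plan (hedged, not the actual HDFS-8453 patch; the class and method names are hypothetical):
{code}
/** Sketch only: assign start offsets to internal blocks of a striped block group. */
class StripedOffsetSketch {
  /**
   * Data blocks get a real file offset; parity blocks get the negated value,
   * which a reader-side helper such as refreshLocatedBlock() would have to
   * recognize and map back before seeking.
   */
  static long internalBlockStartOffset(long bgStartOffset, int idxInBlockGroup,
                                       int cellSize, int dataBlkNum) {
    long offset = bgStartOffset + 1L * idxInBlockGroup * cellSize;
    return idxInBlockGroup < dataBlkNum ? offset : -1L * offset;
  }
}
{code}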

 Erasure coding: properly assign start offset for internal blocks in a block 
 group
 -

 Key: HDFS-8453
 URL: https://issues.apache.org/jira/browse/HDFS-8453
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Zhe Zhang
Assignee: Zhe Zhang
 Attachments: HDFS-8453-HDFS-7285.00.patch


 {code}
   void actualGetFromOneDataNode(final DNAddrPair datanode,
 ...
   LocatedBlock block = getBlockAt(blockStartOffset);
 ...
   fetchBlockAt(block.getStartOffset());
 {code}
 The {{blockStartOffset}} here is from the inner block. For parity blocks, the 
 offset will overlap with the next block group, and we may end up fetching the 
 wrong block. So we have to assign a meaningful start offset for internal blocks 
 in a block group, especially for parity blocks.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8322) Display warning if hadoop fs -ls is showing the local filesystem

2015-05-26 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8322?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14559517#comment-14559517
 ] 

Allen Wittenauer commented on HDFS-8322:


Frankly, this seems like unnecessary complexity, but whatever.  Yes, I'll +0 if 
you lock it behind a config option that defaults to off. 

 Display warning if hadoop fs -ls is showing the local filesystem
 

 Key: HDFS-8322
 URL: https://issues.apache.org/jira/browse/HDFS-8322
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: HDFS
Affects Versions: 2.7.0
Reporter: Lei (Eddy) Xu
Assignee: Lei (Eddy) Xu
Priority: Minor
 Attachments: HDFS-8322.000.patch


 Using {{LocalFileSystem}} is rarely the intention of running {{hadoop fs 
 -ls}}.
 This JIRA proposes displaying a warning message if hadoop fs -ls is showing 
 the local filesystem or using default fs.  



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8478) Improve Solaris support in HDFS

2015-05-26 Thread Alan Burlison (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8478?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14559595#comment-14559595
 ] 

Alan Burlison commented on HDFS-8478:
-

Solaris-related changes to HADOOP and HDFS are covered under the two top-level 
issues:

HADOOP-11985 Improve Solaris support in Hadoop
HDFS-8478 Improve Solaris support in HDFS

 Improve Solaris support in HDFS
 ---

 Key: HDFS-8478
 URL: https://issues.apache.org/jira/browse/HDFS-8478
 Project: Hadoop HDFS
  Issue Type: New Feature
  Components: build, native
Affects Versions: 2.7.0
 Environment: Solaris x86, Solaris sparc
Reporter: Alan Burlison
Assignee: Alan Burlison

 At present the HDFS native components aren't fully supported on Solaris 
 primarily due to differences between Linux and Solaris. This top-level task 
 will be used to group together both existing and new issues related to this 
 work. A second goal is to improve YARN performance and functionality on 
 Solaris wherever possible.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8478) Improve Solaris support in HDFS

2015-05-26 Thread Alan Burlison (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8478?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14559597#comment-14559597
 ] 

Alan Burlison commented on HDFS-8478:
-

Solaris-related changes to HADOOP and YARN are covered under the two top-level 
issues:

HADOOP-11985 Improve Solaris support in Hadoop
YARN-3719 Improve Solaris support in YARN


 Improve Solaris support in HDFS
 ---

 Key: HDFS-8478
 URL: https://issues.apache.org/jira/browse/HDFS-8478
 Project: Hadoop HDFS
  Issue Type: New Feature
  Components: build, native
Affects Versions: 2.7.0
 Environment: Solaris x86, Solaris sparc
Reporter: Alan Burlison
Assignee: Alan Burlison

 At present the HDFS native components aren't fully supported on Solaris 
 primarily due to differences between Linux and Solaris. This top-level task 
 will be used to group together both existing and new issues related to this 
 work. A second goal is to improve YARN performance and functionality on 
 Solaris wherever possible.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8319) Erasure Coding: support decoding for stateful read

2015-05-26 Thread Jing Zhao (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8319?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14559508#comment-14559508
 ] 

Jing Zhao commented on HDFS-8319:
-

Thanks for the review, [~walter.k.su]. I replied to your comments inline below:

bq. evaluated value is int, casting is useless. Should be long cellStart = 1L 
* cell.idxInInternalBlk * cellSize + cell.offset;

The casting is just moved into the calculation, so I guess this change is 
unnecessary.

bq. We knew it's completed. So get() is useless.

get() is necessary to get the ExecutionException during the runtime.

bq. So need to remove i=dataBlkNum in line 614.

I think this is a temporary workaround from HDFS-7678 according to Zhe's 
[comment|https://issues.apache.org/jira/browse/HDFS-7678?focusedCommentId=14535803page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14535803].
 I will add a TODO for this.

bq. you only call prepareDecodeInputs once, and only copy curStripeBuf to 
decodeInputs once

I think this was originally a bug in pread before the patch. The current patch 
tries to fix this issue by moving the data copy code to the {{decode}} function.

bq. Pread create block reader at actualGetFromOneDataNode. It's a problem of 
PositionStripeReader version's readChunk(). It creates too much readers.

This is a great catch. With the new AlignedStripe logic from HDFS-7678 we may 
create too many readers. We can create a separate jira to improve this.

bq. We only have to keep one of them.

To address the above improvement we may finally use the same readChunk function 
for stateful read and pread. But this jira mainly changes the stateful read 
code to make it unified with pread. Thus I guess we can do this along with the 
pread reader improvement in a separate jira.

bq. Need to copy the decoded result back.

The pread code already does this. For stateful read I thought I could avoid an 
extra data copy by passing slices of the curStripeBuf to the decode 
function. But it looks like some decoders may change the input, so an extra data 
copy may be necessary. I will fix this later.

I will address all the other comments in my next patch. Thanks again for the 
review, Walter.
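
As a small, generic Java illustration of the {{get()}} point above (not HDFS code): even when a future has already completed, calling {{get()}} is what surfaces an {{ExecutionException}} thrown by the task:
{code}
import java.util.concurrent.*;

public class CompletedFutureGetDemo {
  public static void main(String[] args) throws InterruptedException {
    ExecutorService pool = Executors.newSingleThreadExecutor();
    Future<Void> future = pool.submit((Callable<Void>) () -> {
      throw new java.io.IOException("simulated read failure");
    });
    while (!future.isDone()) {
      Thread.sleep(1);               // the task has definitely completed after this loop
    }
    try {
      future.get();                  // without this call the failure is silently lost
    } catch (ExecutionException e) {
      System.out.println("task failed: " + e.getCause());
    } finally {
      pool.shutdown();
    }
  }
}
{code}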

 Erasure Coding: support decoding for stateful read
 --

 Key: HDFS-8319
 URL: https://issues.apache.org/jira/browse/HDFS-8319
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Jing Zhao
Assignee: Jing Zhao
 Attachments: HDFS-8319.001.patch


 HDFS-7678 adds the decoding functionality for pread. This jira plans to add 
 decoding to stateful read.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HDFS-8478) Improve Solaris support in HDFS

2015-05-26 Thread Alan Burlison (JIRA)
Alan Burlison created HDFS-8478:
---

 Summary: Improve Solaris support in HDFS
 Key: HDFS-8478
 URL: https://issues.apache.org/jira/browse/HDFS-8478
 Project: Hadoop HDFS
  Issue Type: New Feature
  Components: build, native
Affects Versions: 2.7.0
 Environment: Solaris x86, Solaris sparc
Reporter: Alan Burlison
Assignee: Alan Burlison


At present the HDFS native components aren't fully supported on Solaris 
primarily due to differences between Linux and Solaris. This top-level task 
will be used to group together both existing and new issues related to this 
work. A second goal is to improve YARN performance and functionality on Solaris 
wherever possible.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8078) HDFS client gets errors trying to to connect to IPv6 DataNode

2015-05-26 Thread Nate Edel (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8078?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14559776#comment-14559776
 ] 

Nate Edel commented on HDFS-8078:
-

Right now this was the only Java-code patch needed to get minimal IPv6 support 
on HDFS (I've been manually removing the IPv4-only command line flag; 
HADOOP-11630 is a nicer way of enabling it.)  Our goal is to run HBase on this, 
and I've found an unrelated problem with YARN on running 
IntegrationTestBigLinkedList on a multinode cluster -- I'd planned to open a 
separate JIRA on that once I have a patch.

 HDFS client gets errors trying to to connect to IPv6 DataNode
 -

 Key: HDFS-8078
 URL: https://issues.apache.org/jira/browse/HDFS-8078
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: hdfs-client
Affects Versions: 2.6.0
Reporter: Nate Edel
Assignee: Nate Edel
  Labels: BB2015-05-TBR, ipv6
 Attachments: HDFS-8078.9.patch


 1st exception, on put:
 15/03/23 18:43:18 WARN hdfs.DFSClient: DataStreamer Exception
 java.lang.IllegalArgumentException: Does not contain a valid host:port 
 authority: 2401:db00:1010:70ba:face:0:8:0:50010
   at org.apache.hadoop.net.NetUtils.createSocketAddr(NetUtils.java:212)
   at org.apache.hadoop.net.NetUtils.createSocketAddr(NetUtils.java:164)
   at org.apache.hadoop.net.NetUtils.createSocketAddr(NetUtils.java:153)
   at 
 org.apache.hadoop.hdfs.DFSOutputStream.createSocketForPipeline(DFSOutputStream.java:1607)
   at 
 org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.createBlockOutputStream(DFSOutputStream.java:1408)
   at 
 org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:1361)
   at 
 org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:588)
 Appears to actually stem from code in DataNodeID which assumes it's safe to 
 append together (ipaddr + ":" + port) -- which is OK for IPv4 and not OK for 
 IPv6.  NetUtils.createSocketAddr( ) assembles a Java URI object, which 
 requires the format proto://[2401:db00:1010:70ba:face:0:8:0]:50010
 Currently using InetAddress.getByName() to validate IPv6 (guava 
 InetAddresses.forString has been flaky) but could also use our own parsing. 
 (From logging this, it seems like a low-enough frequency call that the extra 
 object creation shouldn't be problematic, and for me the slight risk of 
 passing in bad input that is not actually an IPv4 or IPv6 address and thus 
 calling an external DNS lookup is outweighed by getting the address 
 normalized and avoiding rewriting parsing.)
 Alternatively, sun.net.util.IPAddressUtil.isIPv6LiteralAddress()
 ---
 2nd exception (on datanode)
 15/04/13 13:18:07 ERROR datanode.DataNode: 
 dev1903.prn1.facebook.com:50010:DataXceiver error processing unknown 
 operation  src: /2401:db00:20:7013:face:0:7:0:54152 dst: 
 /2401:db00:11:d010:face:0:2f:0:50010
 java.io.EOFException
 at java.io.DataInputStream.readShort(DataInputStream.java:315)
 at 
 org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:58)
 at 
 org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:226)
 at java.lang.Thread.run(Thread.java:745)
 Which also comes as client error -get: 2401 is not an IP string literal.
 This one has existing parsing logic which needs to shift to the last colon 
 rather than the first.  Should also be a tiny bit faster by using lastIndexOf 
 rather than split.  Could alternatively use the techniques above.
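
As a hedged sketch of the direction described above (not the actual HDFS-8078 patch; the helper name is made up), a host:port string can be built so that IPv6 literals stay parseable by URI-style code such as NetUtils.createSocketAddr:
{code}
/** Sketch only; not part of the patch. */
class HostPortSketch {
  /** Bracket IPv6 literals so that the trailing ":port" stays unambiguous. */
  static String toHostPort(String ipAddr, int port) {
    boolean ipv6Literal = ipAddr.indexOf(':') >= 0;  // IPv4 literals contain no colon
    return (ipv6Literal ? "[" + ipAddr + "]" : ipAddr) + ":" + port;
  }

  public static void main(String[] args) {
    // 10.108.106.59:50010
    System.out.println(toHostPort("10.108.106.59", 50010));
    // [2401:db00:1010:70ba:face:0:8:0]:50010
    System.out.println(toHostPort("2401:db00:1010:70ba:face:0:8:0", 50010));
  }
}
{code}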



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7991) Allow users to skip checkpoint when stopping NameNode

2015-05-26 Thread Jing Zhao (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7991?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14559813#comment-14559813
 ] 

Jing Zhao commented on HDFS-7991:
-

We can use this jira just to remove the original dfsadmin scripts and add a 
script hook as Allen did in his patch.

Allen, for your script patch, besides the secret shutdown hook being just a 
placeholder, it looks like you have not handled the HADOOP_OPTS issue, right?

 Allow users to skip checkpoint when stopping NameNode
 -

 Key: HDFS-7991
 URL: https://issues.apache.org/jira/browse/HDFS-7991
 Project: Hadoop HDFS
  Issue Type: Improvement
Affects Versions: 3.0.0
Reporter: Jing Zhao
Assignee: Jing Zhao
 Attachments: HDFS-7991-shellpart.patch, HDFS-7991.000.patch, 
 HDFS-7991.001.patch, HDFS-7991.002.patch, HDFS-7991.003.patch, 
 HDFS-7991.004.patch


 This is a follow-up jira of HDFS-6353. HDFS-6353 adds the functionality to 
 check if saving namespace is necessary before stopping namenode. As [~kihwal] 
 pointed out in this 
 [comment|https://issues.apache.org/jira/browse/HDFS-6353?focusedCommentId=14380898page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14380898],
  in a secured cluster this new functionality requires the user to be kinit'ed.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Reopened] (HDFS-8322) Display warning if hadoop fs -ls is showing the local filesystem

2015-05-26 Thread Lei (Eddy) Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8322?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei (Eddy) Xu reopened HDFS-8322:
-

Re-open to put warnings behind an optional configuration.

 Display warning if hadoop fs -ls is showing the local filesystem
 

 Key: HDFS-8322
 URL: https://issues.apache.org/jira/browse/HDFS-8322
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: HDFS
Affects Versions: 2.7.0
Reporter: Lei (Eddy) Xu
Assignee: Lei (Eddy) Xu
Priority: Minor
 Attachments: HDFS-8322.000.patch


 Using {{LocalFileSystem}} is rarely the intention of running {{hadoop fs 
 -ls}}.
 This JIRA proposes displaying a warning message if hadoop fs -ls is showing 
 the local filesystem or using default fs.  



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8407) libhdfs hdfsListDirectory() API has different behavior than documentation

2015-05-26 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8407?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14559753#comment-14559753
 ] 

Colin Patrick McCabe commented on HDFS-8407:


+1 pending jenkins.  Thanks, [~iwasakims].

 libhdfs hdfsListDirectory() API has different behavior than documentation
 -

 Key: HDFS-8407
 URL: https://issues.apache.org/jira/browse/HDFS-8407
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: HDFS
Reporter: Juan Yu
Assignee: Masatake Iwasaki
 Attachments: HDFS-8407.001.patch, HDFS-8407.002.patch, 
 HDFS-8407.003.patch


 The documentation says it returns NULL on error, but it could also return 
 NULL when the directory is empty.
 /** 
  * hdfsListDirectory - Get list of files/directories for a given
  * directory-path. hdfsFreeFileInfo should be called to deallocate 
 memory. 
  * @param fs The configured filesystem handle.
  * @param path The path of the directory. 
  * @param numEntries Set to the number of files/directories in path.
  * @return Returns a dynamically-allocated array of hdfsFileInfo
  * objects; NULL on error.
  */
 {code}
 hdfsFileInfo *pathList = NULL; 
 ...
 //Figure out the number of entries in that directory
 jPathListSize = (*env)->GetArrayLength(env, jPathList);
 if (jPathListSize == 0) {
 ret = 0;
 goto done;
 }
 ...
 if (ret) {
 hdfsFreeFileInfo(pathList, jPathListSize);
 errno = ret;
 return NULL;
 }
 *numEntries = jPathListSize;
 return pathList;
 {code}
 Either change the implementation to match the doc, or fix the doc to match 
 the implementation.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8407) libhdfs hdfsListDirectory() API has different behavior than documentation

2015-05-26 Thread Colin Patrick McCabe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8407?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin Patrick McCabe updated HDFS-8407:
---
Target Version/s: 2.8.0
  Status: Patch Available  (was: Open)

 libhdfs hdfsListDirectory() API has different behavior than documentation
 -

 Key: HDFS-8407
 URL: https://issues.apache.org/jira/browse/HDFS-8407
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: HDFS
Reporter: Juan Yu
Assignee: Masatake Iwasaki
 Attachments: HDFS-8407.001.patch, HDFS-8407.002.patch, 
 HDFS-8407.003.patch


 The documentation says it returns NULL on error, but it could also return 
 NULL when the directory is empty.
 {code}
 /**
  * hdfsListDirectory - Get list of files/directories for a given
  * directory-path. hdfsFreeFileInfo should be called to deallocate memory.
  * @param fs The configured filesystem handle.
  * @param path The path of the directory.
  * @param numEntries Set to the number of files/directories in path.
  * @return Returns a dynamically-allocated array of hdfsFileInfo
  * objects; NULL on error.
  */
 {code}
 {code}
 hdfsFileInfo *pathList = NULL; 
 ...
 //Figure out the number of entries in that directory
 jPathListSize = (*env)->GetArrayLength(env, jPathList);
 if (jPathListSize == 0) {
 ret = 0;
 goto done;
 }
 ...
 if (ret) {
 hdfsFreeFileInfo(pathList, jPathListSize);
 errno = ret;
 return NULL;
 }
 *numEntries = jPathListSize;
 return pathList;
 {code}
 Either change the implementation to match the doc, or fix the doc to match 
 the implementation.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8429) Death of watcherThread making other local read blocked

2015-05-26 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8429?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14559778#comment-14559778
 ] 

Colin Patrick McCabe commented on HDFS-8429:


One thing that I am concerned about is that we could double-close something in 
toAdd if the code fails in this block:
{code}
if (!(toAdd.isEmpty() && toRemove.isEmpty())) {
  // Handle pending additions (before pending removes).
  for (Iterator<Entry> iter = toAdd.iterator(); iter.hasNext(); ) {
    Entry entry = iter.next();
    DomainSocket sock = entry.getDomainSocket();
    Entry prevEntry = entries.put(sock.fd, entry);
    Preconditions.checkState(prevEntry == null,
        this + ": tried to watch a file descriptor that we " +
        "were already watching: " + sock);
    if (LOG.isTraceEnabled()) {
      LOG.trace(this + ": adding fd " + sock.fd);
    }
    fdSet.add(sock.fd);
    iter.remove();
  }
{code}

To prevent that, let's move the {{iter.remove()}} in this block up to right after the {{Entry entry = iter.next();}}.
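
For clarity, this is roughly what the reordered loop would look like (a sketch only, mirroring the snippet above; not the committed change):
{code}
for (Iterator<Entry> iter = toAdd.iterator(); iter.hasNext(); ) {
  Entry entry = iter.next();
  iter.remove();  // remove first, so a failure below cannot double-process this entry
  DomainSocket sock = entry.getDomainSocket();
  Entry prevEntry = entries.put(sock.fd, entry);
  Preconditions.checkState(prevEntry == null,
      this + ": tried to watch a file descriptor that we " +
      "were already watching: " + sock);
  if (LOG.isTraceEnabled()) {
    LOG.trace(this + ": adding fd " + sock.fd);
  }
  fdSet.add(sock.fd);
}
{code}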

+1 once that's done

 Death of watcherThread making other local read blocked
 --

 Key: HDFS-8429
 URL: https://issues.apache.org/jira/browse/HDFS-8429
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.6.0
Reporter: zhouyingchao
Assignee: zhouyingchao
 Attachments: HDFS-8429-001.patch, HDFS-8429-002.patch


 In our cluster, an application hung while doing a short-circuit read of a
 local HDFS block. By looking into the log, we found that the DataNode's
 DomainSocketWatcher.watcherThread had exited with the following log:
 {code}
 ERROR org.apache.hadoop.net.unix.DomainSocketWatcher: 
 Thread[Thread-25,5,main] terminating on unexpected exception
 java.lang.NullPointerException
 at 
 org.apache.hadoop.net.unix.DomainSocketWatcher$2.run(DomainSocketWatcher.java:463)
 at java.lang.Thread.run(Thread.java:662)
 {code}
 Line 463 is the following code snippet:
 {code}
  try {
 for (int fd : fdSet.getAndClearReadableFds()) {
    sendCallbackAndRemove("getAndClearReadableFds", entries, fdSet, fd);
 }
 {code}
 getAndClearReadableFds is a native method which will malloc an int array. 
 Since our memory is very tight, it looks like the malloc failed and a NULL 
 pointer is returned.
 The bad thing is that other threads are then blocked with a stack like this:
 {code}
 DataXceiver for client 
 unix:/home/work/app/hdfs/c3prc-micloud/datanode/dn_socket [Waiting for 
 operation #1] daemon prio=10 tid=0x7f0c9c086d90 nid=0x8fc3 waiting on 
 condition [0x7f09b9856000]
java.lang.Thread.State: WAITING (parking)
 at sun.misc.Unsafe.park(Native Method)
 - parking to wait for  0x0007b0174808 (a 
 java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject)
 at java.util.concurrent.locks.LockSupport.park(LockSupport.java:156)
 at 
 java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:1987)
 at 
 org.apache.hadoop.net.unix.DomainSocketWatcher.add(DomainSocketWatcher.java:323)
 at 
 org.apache.hadoop.hdfs.server.datanode.ShortCircuitRegistry.createNewMemorySegment(ShortCircuitRegistry.java:322)
 at 
 org.apache.hadoop.hdfs.server.datanode.DataXceiver.requestShortCircuitShm(DataXceiver.java:403)
 at 
 org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opRequestShortCircuitShm(Receiver.java:214)
 at 
 org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:95)
 at 
 org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:235)
 at java.lang.Thread.run(Thread.java:662)
 {code}
 IMO, we should exit the DN so that users can know that something went wrong
 and fix it.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8429) The DomainSocketWatcher thread should not block other threads if it dies

2015-05-26 Thread Colin Patrick McCabe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8429?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin Patrick McCabe updated HDFS-8429:
---
Summary: The DomainSocketWatcher thread should not block other threads if 
it dies  (was: Death of watcherThread making other local read blocked)

 The DomainSocketWatcher thread should not block other threads if it dies
 

 Key: HDFS-8429
 URL: https://issues.apache.org/jira/browse/HDFS-8429
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.6.0
Reporter: zhouyingchao
Assignee: zhouyingchao
 Attachments: HDFS-8429-001.patch, HDFS-8429-002.patch


 In our cluster, an application hung while doing a short-circuit read of a
 local HDFS block. By looking into the log, we found that the DataNode's
 DomainSocketWatcher.watcherThread had exited with the following log:
 {code}
 ERROR org.apache.hadoop.net.unix.DomainSocketWatcher: 
 Thread[Thread-25,5,main] terminating on unexpected exception
 java.lang.NullPointerException
 at 
 org.apache.hadoop.net.unix.DomainSocketWatcher$2.run(DomainSocketWatcher.java:463)
 at java.lang.Thread.run(Thread.java:662)
 {code}
 Line 463 is the following code snippet:
 {code}
  try {
 for (int fd : fdSet.getAndClearReadableFds()) {
    sendCallbackAndRemove("getAndClearReadableFds", entries, fdSet, fd);
 }
 {code}
 getAndClearReadableFds is a native method which will malloc an int array. 
 Since our memory is very tight, it looks like the malloc failed and a NULL 
 pointer is returned.
 The bad thing is that other threads are then blocked with a stack like this:
 {code}
 DataXceiver for client 
 unix:/home/work/app/hdfs/c3prc-micloud/datanode/dn_socket [Waiting for 
 operation #1] daemon prio=10 tid=0x7f0c9c086d90 nid=0x8fc3 waiting on 
 condition [0x7f09b9856000]
java.lang.Thread.State: WAITING (parking)
 at sun.misc.Unsafe.park(Native Method)
 - parking to wait for  0x0007b0174808 (a 
 java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject)
 at java.util.concurrent.locks.LockSupport.park(LockSupport.java:156)
 at 
 java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:1987)
 at 
 org.apache.hadoop.net.unix.DomainSocketWatcher.add(DomainSocketWatcher.java:323)
 at 
 org.apache.hadoop.hdfs.server.datanode.ShortCircuitRegistry.createNewMemorySegment(ShortCircuitRegistry.java:322)
 at 
 org.apache.hadoop.hdfs.server.datanode.DataXceiver.requestShortCircuitShm(DataXceiver.java:403)
 at 
 org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opRequestShortCircuitShm(Receiver.java:214)
 at 
 org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:95)
 at 
 org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:235)
 at java.lang.Thread.run(Thread.java:662)
 {code}
 IMO, we should exit the DN so that users can know that something went wrong
 and fix it.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8078) HDFS client gets errors trying to to connect to IPv6 DataNode

2015-05-26 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8078?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14559707#comment-14559707
 ] 

Colin Patrick McCabe commented on HDFS-8078:


Thanks, [~nkedel].  I understand the frustration at the fact that Hadoop / HDFS 
doesn't yet support ipv6.  It seems like something that we should be supporting 
in 2015.  However, I guess most cluster operators have simply found that they 
can use ipv4 link-local addresses privately, and so the push was never there.  
It sounds like this is starting to change.

I think a feature branch would be nice because then:
* You could get non-committers to +1 your patches on the feature branch (move 
faster)
* We could design a more coherent test plan for this functionality (how will we 
know if a new change breaks ipv6 functionality?  Right now we have no idea 
because Jenkins explicitly disables ipv6.)
* We could think a little more globally about what needs to be changed.  Should 
we start passing around InetSocketAddress objects inside DatanodeID instead of 
(or in addition to) host:port strings, for example?

Do you have a patch set internally that you could post?  I especially think 
unit tests would be helpful in keeping us honest here.

 HDFS client gets errors trying to to connect to IPv6 DataNode
 -

 Key: HDFS-8078
 URL: https://issues.apache.org/jira/browse/HDFS-8078
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: hdfs-client
Affects Versions: 2.6.0
Reporter: Nate Edel
Assignee: Nate Edel
  Labels: BB2015-05-TBR, ipv6
 Attachments: HDFS-8078.9.patch


 1st exception, on put:
 15/03/23 18:43:18 WARN hdfs.DFSClient: DataStreamer Exception
 java.lang.IllegalArgumentException: Does not contain a valid host:port 
 authority: 2401:db00:1010:70ba:face:0:8:0:50010
   at org.apache.hadoop.net.NetUtils.createSocketAddr(NetUtils.java:212)
   at org.apache.hadoop.net.NetUtils.createSocketAddr(NetUtils.java:164)
   at org.apache.hadoop.net.NetUtils.createSocketAddr(NetUtils.java:153)
   at 
 org.apache.hadoop.hdfs.DFSOutputStream.createSocketForPipeline(DFSOutputStream.java:1607)
   at 
 org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.createBlockOutputStream(DFSOutputStream.java:1408)
   at 
 org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:1361)
   at 
 org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:588)
 Appears to actually stem from code in DatanodeID which assumes it's safe to
 append together (ipaddr + ":" + port) -- which is OK for IPv4 and not OK for
 IPv6.  NetUtils.createSocketAddr( ) assembles a Java URI object, which
 requires the format proto://[2401:db00:1010:70ba:face:0:8:0]:50010
 Currently using InetAddress.getByName() to validate IPv6 (guava 
 InetAddresses.forString has been flaky) but could also use our own parsing. 
 (From logging this, it seems like a low-enough frequency call that the extra 
 object creation shouldn't be problematic, and for me the slight risk of 
 passing in bad input that is not actually an IPv4 or IPv6 address and thus 
 calling an external DNS lookup is outweighed by getting the address 
 normalized and avoiding rewriting parsing.)
 Alternatively, sun.net.util.IPAddressUtil.isIPv6LiteralAddress()
 ---
 2nd exception (on datanode)
 15/04/13 13:18:07 ERROR datanode.DataNode: 
 dev1903.prn1.facebook.com:50010:DataXceiver error processing unknown 
 operation  src: /2401:db00:20:7013:face:0:7:0:54152 dst: 
 /2401:db00:11:d010:face:0:2f:0:50010
 java.io.EOFException
 at java.io.DataInputStream.readShort(DataInputStream.java:315)
 at 
 org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:58)
 at 
 org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:226)
 at java.lang.Thread.run(Thread.java:745)
 This also comes through as a client error: "-get: 2401 is not an IP string literal."
 This one has existing parsing logic which needs to shift to the last colon 
 rather than the first.  Should also be a tiny bit faster by using lastIndexOf 
 rather than split.  Could alternatively use the techniques above.
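
 For illustration, last-colon parsing and bracketing of IPv6 literals could look like the following (a hypothetical helper, not the attached patch):
 {code}
 import java.net.InetSocketAddress;

 public class Ipv6AddrUtil {
   /** Split "host:port" on the LAST colon so IPv6 literals keep their internal colons. */
   static InetSocketAddress parseAddr(String target) {
     int idx = target.lastIndexOf(':');
     String host = target.substring(0, idx);
     int port = Integer.parseInt(target.substring(idx + 1));
     return new InetSocketAddress(host, port);
   }

   /** Bracket IPv6 literals when building a URI authority, e.g. [2401:db00::1]:50010. */
   static String toAuthority(String host, int port) {
     String h = (host.contains(":") && !host.startsWith("[")) ? "[" + host + "]" : host;
     return h + ":" + port;
   }
 }
 {code}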



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HDFS-8479) Erasure coding: fix striping related logic in FSDirWriteFileOp to sync with HDFS-8421

2015-05-26 Thread Zhe Zhang (JIRA)
Zhe Zhang created HDFS-8479:
---

 Summary: Erasure coding: fix striping related logic in 
FSDirWriteFileOp to sync with HDFS-8421
 Key: HDFS-8479
 URL: https://issues.apache.org/jira/browse/HDFS-8479
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Zhe Zhang
Assignee: Zhe Zhang






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8344) NameNode doesn't recover lease for files with missing blocks

2015-05-26 Thread Ravi Prakash (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8344?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14559641#comment-14559641
 ] 

Ravi Prakash commented on HDFS-8344:


It also seems that hsync simply waits for the packets to be acknowledged, which
would imply that the data persistence guarantee we give to clients takes effect
when they get an ACK. Is that right?

 NameNode doesn't recover lease for files with missing blocks
 

 Key: HDFS-8344
 URL: https://issues.apache.org/jira/browse/HDFS-8344
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Affects Versions: 2.7.0
Reporter: Ravi Prakash
Assignee: Ravi Prakash
 Attachments: HDFS-8344.01.patch, HDFS-8344.02.patch, 
 HDFS-8344.03.patch, HDFS-8344.04.patch


 I found another\(?) instance in which the lease is not recovered. This is 
 reproducible easily on a pseudo-distributed single node cluster
 # Before you start, it helps if you set the following. This is not necessary, but simply
 reduces how long you have to wait:
 {code}
   public static final long LEASE_SOFTLIMIT_PERIOD = 30 * 1000;
   public static final long LEASE_HARDLIMIT_PERIOD = 2 * 
 LEASE_SOFTLIMIT_PERIOD;
 {code}
 # Client starts to write a file. (could be less than 1 block, but it hflushed 
 so some of the data has landed on the datanodes) (I'm copying the client code 
 I am using. I generate a jar and run it using $ hadoop jar TestHadoop.jar)
 # Client crashes. (I simulate this by kill -9 the $(hadoop jar
 TestHadoop.jar) process after it has printed "Wrote to the bufferedWriter")
 # Shoot the datanode. (Since I ran on a pseudo-distributed cluster, there was 
 only 1)
 I believe the lease should be recovered and the block should be marked 
 missing. However this is not happening. The lease is never recovered.
 The effect of this bug for us was that nodes could not be decommissioned 
 cleanly. Although we knew that the client had crashed, the Namenode never 
 released the leases (even after restarting the Namenode) (even months 
 afterwards). There are actually several other cases too where we don't 
 consider what happens if ALL the datanodes die while the file is being 
 written, but I am going to punt on that for another time.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8306) Generate ACL and Xattr outputs in OIV XML outputs

2015-05-26 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8306?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14559838#comment-14559838
 ] 

Hadoop QA commented on HDFS-8306:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | pre-patch |  14m 37s | Pre-patch trunk compilation is 
healthy. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:green}+1{color} | tests included |   0m  0s | The patch appears to 
include 1 new or modified test files. |
| {color:green}+1{color} | javac |   7m 27s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |   9m 36s | There were no new javadoc 
warning messages. |
| {color:green}+1{color} | release audit |   0m 22s | The applied patch does 
not increase the total number of release audit warnings. |
| {color:green}+1{color} | checkstyle |   2m 14s | There were no new checkstyle 
issues. |
| {color:green}+1{color} | whitespace |   0m  3s | The patch has no lines that 
end in whitespace. |
| {color:green}+1{color} | install |   1m 33s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 33s | The patch built with 
eclipse:eclipse. |
| {color:green}+1{color} | findbugs |   3m  5s | The patch does not introduce 
any new Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | native |   3m 14s | Pre-build of native portion |
| {color:red}-1{color} | hdfs tests | 161m 45s | Tests failed in hadoop-hdfs. |
| | | 204m 38s | |
\\
\\
|| Reason || Tests ||
| Failed unit tests | hadoop.hdfs.TestAppendSnapshotTruncate |
|   | hadoop.hdfs.tools.offlineImageViewer.TestOfflineImageViewer |
|   | hadoop.hdfs.server.namenode.TestFileTruncate |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12735358/HDFS-8306.debug0.patch 
|
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / 022f49d |
| hadoop-hdfs test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/11130/artifact/patchprocess/testrun_hadoop-hdfs.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/11130/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf909.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/11130/console |


This message was automatically generated.

 Generate ACL and Xattr outputs in OIV XML outputs
 -

 Key: HDFS-8306
 URL: https://issues.apache.org/jira/browse/HDFS-8306
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: namenode
Affects Versions: 2.7.0
Reporter: Lei (Eddy) Xu
Assignee: Lei (Eddy) Xu
Priority: Minor
  Labels: BB2015-05-TBR
 Attachments: HDFS-8306.000.patch, HDFS-8306.001.patch, 
 HDFS-8306.002.patch, HDFS-8306.003.patch, HDFS-8306.004.patch, 
 HDFS-8306.005.patch, HDFS-8306.debug0.patch


 Currently, in the {{hdfs oiv}} XML outputs, not all fields of the fsimage are
 output. This makes inspecting an {{fsimage}} from its XML output less practical.
 It also prevents recovering an fsimage from the XML file.
 This JIRA is adding ACL and XAttrs in the XML outputs as the first step to 
 achieve the goal described in HDFS-8061.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8479) Erasure coding: fix striping related logic in FSDirWriteFileOp to sync with HDFS-8421

2015-05-26 Thread Zhe Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8479?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhe Zhang updated HDFS-8479:

Attachment: HDFS-8479-HDFS-7285.0.patch

 Erasure coding: fix striping related logic in FSDirWriteFileOp to sync with 
 HDFS-8421
 -

 Key: HDFS-8479
 URL: https://issues.apache.org/jira/browse/HDFS-8479
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Zhe Zhang
Assignee: Zhe Zhang
 Attachments: HDFS-8479-HDFS-7285.0.patch






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8322) Display warning if hadoop fs -ls is showing the local filesystem

2015-05-26 Thread Lei (Eddy) Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8322?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei (Eddy) Xu updated HDFS-8322:

Attachment: HDFS-8322.001.patch

Updated the patch to add an optional configuration; the warning message is disabled
by default.
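
A rough sketch of the gating logic (the configuration key, default, and message wording below are assumptions, not necessarily what the patch uses):
{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.LocalFileSystem;

class LsWarning {
  // Hypothetical key name; the warning is off unless explicitly enabled.
  static final String WARN_KEY = "fs.shell.ls.warn.local.filesystem";

  static void maybeWarn(Configuration conf, FileSystem fs) {
    if (conf.getBoolean(WARN_KEY, false) && fs instanceof LocalFileSystem) {
      System.err.println("Warning: 'fs -ls' is listing the local filesystem; "
          + "fs.defaultFS may not be configured.");
    }
  }
}
{code}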

 Display warning if hadoop fs -ls is showing the local filesystem
 

 Key: HDFS-8322
 URL: https://issues.apache.org/jira/browse/HDFS-8322
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: HDFS
Affects Versions: 2.7.0
Reporter: Lei (Eddy) Xu
Assignee: Lei (Eddy) Xu
Priority: Minor
 Attachments: HDFS-8322.000.patch, HDFS-8322.001.patch


 Using {{LocalFileSystem}} is rarely the intention of running {{hadoop fs 
 -ls}}.
 This JIRA proposes displaying a warning message if hadoop fs -ls is showing 
 the local filesystem or using default fs.  



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8453) Erasure coding: properly assign start offset for internal blocks in a block group

2015-05-26 Thread Zhe Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8453?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14559872#comment-14559872
 ] 

Zhe Zhang commented on HDFS-8453:
-

Thanks Walter for the review. Sorry about the confusion. 

This was my initial thought:
{quote}
LocatedBlock#offset should indicate the offset of the first byte of the block 
in the file. In a striped block group, we should properly assign this offset 
for internal blocks, so each internal block can be identified from a given 
offset.
My current plan is to keep using bg.getStartOffset() + idxInBlockGroup * 
cellSize as the start offset for data blocks. For parity blocks, use -1 * 
(bg.getStartOffset() + idxInBlockGroup * cellSize).
{quote}

This was the approach in the patch:
{quote}
Actually it's not possible to assign meaningful start offset values for all 
internal blocks, especially parity ones. Consider a block group with 1 byte of 
data. No matter how to set the start offsets for parity blocks (negative 
values, etc.), they will overlap with the next block group in the file.
 So this patch takes another approach: refactor DFSInputStream with a new
 refreshLocatedBlock method that is called whenever a located block needs to be
 refreshed, instead of calling getBlockAt as before. The refresh method can then
 be extended in DFSStripedInputStream with index handling.
{quote}

If it's still confusing, please ignore all comments and review the patch itself 
:)

 Erasure coding: properly assign start offset for internal blocks in a block 
 group
 -

 Key: HDFS-8453
 URL: https://issues.apache.org/jira/browse/HDFS-8453
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Zhe Zhang
Assignee: Zhe Zhang
 Attachments: HDFS-8453-HDFS-7285.00.patch


 {code}
   void actualGetFromOneDataNode(final DNAddrPair datanode,
 ...
   LocatedBlock block = getBlockAt(blockStartOffset);
 ...
   fetchBlockAt(block.getStartOffset());
 {code}
 The {{blockStartOffset}} here is from inner block. For parity blocks, the 
 offset will overlap with the next block group, and we may end up with 
 fetching wrong block. So we have to assign a meaningful start offset for 
 internal blocks in a block group, especially for parity blocks.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8479) Erasure coding: fix striping related logic in FSDirWriteFileOp to sync with HDFS-8421

2015-05-26 Thread Zhe Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8479?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhe Zhang updated HDFS-8479:

Status: Patch Available  (was: Open)

The patch is actually simpler than I expected. Across the EC patches we have
changed the way we check whether a file is striped / erasure-coded several
times. This patch just applies the latest version to {{FSDirWriteFileOp}}. It
also fixes a test failure because {{ThreadLocalRandom.current().nextLong()}} 
could generate negative block IDs (but the block info is contiguous).
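
(For context, {{ThreadLocalRandom.current().nextLong()}} spans the full signed 64-bit range, so roughly half of the values are negative. Below is a sketch of one way a test could obtain a non-negative ID; the actual fix in the patch may differ.)
{code}
import java.util.concurrent.ThreadLocalRandom;

class NonNegativeIds {
  /** Mask off the sign bit so the generated test block ID is always non-negative. */
  static long nextNonNegativeId() {
    return ThreadLocalRandom.current().nextLong() & Long.MAX_VALUE;
  }
}
{code}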

 Erasure coding: fix striping related logic in FSDirWriteFileOp to sync with 
 HDFS-8421
 -

 Key: HDFS-8479
 URL: https://issues.apache.org/jira/browse/HDFS-8479
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Zhe Zhang
Assignee: Zhe Zhang





--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-6022) Moving deadNodes from being thread local. Improving dead datanode handling in DFSClient

2015-05-26 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6022?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14559892#comment-14559892
 ] 

Colin Patrick McCabe commented on HDFS-6022:


Hi [~iwasakims], feel free to take this one if you want.

 Moving deadNodes from being thread local. Improving dead datanode handling in 
 DFSClient 
 

 Key: HDFS-6022
 URL: https://issues.apache.org/jira/browse/HDFS-6022
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: hdfs-client
Affects Versions: 3.0.0, 0.23.9, 0.23.10, 2.2.0, 2.3.0
Reporter: Jack Levin
Assignee: Colin Patrick McCabe
  Labels: BB2015-05-TBR, patch
 Attachments: HADOOP-6022.patch

   Original Estimate: 0h
  Remaining Estimate: 0h

 This patch solves an issue of the deadNodes list being thread local.  The deadNodes
 list is created by DFSClient when there are problems writing to, reading from, or
 contacting a datanode.  The problem is that deadNodes is not visible to
 other DFSInputStream threads, hence every DFSInputStream ends up building its
 own deadNodes list.  This affects the performance of DFSClient to a large degree,
 especially when a datanode goes completely offline (there is a tcp connect
 delay experienced by all DFSInputStream threads, affecting the performance of the
 whole cluster).
 This patch moves deadNodes to be global in DFSClient class so that as soon as 
 a single DFSInputStream thread reports a dead datanode, all other 
 DFSInputStream threads are informed, negating the need to create their own 
 independent lists (concurrent Map really). 
 Further, a global deadNodes health check manager thread (DeadNodeVerifier) is
 created to verify all dead datanodes every 5 seconds, and remove a datanode from
 the list as soon as it is up again.  Under normal conditions (deadNodes
 empty) that thread would be sleeping.  If deadNodes is not empty, the thread will
 attempt to open a tcp connection to the affected datanodes every 5 seconds.
 This patch has a test (TestDFSClientDeadNodes) that is quite simple: since
 deadNodes creation is not affected by the patch, we only test datanode
 removal from deadNodes by the health check manager thread.  The test creates
 a file in a dfs minicluster, reads from the same file rapidly, causes a datanode to
 restart, and checks whether the health check manager thread does the right thing,
 removing the now-alive datanode from the global deadNodes list.
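
 For illustration only, a minimal sketch of the shared list plus a periodic verifier as described above (names and the 5-second interval follow this description, not the attached patch):
 {code}
 import java.net.InetSocketAddress;
 import java.net.Socket;
 import java.util.concurrent.*;

 class DeadNodeTracker {
   private final ConcurrentMap<InetSocketAddress, Long> deadNodes = new ConcurrentHashMap<InetSocketAddress, Long>();
   private final ScheduledExecutorService verifier = Executors.newSingleThreadScheduledExecutor();

   DeadNodeTracker() {
     verifier.scheduleAtFixedRate(new Runnable() {
       @Override
       public void run() { recheck(); }
     }, 5, 5, TimeUnit.SECONDS);
   }

   void markDead(InetSocketAddress dn) { deadNodes.put(dn, System.currentTimeMillis()); }
   boolean isDead(InetSocketAddress dn) { return deadNodes.containsKey(dn); }

   private void recheck() {
     for (InetSocketAddress dn : deadNodes.keySet()) {
       try (Socket s = new Socket()) {
         s.connect(dn, 1000);          // the node answers again: drop it from the dead list
         deadNodes.remove(dn);
       } catch (Exception ignored) {   // still unreachable: keep it listed
       }
     }
   }
 }
 {code}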



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8443) Document dfs.namenode.service.handler.count in hdfs-site.xml

2015-05-26 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8443?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14559207#comment-14559207
 ] 

Hadoop QA commented on HDFS-8443:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | pre-patch |  14m 49s | Pre-patch trunk compilation is 
healthy. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:red}-1{color} | tests included |   0m  0s | The patch doesn't appear 
to include any new or modified tests.  Please justify why no new tests are 
needed for this patch. Also please list what manual steps were performed to 
verify this patch. |
| {color:green}+1{color} | javac |   7m 32s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |   9m 34s | There were no new javadoc 
warning messages. |
| {color:green}+1{color} | release audit |   0m 23s | The applied patch does 
not increase the total number of release audit warnings. |
| {color:green}+1{color} | whitespace |   0m  0s | The patch has no lines that 
end in whitespace. |
| {color:green}+1{color} | install |   1m 33s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 35s | The patch built with 
eclipse:eclipse. |
| {color:green}+1{color} | native |   3m 14s | Pre-build of native portion |
| {color:green}+1{color} | hdfs tests | 162m  1s | Tests passed in hadoop-hdfs. 
|
| | | 199m 45s | |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12735302/HDFS-8443.3.patch |
| Optional Tests | javadoc javac unit |
| git revision | trunk / 9a3d617 |
| hadoop-hdfs test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/11128/artifact/patchprocess/testrun_hadoop-hdfs.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/11128/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf907.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/11128/console |


This message was automatically generated.

 Document dfs.namenode.service.handler.count in hdfs-site.xml
 

 Key: HDFS-8443
 URL: https://issues.apache.org/jira/browse/HDFS-8443
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: documentation
Reporter: Akira AJISAKA
Assignee: J.Andreina
 Attachments: HDFS-8443.1.patch, HDFS-8443.2.patch, HDFS-8443.3.patch


 When dfs.namenode.servicerpc-address is configured, NameNode launches an 
 extra RPC server to handle requests from non-client nodes. 
 dfs.namenode.service.handler.count specifies the number of threads for the 
 server but the parameter is not documented anywhere.
 I found a mail asking about the parameter:
 http://mail-archives.apache.org/mod_mbox/hadoop-user/201505.mbox/%3CE0D5A619-BDEA-44D2-81EB-C32B8464133D%40gmail.com%3E
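
 For reference, the property entry such a documentation patch would add to hdfs-default.xml could look like this (the default value shown is an assumption; the patch has the authoritative text):
 {code}
 <property>
   <name>dfs.namenode.service.handler.count</name>
   <value>10</value>
   <description>The number of NameNode RPC server threads that listen to requests
   from DataNodes and other non-client nodes, used when
   dfs.namenode.servicerpc-address is configured.</description>
 </property>
 {code}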



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8234) DistributedFileSystem and Globber should apply PathFilter early

2015-05-26 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8234?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14559232#comment-14559232
 ] 

Hadoop QA commented on HDFS-8234:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | pre-patch |  14m 38s | Pre-patch trunk compilation is 
healthy. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:green}+1{color} | tests included |   0m  0s | The patch appears to 
include 1 new or modified test files. |
| {color:green}+1{color} | javac |   7m 28s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |   9m 36s | There were no new javadoc 
warning messages. |
| {color:green}+1{color} | release audit |   0m 23s | The applied patch does 
not increase the total number of release audit warnings. |
| {color:red}-1{color} | checkstyle |   2m 12s | The applied patch generated  1 
new checkstyle issues (total was 4, now 4). |
| {color:red}-1{color} | checkstyle |   3m 21s | The applied patch generated  1 
new checkstyle issues (total was 48, now 48). |
| {color:green}+1{color} | whitespace |   0m  0s | The patch has no lines that 
end in whitespace. |
| {color:green}+1{color} | install |   1m 32s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 32s | The patch built with 
eclipse:eclipse. |
| {color:green}+1{color} | findbugs |   4m 45s | The patch does not introduce 
any new Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | common tests |  23m  2s | Tests passed in 
hadoop-common. |
| {color:green}+1{color} | hdfs tests | 164m 11s | Tests passed in hadoop-hdfs. 
|
| | | 229m 32s | |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12735300/HDFS-8234.2.patch |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / 9a3d617 |
| checkstyle |  
https://builds.apache.org/job/PreCommit-HDFS-Build/11127/artifact/patchprocess/diffcheckstylehadoop-common.txt
 
https://builds.apache.org/job/PreCommit-HDFS-Build/11127/artifact/patchprocess/diffcheckstylehadoop-hdfs.txt
 |
| hadoop-common test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/11127/artifact/patchprocess/testrun_hadoop-common.txt
 |
| hadoop-hdfs test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/11127/artifact/patchprocess/testrun_hadoop-hdfs.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/11127/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf906.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/11127/console |


This message was automatically generated.

 DistributedFileSystem and Globber should apply PathFilter early
 ---

 Key: HDFS-8234
 URL: https://issues.apache.org/jira/browse/HDFS-8234
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Rohini Palaniswamy
Assignee: J.Andreina
  Labels: newbie
 Attachments: HDFS-8234.1.patch, HDFS-8234.2.patch


 HDFS-985 added partial listing in listStatus to avoid listing the entries of a
 large directory in one go. If a listStatus(Path p, PathFilter f) call is made,
 the filter is applied after fetching all the entries, resulting in a big list
 being constructed on the client side. It would be more efficient if
 DistributedFileSystem.listStatusInternal() applied the PathFilter. So
 DistributedFileSystem should override listStatus(Path f, PathFilter filter)
 and apply the PathFilter early.
 Globber.java also applies filter after calling listStatus.  It should call 
 listStatus with the PathFilter.
 {code}
 FileStatus[] children = listStatus(candidate.getPath());
 ...
 for (FileStatus child : children) {
   // Set the child path based on the parent path.
   child.setPath(new Path(candidate.getPath(),
       child.getPath().getName()));
   if (globFilter.accept(child.getPath())) {
     newCandidates.add(child);
   }
 }
 {code}
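
 A sketch of applying the filter while the listing is consumed instead of after the full list is built (the helper below is illustrative, not the attached patch):
 {code}
 import java.io.IOException;
 import java.util.ArrayList;
 import java.util.List;
 import org.apache.hadoop.fs.FileStatus;
 import org.apache.hadoop.fs.FileSystem;
 import org.apache.hadoop.fs.Path;
 import org.apache.hadoop.fs.PathFilter;
 import org.apache.hadoop.fs.RemoteIterator;

 class EarlyFilterListing {
   /** Hypothetical helper: filter each entry of the partial listing as it arrives,
    *  so unmatched entries are never accumulated on the client. */
   static List<FileStatus> listStatusFiltered(FileSystem fs, Path dir, PathFilter filter)
       throws IOException {
     List<FileStatus> results = new ArrayList<FileStatus>();
     RemoteIterator<FileStatus> it = fs.listStatusIterator(dir);
     while (it.hasNext()) {
       FileStatus st = it.next();
       if (filter.accept(st.getPath())) {
         results.add(st);
       }
     }
     return results;
   }
 }
 {code}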



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8401) Memfs - a layered file system for in-memory storage in HDFS

2015-05-26 Thread Arpit Agarwal (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8401?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14560011#comment-14560011
 ] 

Arpit Agarwal commented on HDFS-8401:
-

bq. I'm not sure I see the advantage of having a separate file system, rather than simply putting this into HDFS.
bq. Can you clarify how users would interact with this system?
Colin, our goal is making it easier for applications to use memory support in 
HDFS specifically and Hadoop Compatible File Systems in general.
# Allow using memory features without calling HDFS-specific APIs. This also 
isolates applications from evolving APIs. Applications currently use shims and 
reflection tricks to work with different versions of HDFS. 
# Once applications start using memfs someone could write a memfs layer over 
another HCFS e.g. Amazon S3. 

memfs itself will not cache any data when used with hdfs. As for interaction,
applications can choose to use {{memfs://}} paths instead of {{hdfs://}} paths
for data targeted to memory.

bq. How does this relate to DDM?
There is no immediate plan to introduce a discardable namespace.

 Memfs - a layered file system for in-memory storage in HDFS
 ---

 Key: HDFS-8401
 URL: https://issues.apache.org/jira/browse/HDFS-8401
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Arpit Agarwal
Assignee: Arpit Agarwal

 We propose creating a layered filesystem that can provide in-memory storage 
 using existing features within HDFS. memfs will use lazy persist writes 
 introduced by HDFS-6581. For reads, memfs can use the Centralized Cache 
 Management feature introduced in HDFS-4949 to load hot data to memory.
 Paths in memfs and hdfs will correspond 1:1 so memfs will require no 
 additional metadata and it can be implemented entirely as a client-side 
 library.
 The advantage of a layered file system is that it requires little or no 
 changes to existing applications. e.g. Applications can use something like 
 {{memfs://}} instead of {{hdfs://}} for files targeted to memory storage. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8480) Fix performance and timeout issues in HDFS-7929: use hard-links instead of copying edit logs

2015-05-26 Thread Zhe Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8480?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhe Zhang updated HDFS-8480:

Summary: Fix performance and timeout issues in HDFS-7929: use hard-links 
instead of copying edit logs  (was: Fixing performance and timeout issues in 
HDFS-7929: use hard-links instead of copying edit logs)

 Fix performance and timeout issues in HDFS-7929: use hard-links instead of 
 copying edit logs
 

 Key: HDFS-8480
 URL: https://issues.apache.org/jira/browse/HDFS-8480
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Zhe Zhang
Assignee: Zhe Zhang

 HDFS-7929 copies existing edit logs to the storage directory of the upgraded 
 {{NameNode}}. This slows down the upgrade process. This JIRA aims to use 
 hard-linking instead of per-op copying to achieve the same goal.
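
 For illustration, hard-linking a finalized segment with NIO instead of rewriting it op by op could be as simple as the following (paths are placeholders; this is a sketch, not the actual patch):
 {code}
 import java.io.IOException;
 import java.nio.file.Files;
 import java.nio.file.Path;

 class EditLogLinker {
   /** Create 'target' as a hard link to an existing finalized segment; fall back to a
    *  plain copy if the underlying filesystem does not support hard links. */
   static void linkOrCopy(Path existingSegment, Path target) throws IOException {
     try {
       Files.createLink(target, existingSegment);
     } catch (UnsupportedOperationException e) {
       Files.copy(existingSegment, target);
     }
   }
 }
 {code}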



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8479) Erasure coding: fix striping related logic in FSDirWriteFileOp to sync with HDFS-8421

2015-05-26 Thread Jing Zhao (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8479?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14560097#comment-14560097
 ] 

Jing Zhao commented on HDFS-8479:
-

Looks like the failure is caused by HADOOP-11847. I will commit the patch.

 Erasure coding: fix striping related logic in FSDirWriteFileOp to sync with 
 HDFS-8421
 -

 Key: HDFS-8479
 URL: https://issues.apache.org/jira/browse/HDFS-8479
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Zhe Zhang
Assignee: Zhe Zhang
 Attachments: HDFS-8479-HDFS-7285.0.patch






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8476) quota can't limit the file which put before setting the storage policy

2015-05-26 Thread Xiaoyu Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8476?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HDFS-8476:
-
Labels: QBST  (was: )

 quota can't limit the file which put before setting the storage policy
 --

 Key: HDFS-8476
 URL: https://issues.apache.org/jira/browse/HDFS-8476
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: HDFS
Affects Versions: 2.7.0
Reporter: tongshiquan
Assignee: kanaka kumar avvaru
Priority: Minor
  Labels: QBST

 test steps:
 1. hdfs dfs -mkdir /HOT
 2. hdfs dfs -put 1G.txt /HOT/file1
 3. hdfs dfsadmin -setSpaceQuota 6442450944 -storageType DISK /HOT
 4. hdfs storagepolicies -setStoragePolicy -path /HOT -policy HOT
 5. hdfs dfs -put 1G.txt /HOT/file2
 6. hdfs dfs -put 1G.txt /HOT/file3
 7. hdfs dfs -count -q -h -v -t DISK /HOT
 In step 6 the file put should fail, because /HOT/file1 and /HOT/file2 have reached
 the directory /HOT space quota of 6GB (1G*3 replicas + 1G*3 replicas), but here
 it succeeds, and in step 7 count shows the remaining quota as -3GB.
 FYI, if the order of step 3 and step 4 is swapped, then it behaves normally.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8409) HDFS client RPC call throws java.lang.IllegalStateException

2015-05-26 Thread Juan Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8409?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Juan Yu updated HDFS-8409:
--
Attachment: HDFS-8409.003.patch

Fix whitespace. 
The failed test is not related to my change and it passes locally.

 HDFS client RPC call throws java.lang.IllegalStateException
 -

 Key: HDFS-8409
 URL: https://issues.apache.org/jira/browse/HDFS-8409
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: hdfs-client
Reporter: Juan Yu
Assignee: Juan Yu
 Attachments: HDFS-8409.001.patch, HDFS-8409.002.patch, 
 HDFS-8409.003.patch


 When an HDFS client RPC call needs to retry, it sometimes throws
 java.lang.IllegalStateException; the retry is aborted and the client call fails.
 {code}
 Caused by: java.lang.IllegalStateException
   at 
 com.google.common.base.Preconditions.checkState(Preconditions.java:129)
   at org.apache.hadoop.ipc.Client.setCallIdAndRetryCount(Client.java:116)
   at 
 org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:99)
   at com.sun.proxy.$Proxy16.getFileInfo(Unknown Source)
   at org.apache.hadoop.hdfs.DFSClient.getFileInfo(DFSClient.java:1912)
   at 
 org.apache.hadoop.hdfs.DistributedFileSystem$17.doCall(DistributedFileSystem.java:1089)
   at 
 org.apache.hadoop.hdfs.DistributedFileSystem$17.doCall(DistributedFileSystem.java:1085)
   at 
 org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
   at 
 org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1085)
   at org.apache.hadoop.fs.FileSystem.exists(FileSystem.java:1400)
 {code}
 Here is the check that throws exception
 {code}
   public static void setCallIdAndRetryCount(int cid, int rc) {
   ...
   Preconditions.checkState(callId.get() == null);
   }
 {code}
 The RetryInvocationHandler tries to call it with a non-null callId, which causes
 the exception.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8479) Erasure coding: fix striping related logic in FSDirWriteFileOp to sync with HDFS-8421

2015-05-26 Thread Zhe Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8479?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14560068#comment-14560068
 ] 

Zhe Zhang commented on HDFS-8479:
-

Thanks Jing for the review!

 Erasure coding: fix striping related logic in FSDirWriteFileOp to sync with 
 HDFS-8421
 -

 Key: HDFS-8479
 URL: https://issues.apache.org/jira/browse/HDFS-8479
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Zhe Zhang
Assignee: Zhe Zhang
 Attachments: HDFS-8479-HDFS-7285.0.patch






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8479) Erasure coding: fix striping related logic in FSDirWriteFileOp to sync with HDFS-8421

2015-05-26 Thread Jing Zhao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8479?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jing Zhao updated HDFS-8479:

   Resolution: Fixed
Fix Version/s: HDFS-7285
 Hadoop Flags: Reviewed
   Status: Resolved  (was: Patch Available)

I've committed this. Thanks again Zhe for the contribution!

 Erasure coding: fix striping related logic in FSDirWriteFileOp to sync with 
 HDFS-8421
 -

 Key: HDFS-8479
 URL: https://issues.apache.org/jira/browse/HDFS-8479
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Zhe Zhang
Assignee: Zhe Zhang
 Fix For: HDFS-7285

 Attachments: HDFS-8479-HDFS-7285.0.patch






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8479) Erasure coding: fix striping related logic in FSDirWriteFileOp to sync with HDFS-8421

2015-05-26 Thread Jing Zhao (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8479?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14560114#comment-14560114
 ] 

Jing Zhao commented on HDFS-8479:
-

Maybe we can open a new jira to include this change and also fix all the 
workaround for decoding?

 Erasure coding: fix striping related logic in FSDirWriteFileOp to sync with 
 HDFS-8421
 -

 Key: HDFS-8479
 URL: https://issues.apache.org/jira/browse/HDFS-8479
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Zhe Zhang
Assignee: Zhe Zhang
 Fix For: HDFS-7285

 Attachments: HDFS-8479-HDFS-7285.0.patch






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7984) webhdfs:// needs to support provided delegation tokens

2015-05-26 Thread Anthony Hsu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7984?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14560039#comment-14560039
 ] 

Anthony Hsu commented on HDFS-7984:
---

Actually, it seems that WebHdfsFileSystem *does* use the tokens in 
{{HADOOP_TOKEN_FILE_LOCATION}} (under the hood, it's all handled by 
{{UserGroupInformation}}). My mistake earlier was that I was fetching 
delegation tokens for {{hdfs://}} rather than {{webhdfs://}}. Once I fixed 
this, setting {{HADOOP_TOKEN_FILE_LOCATION}} worked as expected.

 webhdfs:// needs to support provided delegation tokens
 --

 Key: HDFS-7984
 URL: https://issues.apache.org/jira/browse/HDFS-7984
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: webhdfs
Affects Versions: 3.0.0
Reporter: Allen Wittenauer
Priority: Blocker

 When using the webhdfs:// filesystem (especially from distcp), we need the
 ability to inject a delegation token rather than having webhdfs initialize its own.
 This would allow for cross-authentication-zone file system accesses.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8306) Generate ACL and Xattr outputs in OIV XML outputs

2015-05-26 Thread Lei (Eddy) Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8306?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei (Eddy) Xu updated HDFS-8306:

Attachment: HDFS-8306.debug1.patch

Changed {{PrintStream}} to UTF-8 as well.
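
For reference, constructing the stream with an explicit UTF-8 charset rather than the platform default looks roughly like this (a generic sketch, not the exact patch hunk; the output path is a placeholder):
{code}
import java.io.FileOutputStream;
import java.io.PrintStream;
import java.nio.charset.StandardCharsets;

public class Utf8Out {
  public static void main(String[] args) throws Exception {
    // Write XML output through an explicitly UTF-8 PrintStream.
    PrintStream out = new PrintStream(
        new FileOutputStream("fsimage.xml"), false, StandardCharsets.UTF_8.name());
    out.println("<fsimage/>");
    out.close();
  }
}
{code}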

 Generate ACL and Xattr outputs in OIV XML outputs
 -

 Key: HDFS-8306
 URL: https://issues.apache.org/jira/browse/HDFS-8306
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: namenode
Affects Versions: 2.7.0
Reporter: Lei (Eddy) Xu
Assignee: Lei (Eddy) Xu
Priority: Minor
  Labels: BB2015-05-TBR
 Attachments: HDFS-8306.000.patch, HDFS-8306.001.patch, 
 HDFS-8306.002.patch, HDFS-8306.003.patch, HDFS-8306.004.patch, 
 HDFS-8306.005.patch, HDFS-8306.debug0.patch, HDFS-8306.debug1.patch


 Currently, in the {{hdfs oiv}} XML outputs, not all fields of the fsimage are
 output. This makes inspecting an {{fsimage}} from its XML output less practical.
 It also prevents recovering an fsimage from the XML file.
 This JIRA is adding ACL and XAttrs in the XML outputs as the first step to 
 achieve the goal described in HDFS-8061.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8479) Erasure coding: fix striping related logic in FSDirWriteFileOp to sync with HDFS-8421

2015-05-26 Thread Zhe Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8479?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14560103#comment-14560103
 ] 

Zhe Zhang commented on HDFS-8479:
-

Thanks Jing. I was about to include the following change:
{code}
  /**
   * Decode based on the given input buffers and schema
   */
  public static void decodeAndFillBuffer(final byte[][] decodeInputs, byte[] 
buf,
  AlignedStripe alignedStripe, int dataBlkNum, int parityBlkNum) {
int[] decodeIndices = new int[parityBlkNum];
int pos = 0;
for (int i = 0; i < alignedStripe.chunks.length; i++) {
  if (alignedStripe.chunks[i].state != StripingChunk.FETCHED &&
      alignedStripe.chunks[i].state != StripingChunk.ALLZERO) {
decodeIndices[pos++] = i;
decodeInputs[i] = null;
  }
}
{code}

Basically,  HADOOP-11847 requires us to leave to-be-decoded slots as null.

 Erasure coding: fix striping related logic in FSDirWriteFileOp to sync with 
 HDFS-8421
 -

 Key: HDFS-8479
 URL: https://issues.apache.org/jira/browse/HDFS-8479
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Zhe Zhang
Assignee: Zhe Zhang
 Fix For: HDFS-7285

 Attachments: HDFS-8479-HDFS-7285.0.patch






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HDFS-8481) Erasure coding: remove workarounds for codec calculation

2015-05-26 Thread Zhe Zhang (JIRA)
Zhe Zhang created HDFS-8481:
---

 Summary: Erasure coding: remove workarounds for codec calculation
 Key: HDFS-8481
 URL: https://issues.apache.org/jira/browse/HDFS-8481
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Zhe Zhang
Assignee: Zhe Zhang


After HADOOP-11847 and related fixes, we should be able to properly calculate 
decoded contents.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8479) Erasure coding: fix striping related logic in FSDirWriteFileOp to sync with HDFS-8421

2015-05-26 Thread Jing Zhao (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8479?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14560067#comment-14560067
 ] 

Jing Zhao commented on HDFS-8479:
-

Thanks for the merge, Zhe! The patch looks pretty good to me. +1. I will commit 
it shortly.

 Erasure coding: fix striping related logic in FSDirWriteFileOp to sync with 
 HDFS-8421
 -

 Key: HDFS-8479
 URL: https://issues.apache.org/jira/browse/HDFS-8479
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Zhe Zhang
Assignee: Zhe Zhang
 Attachments: HDFS-8479-HDFS-7285.0.patch






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HDFS-8480) Fixing performance and timeout issues in HDFS-7929: use hard-links instead of copying edit logs

2015-05-26 Thread Zhe Zhang (JIRA)
Zhe Zhang created HDFS-8480:
---

 Summary: Fixing performance and timeout issues in HDFS-7929: use 
hard-links instead of copying edit logs
 Key: HDFS-8480
 URL: https://issues.apache.org/jira/browse/HDFS-8480
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Zhe Zhang
Assignee: Zhe Zhang


HDFS-7929 copies existing edit logs to the storage directory of the upgraded 
{{NameNode}}. This slows down the upgrade process. This JIRA aims to use 
hard-linking instead of per-op copying to achieve the same goal.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8479) Erasure coding: fix striping related logic in FSDirWriteFileOp to sync with HDFS-8421

2015-05-26 Thread Jing Zhao (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8479?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14560084#comment-14560084
 ] 

Jing Zhao commented on HDFS-8479:
-

BTW, while running tests after the merge, I got this test failure:
{code}
Running org.apache.hadoop.hdfs.TestDFSStripedInputStream
Tests run: 4, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 24.727 sec  
FAILURE! - in org.apache.hadoop.hdfs.TestDFSStripedInputStream
testPreadWithDNFailure(org.apache.hadoop.hdfs.TestDFSStripedInputStream)  Time 
elapsed: 3.329 sec   ERROR!
org.apache.hadoop.HadoopIllegalArgumentException: Inputs not fully 
corresponding to erasedIndexes in null places
at 
org.apache.hadoop.io.erasurecode.rawcoder.RSRawDecoder.doDecode(RSRawDecoder.java:132)
at 
org.apache.hadoop.io.erasurecode.rawcoder.AbstractRawErasureDecoder.decode(AbstractRawErasureDecoder.java:113)
at 
org.apache.hadoop.hdfs.util.StripedBlockUtil.decodeAndFillBuffer(StripedBlockUtil.java:291)
at 
org.apache.hadoop.hdfs.DFSStripedInputStream.fetchOneStripe(DFSStripedInputStream.java:594)
at 
org.apache.hadoop.hdfs.DFSStripedInputStream.fetchBlockByteRange(DFSStripedInputStream.java:520)
at org.apache.hadoop.hdfs.DFSInputStream.pread(DFSInputStream.java:1472)
at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:1435)
at 
org.apache.hadoop.hdfs.TestDFSStripedInputStream.testPreadWithDNFailure(TestDFSStripedInputStream.java:242)
{code}

Zhe, do you want to include the fix here or do it in a separate jira?

 Erasure coding: fix striping related logic in FSDirWriteFileOp to sync with 
 HDFS-8421
 -

 Key: HDFS-8479
 URL: https://issues.apache.org/jira/browse/HDFS-8479
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Zhe Zhang
Assignee: Zhe Zhang
 Attachments: HDFS-8479-HDFS-7285.0.patch






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8479) Erasure coding: fix striping related logic in FSDirWriteFileOp to sync with HDFS-8421

2015-05-26 Thread Zhe Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8479?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14560120#comment-14560120
 ] 

Zhe Zhang commented on HDFS-8479:
-

Good idea Jing. I just filed HDFS-8481

---
Zhe Zhang




 Erasure coding: fix striping related logic in FSDirWriteFileOp to sync with 
 HDFS-8421
 -

 Key: HDFS-8479
 URL: https://issues.apache.org/jira/browse/HDFS-8479
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Zhe Zhang
Assignee: Zhe Zhang
 Fix For: HDFS-7285

 Attachments: HDFS-8479-HDFS-7285.0.patch






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8476) quota can't limit the file which put before setting the storage policy

2015-05-26 Thread Xiaoyu Yao (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8476?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14560140#comment-14560140
 ] 

Xiaoyu Yao commented on HDFS-8476:
--

[~tongshiquan], thanks for reporting the issue. I tried the scenario on a
single-node cluster with a 150MB file and a proportionally smaller quota of 400MB,
but cannot repro it.

{code}
$ hdfs dfs -mkdir /HOT
$ hdfs dfs -put hadoop-3.0.0-SNAPSHOT.tar.gz /HOT/FILE1
$ hdfs dfsadmin -setSpaceQuota 400M -storageType DISK /HOT
$ hdfs storagepolicies -setStoragePolicy -path /HOT -policy HOT
Set storage policy HOT on /HOT
$ hdfs dfs -count -q -h -v -t DISK /HOT
       DISK_QUOTA   REM_DISK_QUOTA PATHNAME
            400 M          256.6 M /HOT
$ hdfs dfs -put hadoop-3.0.0-SNAPSHOT.tar.gz /HOT/FILE2
$ hdfs dfs -count -q -h -v -t DISK /HOT
       DISK_QUOTA   REM_DISK_QUOTA PATHNAME
            400 M          113.3 M /HOT
$ hdfs dfs -put hadoop-3.0.0-SNAPSHOT.tar.gz /HOT/FILE3
put: Quota by storage type : DISK on path : /HOT is exceeded. quota = 400 MB 
but space consumed = 414.71 MB
{code}

 quota can't limit the file which put before setting the storage policy
 --

 Key: HDFS-8476
 URL: https://issues.apache.org/jira/browse/HDFS-8476
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: HDFS
Affects Versions: 2.7.0
Reporter: tongshiquan
Assignee: kanaka kumar avvaru
Priority: Minor
  Labels: QBST

 test steps:
 1. hdfs dfs -mkdir /HOT
 2. hdfs dfs -put 1G.txt /HOT/file1
 3. hdfs dfsadmin -setSpaceQuota 6442450944 -storageType DISK /HOT
 4. hdfs storagepolicies -setStoragePolicy -path /HOT -policy HOT
 5. hdfs dfs -put 1G.txt /HOT/file2
 6. hdfs dfs -put 1G.txt /HOT/file3
 7. hdfs dfs -count -q -h -v -t DISK /HOT
 In step 6 the file put should fail, because /HOT/file1 and /HOT/file2 have reached
 the directory /HOT space quota of 6GB (1G*3 replicas + 1G*3 replicas), but here
 it succeeds, and in step 7 count shows the remaining quota as -3GB.
 FYI, if the order of step 3 and step 4 is swapped, then it behaves normally.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8306) Generate ACL and Xattr outputs in OIV XML outputs

2015-05-26 Thread Lei (Eddy) Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8306?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei (Eddy) Xu updated HDFS-8306:

Attachment: HDFS-8306.debug0.patch

Still cannot reproduce the error on either OS X or Linux machines.  Uploaded a
debug patch to print more debug information.

 Generate ACL and Xattr outputs in OIV XML outputs
 -

 Key: HDFS-8306
 URL: https://issues.apache.org/jira/browse/HDFS-8306
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: namenode
Affects Versions: 2.7.0
Reporter: Lei (Eddy) Xu
Assignee: Lei (Eddy) Xu
Priority: Minor
  Labels: BB2015-05-TBR
 Attachments: HDFS-8306.000.patch, HDFS-8306.001.patch, 
 HDFS-8306.002.patch, HDFS-8306.003.patch, HDFS-8306.004.patch, 
 HDFS-8306.005.patch, HDFS-8306.debug0.patch


 Currently, in the {{hdfs oiv}} XML outputs, not all fields of the fsimage are
 output. This makes inspecting an {{fsimage}} from its XML output less practical.
 It also prevents recovering an fsimage from the XML file.
 This JIRA is adding ACL and XAttrs in the XML outputs as the first step to 
 achieve the goal described in HDFS-8061.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8322) Display warning if hadoop fs -ls is showing the local filesystem

2015-05-26 Thread Lei (Eddy) Xu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8322?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14559469#comment-14559469
 ] 

Lei (Eddy) Xu commented on HDFS-8322:
-

Hey, [~aw]. We'd still like to have this warning capability, as it is useful
for some customers.
Would you give a +0 if we display the warning behind an optional configuration,
which disables the warnings by default, as [~andrew.wang] suggested?

Looking forward to hearing from you.


 Display warning if hadoop fs -ls is showing the local filesystem
 

 Key: HDFS-8322
 URL: https://issues.apache.org/jira/browse/HDFS-8322
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: HDFS
Affects Versions: 2.7.0
Reporter: Lei (Eddy) Xu
Assignee: Lei (Eddy) Xu
Priority: Minor
 Attachments: HDFS-8322.000.patch


 Using {{LocalFileSystem}} is rarely the intention of running {{hadoop fs 
 -ls}}.
 This JIRA proposes displaying a warning message if hadoop fs -ls is showing 
 the local filesystem or using default fs.  



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)