[jira] [Created] (HDFS-7315) DFSTestUtil.readFileBuffer opens extra FSDataInputStream

2014-10-30 Thread Plamen Jeliazkov (JIRA)
Plamen Jeliazkov created HDFS-7315:
--

 Summary: DFSTestUtil.readFileBuffer opens extra FSDataInputStream
 Key: HDFS-7315
 URL: https://issues.apache.org/jira/browse/HDFS-7315
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Plamen Jeliazkov
Assignee: Plamen Jeliazkov
Priority: Trivial


DFSTestUtil.readFileBuffer() calls FileSystem.open() twice: once just under the 
try statement, and once inside the IOUtils.copyBytes() call.
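
For illustration, here is a minimal sketch of the pattern being described (a 
simplified rendering, not the exact DFSTestUtil source):

{code}
// Simplified sketch of the double-open pattern (not the exact DFSTestUtil source).
// Assumes org.apache.hadoop.fs.{FileSystem, FSDataInputStream, Path},
// org.apache.hadoop.io.IOUtils, java.io.ByteArrayOutputStream, java.io.IOException.
public static byte[] readFileBuffer(FileSystem fs, Path fileName) throws IOException {
  ByteArrayOutputStream os = new ByteArrayOutputStream();
  try {
    FSDataInputStream in = fs.open(fileName);             // first open, never read from
    IOUtils.copyBytes(fs.open(fileName), os, 1024, true); // second open does the copy
    return os.toByteArray();
  } finally {
    os.close();
  }
}
{code}

The fix would presumably be to pass the already opened stream (in) to 
IOUtils.copyBytes() rather than opening the file a second time.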



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (HDFS-6963) Test partial failures when loading / removing volumes.

2014-10-30 Thread Lei (Eddy) Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6963?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei (Eddy) Xu resolved HDFS-6963.
-
Resolution: Duplicate

> Test partial failures when loading / removing volumes.
> --
>
> Key: HDFS-6963
> URL: https://issues.apache.org/jira/browse/HDFS-6963
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode
>Affects Versions: 2.5.0
>Reporter: Lei (Eddy) Xu
>Assignee: Lei (Eddy) Xu
>
> Test the cases where there are partial failures while loading new volumes. 
> The expected behavior is to make a best effort to load all good volumes 
> successfully, and then report the failures.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HDFS-7314) Aborted DFSClient's impact on long running service like YARN

2014-10-30 Thread Ming Ma (JIRA)
Ming Ma created HDFS-7314:
-

 Summary: Aborted DFSClient's impact on long running service like 
YARN
 Key: HDFS-7314
 URL: https://issues.apache.org/jira/browse/HDFS-7314
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Ming Ma


This happened in a YARN NodeManager scenario, but it could happen to any long 
running service that uses a cached instance of DistributedFileSystem.

1. The active NN is under heavy load, so it becomes unavailable for 10 minutes; 
any DFSClient request gets a ConnectTimeoutException.

2. The YARN NodeManager uses DFSClient for certain write operations, such as the 
log aggregator or the shared cache in YARN-1492. The renewLease RPC of the 
DFSClient used by the YARN NM gets a ConnectTimeoutException.

{noformat}
2014-10-29 01:36:19,559 WARN org.apache.hadoop.hdfs.LeaseRenewer: Failed to 
renew lease for [DFSClient_NONMAPREDUCE_-550838118_1] for 372 seconds.  
Aborting ...
{noformat}


3. After the DFSClient is in the Aborted state, the YARN NM can't use that cached 
instance of DistributedFileSystem.

{noformat}
2014-10-29 20:26:23,991 INFO 
org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ResourceLocalizationService:
 Failed to download rsrc...
java.io.IOException: Filesystem closed
at org.apache.hadoop.hdfs.DFSClient.checkOpen(DFSClient.java:727)
at org.apache.hadoop.hdfs.DFSClient.getFileInfo(DFSClient.java:1780)
at 
org.apache.hadoop.hdfs.DistributedFileSystem$17.doCall(DistributedFileSystem.java:1124)
at 
org.apache.hadoop.hdfs.DistributedFileSystem$17.doCall(DistributedFileSystem.java:1120)
at 
org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
at 
org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1120)
at org.apache.hadoop.yarn.util.FSDownload.copy(FSDownload.java:237)
at org.apache.hadoop.yarn.util.FSDownload.call(FSDownload.java:340)
at org.apache.hadoop.yarn.util.FSDownload.call(FSDownload.java:57)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
{noformat}



We can make YARN or DFSClient more tolerant of temporary NN unavailability. 
Given that the call stack is YARN -> DistributedFileSystem -> DFSClient, this can 
be addressed at different layers.

* YARN closes the DistributedFileSystem object when it receives some well-defined 
exception; the next HDFS call then creates a new instance of DistributedFileSystem. 
We would have to fix all the relevant places in YARN, and other HDFS applications 
would need to address this as well.

* DistributedFileSystem detects an aborted DFSClient and creates a new instance of 
DFSClient (a rough sketch of this option follows below). We would need to fix all 
the places where DistributedFileSystem calls DFSClient.

* After the DFSClient gets into the Aborted state, it doesn't have to reject all 
requests; instead it can retry, and if the NN becomes available again it can 
transition back to a healthy state.
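
A rough sketch of the second option only; isAborted() below is a hypothetical 
accessor for the client's aborted state, not an existing DFSClient API:

{code}
// Hypothetical sketch inside DistributedFileSystem, not actual code.
// isAborted() is an assumed accessor; this recreate-on-demand wrapper would have
// to be called everywhere DistributedFileSystem currently uses the dfs field.
private synchronized DFSClient getLiveClient() throws IOException {
  if (dfs.isAborted()) {
    DFSClient aborted = dfs;
    dfs = new DFSClient(getUri(), getConf(), statistics); // fresh client, fresh lease renewer
    org.apache.hadoop.io.IOUtils.cleanup(null, aborted);  // best-effort close of the old one
  }
  return dfs;
}
{code}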

Comments?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (HDFS-1362) Provide volume management functionality for DataNode

2014-10-30 Thread Lei (Eddy) Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-1362?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei (Eddy) Xu resolved HDFS-1362.
-
   Resolution: Fixed
Fix Version/s: 2.7.0

The hot swap drive feature is completed. Thanks for the reviews from [~atm] and 
[~cmccabe]!
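
For reference, a volume swap with the shipped feature is driven through the 
DataNode reconfiguration interface, roughly as follows (host, port, and paths are 
placeholders):

{noformat}
# 1. Edit dfs.datanode.data.dir in the DataNode's hdfs-site.xml to add or remove volumes.
# 2. Ask the DataNode to apply the new configuration online:
hdfs dfsadmin -reconfig datanode dn-host:50020 start
# 3. Poll until the reconfiguration task finishes:
hdfs dfsadmin -reconfig datanode dn-host:50020 status
{noformat}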

> Provide volume management functionality for DataNode
> 
>
> Key: HDFS-1362
> URL: https://issues.apache.org/jira/browse/HDFS-1362
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: datanode
>Affects Versions: 0.23.0
>Reporter: Wang Xu
>Assignee: Wang Xu
> Fix For: 2.7.0
>
> Attachments: DataNode Volume Refreshment in HDFS-1362.pdf, 
> HDFS-1362.4_w7001.txt, HDFS-1362.5.patch, HDFS-1362.6.patch, 
> HDFS-1362.7.patch, HDFS-1362.8.patch, HDFS-1362.txt, 
> Provide_volume_management_for_DN_v1.pdf
>
>
> The current management unit in Hadoop is a node, i.e. if a node fails, it 
> will be kicked out and all the data on the node will be replicated.
> As almost all SATA controllers support hotplug, we add a new command line 
> interface to the datanode, so it can list, add or remove a volume online, which 
> means we can change a disk without node decommission. Moreover, if the failed 
> disk is still readable and the node has enough space, it can migrate data on the 
> disk to other disks in the same node.
> A more detailed design document will be attached.
> The original version in our lab is implemented directly against the 0.20 
> datanode; would it be better to implement it in contrib? Or any other 
> suggestion?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HDFS-7313) Support optional configuration of AES cipher suite on DataTransferProtocol.

2014-10-30 Thread Chris Nauroth (JIRA)
Chris Nauroth created HDFS-7313:
---

 Summary: Support optional configuration of AES cipher suite on 
DataTransferProtocol.
 Key: HDFS-7313
 URL: https://issues.apache.org/jira/browse/HDFS-7313
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: datanode, hdfs-client, security
Reporter: Chris Nauroth
Assignee: Chris Nauroth


HDFS-6606 introduced the use of AES for encryption of DataTransferProtocol. This 
issue proposes introducing a configuration property that lets administrators 
control whether AES is used or the existing support for 3DES and RC4 is used.
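
For illustration, such a knob might look roughly like the following in 
hdfs-site.xml (the property name and value here are illustrative of the proposal, 
not a committed configuration):

{code}
<!-- Illustrative sketch only: select the AES cipher suite for DataTransferProtocol
     encryption; leaving the property unset would keep the existing 3DES/RC4
     behavior chosen via dfs.encrypt.data.transfer.algorithm. -->
<property>
  <name>dfs.encrypt.data.transfer.cipher.suites</name>
  <value>AES/CTR/NoPadding</value>
</property>
{code}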



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HDFS-7312) Update DistCp v1 to optionally not use tmp location

2014-10-30 Thread Joseph Prosser (JIRA)
Joseph Prosser created HDFS-7312:


 Summary: Update DistCp v1 to optionally not use tmp location
 Key: HDFS-7312
 URL: https://issues.apache.org/jira/browse/HDFS-7312
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: tools
Affects Versions: 2.5.1
Reporter: Joseph Prosser
Assignee: Joseph Prosser
Priority: Minor


DistCp v1 currently copies files to a tmp location and then renames that to the 
specified destination.  This can cause performance issues on filesystems such 
as S3.  A -skiptmp flag will be added to bypass this step and copy directly to 
the destination.  This feature mirrors a similar one added to HBase 
ExportSnapshot [HBASE-9|https://issues.apache.org/jira/browse/HBASE-9]
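
For illustration, usage with the proposed flag might look like this (flag name 
per this proposal; the source and destination URIs are placeholders):

{noformat}
hadoop distcp -skiptmp hdfs://source-nn:8020/data s3n://bucket/data
{noformat}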



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HDFS-7311) TestLeaseRecovery2 sometimes fails in trunk

2014-10-30 Thread Ted Yu (JIRA)
Ted Yu created HDFS-7311:


 Summary: TestLeaseRecovery2 sometimes fails in trunk
 Key: HDFS-7311
 URL: https://issues.apache.org/jira/browse/HDFS-7311
 Project: Hadoop HDFS
  Issue Type: Test
Reporter: Ted Yu
Priority: Minor


From https://builds.apache.org/job/Hadoop-Hdfs-trunk/1917/ :
{code}
REGRESSION:  org.apache.hadoop.hdfs.TestLeaseRecovery2.testHardLeaseRecovery

Error Message:
Call From asf909.gq1.ygridcore.net/67.195.81.153 to localhost:55061 failed on 
connection exception: java.net.ConnectException: Connection refused; For more 
details see:  http://wiki.apache.org/hadoop/ConnectionRefused

Stack Trace:
java.net.ConnectException: Call From asf909.gq1.ygridcore.net/67.195.81.153 to 
localhost:55061 failed on connection exception: java.net.ConnectException: 
Connection refused; For more details see:  
http://wiki.apache.org/hadoop/ConnectionRefused
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at 
sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:599)
at 
org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:529)
at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:493)
at 
org.apache.hadoop.ipc.Client$Connection.setupConnection(Client.java:607)
at 
org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:705)
at org.apache.hadoop.ipc.Client$Connection.access$2800(Client.java:368)
at org.apache.hadoop.ipc.Client.getConnection(Client.java:1521)
at org.apache.hadoop.ipc.Client.call(Client.java:1438)
at org.apache.hadoop.ipc.Client.call(Client.java:1399)
at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:230)
at com.sun.proxy.$Proxy19.create(Unknown Source)
at 
org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.create(ClientNamenodeProtocolTranslatorPB.java:295)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at 
org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:186)
at 
org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:101)
at com.sun.proxy.$Proxy20.create(Unknown Source)
at 
org.apache.hadoop.hdfs.DFSOutputStream.newStreamForCreate(DFSOutputStream.java:1694)
at org.apache.hadoop.hdfs.DFSClient.create(DFSClient.java:1654)
at org.apache.hadoop.hdfs.DFSClient.create(DFSClient.java:1579)
at 
org.apache.hadoop.hdfs.DistributedFileSystem$6.doCall(DistributedFileSystem.java:397)
at 
org.apache.hadoop.hdfs.DistributedFileSystem$6.doCall(DistributedFileSystem.java:393)
at 
org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
at 
org.apache.hadoop.hdfs.DistributedFileSystem.create(DistributedFileSystem.java:393)
at 
org.apache.hadoop.hdfs.DistributedFileSystem.create(DistributedFileSystem.java:337)
at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:908)
at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:889)
at 
org.apache.hadoop.hdfs.TestLeaseRecovery2.testHardLeaseRecovery(TestLeaseRecovery2.java:276)


FAILED:  
org.apache.hadoop.hdfs.TestLeaseRecovery2.org.apache.hadoop.hdfs.TestLeaseRecovery2

Error Message:
Test resulted in an unexpected exit

Stack Trace:
java.lang.AssertionError: Test resulted in an unexpected exit
at 
org.apache.hadoop.hdfs.MiniDFSCluster.shutdown(MiniDFSCluster.java:1709)
at 
org.apache.hadoop.hdfs.MiniDFSCluster.shutdown(MiniDFSCluster.java:1696)
at 
org.apache.hadoop.hdfs.TestLeaseRecovery2.tearDown(TestLeaseRecovery2.java:105)
{code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Build failed in Jenkins: Hadoop-Hdfs-trunk #1917

2014-10-30 Thread Apache Jenkins Server
See https://builds.apache.org/job/Hadoop-Hdfs-trunk/1917/

Changes:

[vinodkv] MAPREDUCE-6142. Fixed test failures in TestJobHistoryEventHandler and 
TestMRTimelineEventHandling. Contributed by Zhijie Shen.

[kasha] YARN-2742. FairSchedulerConfiguration should allow extra spaces between 
value and unit. (Wei Yan via kasha)

[kihwal] MAPREDUCE-6022. map_input_file is missing from streaming job

[brandonli] HADOOP-11195. Move Id-Name mapping in NFS to the hadoop-common area 
for better maintenance. Contributed by Yongjun Zhang

[cnauroth] HADOOP-11068. Match hadoop.auth cookie format to jetty output. 
Contributed by Gregory Chanan.

[zjshen] YARN-2769. Fixed the problem that timeline domain is not set in 
distributed shell AM when using shell_command on Windows. Contributed by Varun 
Vasudev.

[cmccabe] HADOOP-11186: documentation should talk about 
hadoop.htrace.spanreceiver.classes, not hadoop.trace.spanreceiver.classes 
(cmccabe)

[cmccabe] HDFS-7287. The OfflineImageViewer (OIV) can output invalid XML 
depending on the filename (Ravi Prakash via Colin P. McCabe)

[kihwal] HDFS-7300. HDFS-7300. The getMaxNodesPerRack() method in

[jing9] HDFS-7305. NPE seen in wbhdfs FS while running SLive. Contributed by 
Jing Zhao.

[wheat9] HADOOP-11247. Fix a couple javac warnings in NFS. Contributed by 
Brandon Li.

[yliu] HADOOP-11216. Improve Openssl library finding. (cmccabe via yliu)

[shv] HDFS-7263. Snapshot read can reveal future bytes for appended files. 
Contributed by Tao Luo.

[kasha] YARN-2712. TestWorkPreservingRMRestart: Augment FS tests with queue and 
headroom checks. (Tsuyoshi Ozawa via kasha)

--
[...truncated 6142 lines...]
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 154.384 sec - 
in org.apache.hadoop.hdfs.qjournal.client.TestQJMWithFaults
Running org.apache.hadoop.hdfs.qjournal.client.TestQuorumCall
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.228 sec - in 
org.apache.hadoop.hdfs.qjournal.client.TestQuorumCall
Running org.apache.hadoop.hdfs.qjournal.TestMiniJournalCluster
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.621 sec - in 
org.apache.hadoop.hdfs.qjournal.TestMiniJournalCluster
Running org.apache.hadoop.hdfs.qjournal.TestNNWithQJM
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.592 sec - in 
org.apache.hadoop.hdfs.qjournal.TestNNWithQJM
Running org.apache.hadoop.hdfs.TestConnCache
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.587 sec - in 
org.apache.hadoop.hdfs.TestConnCache
Running org.apache.hadoop.hdfs.TestDFSStorageStateRecovery
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 57.795 sec - in 
org.apache.hadoop.hdfs.TestDFSStorageStateRecovery
Running org.apache.hadoop.hdfs.TestFileAppend
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 13.356 sec - in 
org.apache.hadoop.hdfs.TestFileAppend
Running org.apache.hadoop.hdfs.TestFileAppend3
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.006 sec - in 
org.apache.hadoop.hdfs.TestFileAppend3
Running org.apache.hadoop.hdfs.TestClientReportBadBlock
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.967 sec - in 
org.apache.hadoop.hdfs.TestClientReportBadBlock
Running org.apache.hadoop.hdfs.TestParallelShortCircuitReadNoChecksum
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 16.175 sec - in 
org.apache.hadoop.hdfs.TestParallelShortCircuitReadNoChecksum
Running org.apache.hadoop.hdfs.TestFileCreation
Tests run: 23, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 380.674 sec - 
in org.apache.hadoop.hdfs.TestFileCreation
Running org.apache.hadoop.hdfs.TestDFSRemove
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 14.924 sec - in 
org.apache.hadoop.hdfs.TestDFSRemove
Running org.apache.hadoop.hdfs.TestHdfsAdmin
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.527 sec - in 
org.apache.hadoop.hdfs.TestHdfsAdmin
Running org.apache.hadoop.hdfs.TestDFSUtil
Tests run: 30, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.861 sec - in 
org.apache.hadoop.hdfs.TestDFSUtil
Running org.apache.hadoop.hdfs.TestDatanodeBlockScanner
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 160.985 sec - 
in org.apache.hadoop.hdfs.TestDatanodeBlockScanner
Running org.apache.hadoop.hdfs.TestWriteBlockGetsBlockLengthHint
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.732 sec - in 
org.apache.hadoop.hdfs.TestWriteBlockGetsBlockLengthHint
Running org.apache.hadoop.hdfs.TestDataTransferKeepalive
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 15.533 sec - in 
org.apache.hadoop.hdfs.TestDataTransferKeepalive
Running org.apache.hadoop.hdfs.TestLease
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.072 sec - in 
org.apache.hadoop.hdfs.TestLease
Running org.apache.hadoop.hdfs.TestEncryptionZonesWithKMS
Tests run: 19, Failures: 0, Errors: 0, Skipped: 0,

Hadoop-Hdfs-trunk - Build # 1917 - Failure

2014-10-30 Thread Apache Jenkins Server
See https://builds.apache.org/job/Hadoop-Hdfs-trunk/1917/

###
## LAST 60 LINES OF THE CONSOLE 
###
[...truncated 6335 lines...]
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (dist-enforce) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-site-plugin:3.3:attach-descriptor (attach-descriptor) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-javadoc-plugin:2.8.1:jar (module-javadocs) @ 
hadoop-hdfs-project ---
[INFO] Not executing Javadoc as the project is not a Java classpath-capable 
package
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (depcheck) @ hadoop-hdfs-project 
---
[INFO] 
[INFO] --- maven-checkstyle-plugin:2.6:checkstyle (default-cli) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- findbugs-maven-plugin:2.3.2:findbugs (default-cli) @ 
hadoop-hdfs-project ---
[INFO] ** FindBugsMojo execute ***
[INFO] canGenerate is false
[INFO] 
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS  FAILURE [  02:24 h]
[INFO] Apache Hadoop HttpFS .. SKIPPED
[INFO] Apache Hadoop HDFS BookKeeper Journal . SKIPPED
[INFO] Apache Hadoop HDFS-NFS  SKIPPED
[INFO] Apache Hadoop HDFS Project  SUCCESS [  2.196 s]
[INFO] 
[INFO] BUILD FAILURE
[INFO] 
[INFO] Total time: 02:24 h
[INFO] Finished at: 2014-10-30T13:59:42+00:00
[INFO] Final Memory: 65M/829M
[INFO] 
[ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-surefire-plugin:2.16:test (default-test) on 
project hadoop-hdfs: There are test failures.
[ERROR] 
[ERROR] Please refer to 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/hadoop-hdfs-project/hadoop-hdfs/target/surefire-reports
 for the individual test results.
[ERROR] -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e 
switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please 
read the following articles:
[ERROR] [Help 1] 
http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
Build step 'Execute shell' marked build as failure
Archiving artifacts
Recording test results
Updating HDFS-7300
Updating MAPREDUCE-6142
Updating YARN-2769
Updating HADOOP-11216
Updating HDFS-7305
Updating HADOOP-11068
Updating HADOOP-11247
Updating YARN-2742
Updating HADOOP-11186
Updating HDFS-7263
Updating YARN-2712
Updating HADOOP-11195
Updating MAPREDUCE-6022
Updating HDFS-7287
Sending e-mails to: hdfs-dev@hadoop.apache.org
Email was triggered for: Failure
Sending email for trigger: Failure



###
## FAILED TESTS (if any) 
##
5 tests failed.
REGRESSION:  
org.apache.hadoop.hdfs.TestLeaseRecovery2.testHardLeaseRecoveryAfterNameNodeRestart2

Error Message:
org.apache.hadoop.util.ExitUtil$ExitException: Could not sync enough journals 
to persistent storage due to No journals available to flush. Unsynced 
transactions: 1
 at org.apache.hadoop.util.ExitUtil.terminate(ExitUtil.java:126)
 at org.apache.hadoop.hdfs.server.namenode.FSEditLog.logSync(FSEditLog.java:624)
 at 
org.apache.hadoop.hdfs.server.namenode.FSEditLog.endCurrentLogSegment(FSEditLog.java:1252)
 at org.apache.hadoop.hdfs.server.namenode.FSEditLog.close(FSEditLog.java:357)
 at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.stopActiveServices(FSNamesystem.java:1267)
 at 
org.apache.hadoop.hdfs.server.namenode.NameNode$NameNodeHAContext.stopActiveServices(NameNode.java:1676)
 at 
org.apache.hadoop.hdfs.server.namenode.ha.ActiveState.exitState(ActiveState.java:70)
 at org.apache.hadoop.hdfs.server.namenode.NameNode.stop(NameNode.java:816)
 at 
org.apache.hadoop.hdfs.MiniDFSCluster.shutdownNameNode(MiniDFSCluster.java:1758)
 at 
org.apache.hadoop.hdfs.MiniDFSCluster.restartNameNode(MiniDFSCluster.java:1809)
 at 
org.apache.hadoop.hdfs.MiniDFSCluster.restartNameNode(MiniDFSCluster.java:1789)
 at 
org.apache.hadoop.hdfs.TestLeaseRecovery2.hardLeaseRecoveryRestartHelper(TestLeaseRecovery2.java:494)
 at 
org.apache.hadoop.hdfs.TestLeaseRecovery2.testHardLeaseRecoveryAfterNameNodeRestart2(TestLeaseRecovery2.java:427)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
 at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
 at java.lang.reflect.Method.invoke(Method.java:597)
 at 
org.j

[jira] [Created] (HDFS-7310) Mover can give first priority to local DN if it has target storage type available in local DN

2014-10-30 Thread Uma Maheswara Rao G (JIRA)
Uma Maheswara Rao G created HDFS-7310:
-

 Summary: Mover can give first priority to local DN if it has 
target storage type available in local DN
 Key: HDFS-7310
 URL: https://issues.apache.org/jira/browse/HDFS-7310
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: balancer & mover
Affects Versions: 3.0.0
Reporter: Uma Maheswara Rao G
Assignee: Vinayakumar B


Currently the Mover logic may move blocks to any DN that has the target storage 
type. But if the source DN has the target storage type, then the Mover can give 
highest priority to the local DN. If the local DN does not contain the target 
storage type, then it can assign to any DN, as the current logic does.
  This is just a thought; I have not gone through the code fully yet.
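
A rough pseudocode sketch of the proposed ordering (the helper names below are 
hypothetical, not the actual Mover/Dispatcher code):

{code}
// Hypothetical pseudocode only.
boolean scheduleMoveReplica(DBlock block, Source source, StorageType targetType) {
  // First preference: a storage of the target type on the same DataNode as the source.
  if (scheduleMoveToLocalStorage(block, source, targetType)) {
    return true;
  }
  // Otherwise fall back to the current behavior: any DataNode offering the target type.
  return scheduleMoveToRemoteStorage(block, source, targetType);
}
{code}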

Thoughts?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)