[jira] [Created] (HDFS-6499) can't tell why FileJournalManager's call to java.io.File.renameTo() fails

2014-06-06 Thread Yongjun Zhang (JIRA)
Yongjun Zhang created HDFS-6499:
---

 Summary: can't tell why FileJournalManager's call to 
java.io.File.renameTo() fails
 Key: HDFS-6499
 URL: https://issues.apache.org/jira/browse/HDFS-6499
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: namenode
Affects Versions: 2.4.0
Reporter: Yongjun Zhang
Assignee: Yongjun Zhang


java.io.File's renameTo() method returns a boolean (true for success, false for
failure). When a call to this method fails, the caller can't tell why it
failed.

Filing this jira to address the issue by using the Hadoop NativeIO alternative.
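
To illustrate the gap, here is a minimal sketch (hypothetical paths; it
contrasts java.io.File.renameTo() with the JDK's java.nio.file.Files.move(),
which does report a cause, while the fix proposed here is the Hadoop NativeIO
route):

{noformat}
import java.io.File;
import java.io.IOException;
import java.nio.file.Files;

public class RenameFailureExample {
  public static void main(String[] args) throws IOException {
    // Hypothetical paths, for illustration only.
    File src = new File("/tmp/edits_inprogress");
    File dst = new File("/tmp/edits_finalized");

    // java.io.File.renameTo() collapses every failure mode (missing
    // source, permission denied, cross-device link, ...) into "false".
    if (!src.renameTo(dst)) {
      System.err.println("rename failed, but the cause is lost");
    }

    // java.nio.file.Files.move() instead throws an IOException subclass
    // (e.g. NoSuchFileException, AccessDeniedException) naming the cause.
    Files.move(src.toPath(), dst.toPath());
  }
}
{noformat}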





--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (HDFS-6498) Support squash and range in NFS static user id mapping

2014-06-06 Thread Brandon Li (JIRA)
Brandon Li created HDFS-6498:


 Summary: Support squash and range in NFS static user id mapping 
 Key: HDFS-6498
 URL: https://issues.apache.org/jira/browse/HDFS-6498
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: nfs
Reporter: Brandon Li


HDFS-6435 adds a static user/group name-to-id mapping, which is a one-to-one 
mapping.

To make this feature easier to use, we should support squash and range-based 
mapping, as in traditional NFS configuration (e.g., 
http://manpages.ubuntu.com/manpages/hardy/man5/exports.5.html):
{noformat}
# Mapping for client foobar:
#    remote     local
uid  0-99       -      # squash these
uid  100-500    1000   # map 100-500 to 1000-1500
gid  0-49       -      # squash these
gid  50-100     700    # map 50-100 to 700-750
{noformat}
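
To make the intended semantics concrete, here is a hypothetical sketch of how
a range entry such as "uid 100-500 1000" could be applied (the class and
method names are made up here; this is not the NFS gateway implementation):

{noformat}
/** Hypothetical sketch of a single range entry, e.g. "uid 100-500 1000". */
public class StaticIdRange {
  private final int remoteStart, remoteEnd, localStart;

  public StaticIdRange(int remoteStart, int remoteEnd, int localStart) {
    this.remoteStart = remoteStart;
    this.remoteEnd = remoteEnd;
    this.localStart = localStart;
  }

  /** Shifts a remote id into the local range; ids outside pass through. */
  public int map(int remoteId) {
    if (remoteId < remoteStart || remoteId > remoteEnd) {
      return remoteId;
    }
    return localStart + (remoteId - remoteStart);
  }
}
{noformat}

A squash entry ("-" as the local value) would instead map every id in the
range to the anonymous user/group id.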



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (HDFS-6497) Make TestAvailableSpaceVolumeChoosingPolicy deterministic

2014-06-06 Thread Colin Patrick McCabe (JIRA)
Colin Patrick McCabe created HDFS-6497:
--

 Summary: Make TestAvailableSpaceVolumeChoosingPolicy deterministic
 Key: HDFS-6497
 URL: https://issues.apache.org/jira/browse/HDFS-6497
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: test
Affects Versions: 2.4.0
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe
Priority: Minor
 Attachments: HDFS-6497.001.patch

We should make TestAvailableSpaceVolumeChoosingPolicy deterministic to avoid 
random failures.  We can do this by setting the seed for the random number 
generator explicitly in the test.
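
The underlying idea, as a standalone sketch (how the seed is actually plumbed
into the policy under test is not shown here):

{noformat}
import java.util.Random;

public class FixedSeedExample {
  public static void main(String[] args) {
    // A constant seed makes the sequence reproducible, so a test driven
    // by this Random can never fail "randomly" from run to run.
    Random r1 = new Random(42L);
    Random r2 = new Random(42L);
    for (int i = 0; i < 10; i++) {
      if (r1.nextInt() != r2.nextInt()) {
        throw new AssertionError("same seed must yield the same sequence");
      }
    }
    System.out.println("identical sequences, as expected");
  }
}
{noformat}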



--
This message was sent by Atlassian JIRA
(v6.2#6252)


Re: hedged read bug

2014-06-06 Thread Chris Nauroth
Hello Lei,

There is a known bug in 2.4.0 that can cause hedged reads to hang.  I fixed
it in HDFS-6231:

https://issues.apache.org/jira/browse/HDFS-6231

This patch will be included in the forthcoming 2.4.1 release.  I'm curious
to see if applying this patch fixes the problem for you.  Can you try it
and let us know?  Thank you!

Chris Nauroth
Hortonworks
http://hortonworks.com/



On Thu, Jun 5, 2014 at 8:34 PM, lei liu  wrote:

> I use hadoop2.4.
>
> When I use "hedged read" and there is only one live datanode, if the read
> from that datanode throws TimeoutException and ChecksumException, the
> client will wait forever.
>
> Example test case:
>   @Test
>   public void testException() throws IOException, InterruptedException,
> ExecutionException {
> Configuration conf = new Configuration();
> int numHedgedReadPoolThreads = 5;
> final int hedgedReadTimeoutMillis = 50;
> conf.setInt(DFSConfigKeys.DFS_DFSCLIENT_HEDGED_READ_THREADPOOL_SIZE,
> numHedgedReadPoolThreads);
> conf.setLong(DFSConfigKeys.DFS_DFSCLIENT_HEDGED_READ_THRESHOLD_MILLIS,
>   hedgedReadTimeoutMillis);
> // Set up the InjectionHandler
> DFSClientFaultInjector.instance =
> Mockito.mock(DFSClientFaultInjector.class);
> DFSClientFaultInjector injector = DFSClientFaultInjector.instance;
> // make preads ChecksumException
> Mockito.doAnswer(new Answer<Void>() {
>   @Override
>   public Void answer(InvocationOnMock invocation) throws Throwable {
> if(true) {
>   Thread.sleep(hedgedReadTimeoutMillis + 10);
>   throw new ChecksumException("test", 100);
> }
> return null;
>   }
> }).when(injector).fetchFromDatanodeException();
>
> MiniDFSCluster cluster = new
> MiniDFSCluster.Builder(conf).numDataNodes(3).format(true).build();
> DistributedFileSystem fileSys = cluster.getFileSystem();
> DFSClient dfsClient = fileSys.getClient();
> DFSHedgedReadMetrics metrics = dfsClient.getHedgedReadMetrics();
>
> try {
>   Path file = new Path("/hedgedReadException.dat");
>   FSDataOutputStream  output = fileSys.create(file,(short)1);
>   byte[] data = new byte[64 * 1024];
>   output.write(data);
>   output.flush();
>   output.write(data);
>   output.flush();
>   output.write(data);
>   output.flush();
>   output.close();
>   byte[] buffer = new byte[64 * 1024];
>   FSDataInputStream  input = fileSys.open(file);
>   input.read(0, buffer, 0, 1024);
>   input.close();
>   assertTrue(metrics.getHedgedReadOps() == 1);
>   assertTrue(metrics.getHedgedReadWins() == 1);
> } finally {
>   fileSys.close();
>   cluster.shutdown();
>   Mockito.reset(injector);
> }
>   }
>
>
> The code of the actualGetFromOneDataNode() method that calls the
> fetchFromDatanodeException() method is shown below:
>   try {
> DFSClientFaultInjector.get().fetchFromDatanodeException();
> Token<BlockTokenIdentifier> blockToken = block.getBlockToken();
> int len = (int) (end - start + 1);
> reader = new BlockReaderFactory(dfsClient.getConf()).
> setInetSocketAddress(targetAddr).
> setRemotePeerFactory(dfsClient).
> setDatanodeInfo(chosenNode).
> setFileName(src).
> setBlock(block.getBlock()).
> setBlockToken(blockToken).
> setStartOffset(start).
> setVerifyChecksum(verifyChecksum).
> setClientName(dfsClient.clientName).
> setLength(len).
> setCachingStrategy(curCachingStrategy).
> setAllowShortCircuitLocalReads(allowShortCircuitLocalReads).
> setClientCacheContext(dfsClient.getClientContext()).
> setUserGroupInformation(dfsClient.ugi).
> setConfiguration(dfsClient.getConfiguration()).
> build();
> int nread = reader.readAll(buf, offset, len);
> if (nread != len) {
>   throw new IOException("truncated return from reader.read(): " +
> "excpected " + len + ", got " + nread);
> }
> return;
>   } catch (ChecksumException e) {
> String msg = "fetchBlockByteRange(). Got a checksum exception for "
> + src + " at " + block.getBlock() + ":" + e.getPos() + " from "
> + chosenNode;
> DFSClient.LOG.warn(msg);
> // we want to remember what we have tried
> addIntoCorruptedBlockMap(block.getBlock(), chosenNode,
> corruptedBlockMap);
> addToDeadNodes(chosenNode);
> throw new IOException(msg);
>   }
>


Build failed in Jenkins: Hadoop-Hdfs-trunk #1766

2014-06-06 Thread Apache Jenkins Server
See https://builds.apache.org/job/Hadoop-Hdfs-trunk/1766/

Changes:

[umamahesh] HDFS-6464. Support multiple xattr.name parameters for WebHDFS 
getXAttrs. Contributed by Yi Liu.

[cmccabe] HDFS-6369. Document that BlockReader#available() can return more 
bytes than are remaining in the block (Ted Yu via Colin Patrick McCabe)

[junping_du] YARN-1977. Add tests on getApplicationRequest with filtering start 
time range. (Contributed by Junping Du)

--
[...truncated 12844 lines...]
Running org.apache.hadoop.hdfs.server.namenode.TestLargeDirectoryDelete
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 102.319 sec - 
in org.apache.hadoop.hdfs.server.namenode.TestLargeDirectoryDelete
Running org.apache.hadoop.hdfs.server.namenode.TestNameCache
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.17 sec - in 
org.apache.hadoop.hdfs.server.namenode.TestNameCache
Running org.apache.hadoop.hdfs.server.namenode.TestNameNodeResourcePolicy
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.319 sec - in 
org.apache.hadoop.hdfs.server.namenode.TestNameNodeResourcePolicy
Running org.apache.hadoop.hdfs.server.namenode.TestXAttrConfigFlag
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 11.4 sec - in 
org.apache.hadoop.hdfs.server.namenode.TestXAttrConfigFlag
Running org.apache.hadoop.hdfs.server.namenode.TestSaveNamespace
Tests run: 13, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 13.417 sec - 
in org.apache.hadoop.hdfs.server.namenode.TestSaveNamespace
Running org.apache.hadoop.hdfs.server.namenode.TestNamenodeCapacityReport
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.885 sec - in 
org.apache.hadoop.hdfs.server.namenode.TestNamenodeCapacityReport
Running org.apache.hadoop.hdfs.server.namenode.TestCheckPointForSecurityTokens
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.953 sec - in 
org.apache.hadoop.hdfs.server.namenode.TestCheckPointForSecurityTokens
Running org.apache.hadoop.hdfs.server.namenode.TestNameNodeResourceChecker
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.478 sec - in 
org.apache.hadoop.hdfs.server.namenode.TestNameNodeResourceChecker
Running org.apache.hadoop.hdfs.server.namenode.TestNameNodeAcl
Tests run: 61, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.631 sec - in 
org.apache.hadoop.hdfs.server.namenode.TestNameNodeAcl
Running org.apache.hadoop.hdfs.server.namenode.TestFSPermissionChecker
Tests run: 15, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.34 sec - in 
org.apache.hadoop.hdfs.server.namenode.TestFSPermissionChecker
Running org.apache.hadoop.hdfs.server.namenode.TestAddBlock
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 23.623 sec - in 
org.apache.hadoop.hdfs.server.namenode.TestAddBlock
Running org.apache.hadoop.hdfs.server.namenode.TestLeaseManager
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.753 sec - in 
org.apache.hadoop.hdfs.server.namenode.TestLeaseManager
Running org.apache.hadoop.hdfs.server.namenode.TestCheckpoint
Tests run: 38, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 76.767 sec - 
in org.apache.hadoop.hdfs.server.namenode.TestCheckpoint
Running org.apache.hadoop.hdfs.server.namenode.TestFSNamesystem
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.308 sec - in 
org.apache.hadoop.hdfs.server.namenode.TestFSNamesystem
Running org.apache.hadoop.hdfs.server.namenode.TestNNStorageRetentionManager
Tests run: 9, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.135 sec - in 
org.apache.hadoop.hdfs.server.namenode.TestNNStorageRetentionManager
Running org.apache.hadoop.hdfs.server.namenode.TestBlockUnderConstruction
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.55 sec - in 
org.apache.hadoop.hdfs.server.namenode.TestBlockUnderConstruction
Running org.apache.hadoop.hdfs.server.namenode.ha.TestFailureOfSharedDir
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.628 sec - in 
org.apache.hadoop.hdfs.server.namenode.ha.TestFailureOfSharedDir
Running org.apache.hadoop.hdfs.server.namenode.ha.TestStandbyIsHot
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 18.784 sec - in 
org.apache.hadoop.hdfs.server.namenode.ha.TestStandbyIsHot
Running org.apache.hadoop.hdfs.server.namenode.ha.TestDFSUpgradeWithHA
Tests run: 9, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 29.924 sec - in 
org.apache.hadoop.hdfs.server.namenode.ha.TestDFSUpgradeWithHA
Running org.apache.hadoop.hdfs.server.namenode.ha.TestHAStateTransitions
Tests run: 11, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 77.614 sec - 
in org.apache.hadoop.hdfs.server.namenode.ha.TestHAStateTransitions
Running org.apache.hadoop.hdfs.server.namenode.ha.TestXAttrsWithHA
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.691 sec - in 
org.apache.hadoop.hdfs.server.namenode.ha.TestXAttrsWithHA
Running org.apache.hadoop.hdfs.server.na

Hadoop-Hdfs-trunk - Build # 1766 - Still Failing

2014-06-06 Thread Apache Jenkins Server
See https://builds.apache.org/job/Hadoop-Hdfs-trunk/1766/

###
## LAST 60 LINES OF THE CONSOLE 
###
[...truncated 13037 lines...]
[INFO] --- maven-antrun-plugin:1.7:run (create-testdirs) @ hadoop-hdfs-project 
---
[INFO] Executing tasks

main:
[mkdir] Created dir: 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/trunk/hadoop-hdfs-project/target/test-dir
[INFO] Executed tasks
[INFO] 
[INFO] --- maven-source-plugin:2.1.2:jar-no-fork (hadoop-java-sources) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-source-plugin:2.1.2:test-jar-no-fork (hadoop-java-sources) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (dist-enforce) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-site-plugin:3.3:attach-descriptor (attach-descriptor) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-javadoc-plugin:2.8.1:jar (module-javadocs) @ 
hadoop-hdfs-project ---
[INFO] Not executing Javadoc as the project is not a Java classpath-capable 
package
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (depcheck) @ hadoop-hdfs-project 
---
[INFO] 
[INFO] --- maven-checkstyle-plugin:2.6:checkstyle (default-cli) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- findbugs-maven-plugin:2.3.2:findbugs (default-cli) @ 
hadoop-hdfs-project ---
[INFO] ** FindBugsMojo execute ***
[INFO] canGenerate is false
[INFO] 
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS  FAILURE 
[2:18:03.047s]
[INFO] Apache Hadoop HttpFS .. SKIPPED
[INFO] Apache Hadoop HDFS BookKeeper Journal . SKIPPED
[INFO] Apache Hadoop HDFS-NFS  SKIPPED
[INFO] Apache Hadoop HDFS Project  SUCCESS [4.773s]
[INFO] 
[INFO] BUILD FAILURE
[INFO] 
[INFO] Total time: 2:18:09.772s
[INFO] Finished at: Fri Jun 06 13:55:30 UTC 2014
[INFO] Final Memory: 31M/331M
[INFO] 
[ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-surefire-plugin:2.16:test (default-test) on 
project hadoop-hdfs: There are test failures.
[ERROR] 
[ERROR] Please refer to 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/trunk/hadoop-hdfs-project/hadoop-hdfs/target/surefire-reports
 for the individual test results.
[ERROR] -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e 
switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please 
read the following articles:
[ERROR] [Help 1] 
http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
Build step 'Execute shell' marked build as failure
Archiving artifacts
Recording test results
Updating HDFS-6369
Updating YARN-1977
Updating HDFS-6464
Sending e-mails to: hdfs-dev@hadoop.apache.org
Email was triggered for: Failure
Sending email for trigger: Failure



###
## FAILED TESTS (if any) 
##
1 tests failed.
FAILED:  
org.apache.hadoop.hdfs.server.datanode.TestBPOfferService.testBPInitErrorHandling

Error Message:
expected:<2> but was:<1>

Stack Trace:
java.lang.AssertionError: expected:<2> but was:<1>
at org.junit.Assert.fail(Assert.java:88)
at org.junit.Assert.failNotEquals(Assert.java:743)
at org.junit.Assert.assertEquals(Assert.java:118)
at org.junit.Assert.assertEquals(Assert.java:555)
at org.junit.Assert.assertEquals(Assert.java:542)
at 
org.apache.hadoop.hdfs.server.datanode.TestBPOfferService.testBPInitErrorHandling(TestBPOfferService.java:334)




Re: Compatibility Yarn 2.4 with HDFS 2.0

2014-06-06 Thread 张鹏
I also tested using an HDFS 2.0 client to access an HDFS 2.4 server; this is
also incompatible:
"Incorrect header or version mismatch from 10.2.201.245:59310 got version
7 expected version 9"

Does this mean that if we upgrade our production cluster from 2.0 to 2.4, all
clients must be rebuilt?

Any suggestions on updating like this?

--
Thanks,
Peng


On 6/6/14 5:06 PM, "张鹏" wrote:

>You mean YARN will call some FileSystem interfaces that did not exist in
>2.0?
>I ask because I think YARN and MR depend on the FileSystem interface of
>hadoop-common.
>
>I want to hack as below:
>
> ---> HDFS 2.0 > Hadoop-common 2.0
>   /
>YARN and MR
>   \
> --->Hadoop-common 2.4
>
>So YARN and MR can use the new interfaces added in hadoop-common 2.4, while
>the HDFS client uses the old IPC implementation.
>
>--
>Thanks,
>Peng
>
>On 6/6/14 4:45 PM, "Yu Azuryy" wrote:
>
>>Even if you do this, it will be unstable, because YARN 2.4 added some IPC
>>interfaces, so HDFS must have the related interfaces.
>>
>>
>>On Fri, Jun 6, 2014 at 4:22 PM, 张鹏  wrote:
>>
>>> Hi all,
>>>
>>> I want to upgrade to YARN 2.4 only, but when it accesses HDFS, the IPC
>>> server version is mismatched.
>>>
>>> HDFS (IPC v7) can't respond to YARN (IPC v9), with the error log below:
>>> WARN org.apache.hadoop.ipc.Server: Incorrect header or version mismatch
>>> from 127.0.0.1:47957 got version 9 expected version 7
>>>
>>> I want to know whether it is possible to do this.
>>>
>>> Maybe I can change the pom to depend on HDFS 2.0 and use the Maven Shade
>>> plugin to make hadoop-common 2.4 and 2.0 work together.
>>>
>>> Any suggestions?
>>>
>>> --
>>> Thanks,
>>> Peng
>>>
>>>
>



Re: Compatibility Yarn 2.4 with HDFS 2.0

2014-06-06 Thread 张鹏
You mean YARN will call some FileSystem interfaces that did not exist in
2.0?
I ask because I think YARN and MR depend on the FileSystem interface of
hadoop-common.

I want to hack as below:

 ---> HDFS 2.0 > Hadoop-common 2.0
   /
YARN and MR
   \
 --->Hadoop-common 2.4

So YARN and MR can use the new interfaces added in hadoop-common 2.4, while
the HDFS client uses the old IPC implementation.

--
Thanks,
Peng

On 6/6/14 4:45 PM, "Yu Azuryy" wrote:

>Even if you do this, it will be unstable, because YARN 2.4 added some IPC
>interfaces, so HDFS must have the related interfaces.
>
>
>On Fri, Jun 6, 2014 at 4:22 PM, 张鹏  wrote:
>
>> Hi all,
>>
>> I want to upgrade to YARN 2.4 only, but when it accesses HDFS, the IPC
>> server version is mismatched.
>>
>> HDFS (IPC v7) can't respond to YARN (IPC v9), with the error log below:
>> WARN org.apache.hadoop.ipc.Server: Incorrect header or version mismatch
>> from 127.0.0.1:47957 got version 9 expected version 7
>>
>> I want to know whether it is possible to do this.
>>
>> Maybe I can change the pom to depend on HDFS 2.0 and use the Maven Shade
>> plugin to make hadoop-common 2.4 and 2.0 work together.
>>
>> Any suggestions?
>>
>> --
>> Thanks,
>> Peng
>>
>>



[jira] [Created] (HDFS-6496) WebHDFS cannot open file

2014-06-06 Thread Fengdong Yu (JIRA)
Fengdong Yu created HDFS-6496:
-

 Summary: WebHDFS cannot open file
 Key: HDFS-6496
 URL: https://issues.apache.org/jira/browse/HDFS-6496
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Affects Versions: 2.4.0
Reporter: Fengdong Yu


WebHDFS cannot open the file from the NameNode web UI. I attached a screenshot.
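
For reference, the REST call behind the UI's file-open link is the standard
WebHDFS OPEN operation, e.g. (host, port, and path are placeholders):

{noformat}
curl -i -L "http://<namenode-host>:50070/webhdfs/v1/<path>?op=OPEN"
{noformat}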



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Resolved] (HDFS-6495) In some case, the hedged read will lead to client infinite wait.

2014-06-06 Thread Fengdong Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6495?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Fengdong Yu resolved HDFS-6495.
---

Resolution: Duplicate

Duplicate of HDFS-6494

> In some case, the  hedged read will lead to client  infinite wait.
> --
>
> Key: HDFS-6495
> URL: https://issues.apache.org/jira/browse/HDFS-6495
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs-client
>Affects Versions: 2.4.0
>Reporter: LiuLei
>
> When I use "hedged read" and there is only one live datanode, if the read
> from that datanode throws TimeoutException and ChecksumException, the client
> will wait forever.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


Re: Compatibility Yarn 2.4 with HDFS 2.0

2014-06-06 Thread Yu Azuryy
Even if you do this, it will be unstable, because YARN 2.4 added some IPC
interfaces, so HDFS must have the related interfaces.


On Fri, Jun 6, 2014 at 4:22 PM, 张鹏  wrote:

> Hi all,
>
> I want to upgrade to YARN 2.4 only, but when it accesses HDFS, the IPC
> server version is mismatched.
>
> HDFS (IPC v7) can't respond to YARN (IPC v9), with the error log below:
> WARN org.apache.hadoop.ipc.Server: Incorrect header or version mismatch
> from 127.0.0.1:47957 got version 9 expected version 7
>
> I want to know whether it is possible to do this.
>
> Maybe I can change the pom to depend on HDFS 2.0 and use the Maven Shade
> plugin to make hadoop-common 2.4 and 2.0 work together.
>
> Any suggestions?
>
> --
> Thanks,
> Peng
>
>


[jira] [Created] (HDFS-6495) In some case, the hedged read will lead to client infinite wait.

2014-06-06 Thread LiuLei (JIRA)
LiuLei created HDFS-6495:


 Summary: In some case, the  hedged read will lead to client  
infinite wait.
 Key: HDFS-6495
 URL: https://issues.apache.org/jira/browse/HDFS-6495
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: hdfs-client
Affects Versions: 2.4.0
Reporter: LiuLei


When I use "hedged read" and there is only one live datanode, if the read from
that datanode throws TimeoutException and ChecksumException, the client will
wait forever.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (HDFS-6494) In some case, the hedged read will lead to client infinite wait.

2014-06-06 Thread LiuLei (JIRA)
LiuLei created HDFS-6494:


 Summary: In some case, the  hedged read will lead to client  
infinite wait.
 Key: HDFS-6494
 URL: https://issues.apache.org/jira/browse/HDFS-6494
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: hdfs-client
Affects Versions: 2.4.0
Reporter: LiuLei


When I use "hedged read" and there is only one live datanode, if the read from
that datanode throws TimeoutException and ChecksumException, the client will
wait forever.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


Compatibility Yarn 2.4 with HDFS 2.0

2014-06-06 Thread 张鹏
Hi all,

I want to upgrade to YARN 2.4 only, but when it accesses HDFS, the IPC server
version is mismatched.

HDFS (IPC v7) can't respond to YARN (IPC v9), with the error log below:
WARN org.apache.hadoop.ipc.Server: Incorrect header or version mismatch from 
127.0.0.1:47957 got version 9 expected version 7

I want to know whether it is possible to do this.

Maybe I can change the pom to depend on HDFS 2.0 and use the Maven Shade plugin
to make hadoop-common 2.4 and 2.0 work together.
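
For concreteness, an untested sketch of what that shading could look like in
the pom of a module bundling the 2.0 client (the shadedPattern name is made up
here, and whether relocation alone keeps the old wire protocol working is
exactly the open question):

<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-shade-plugin</artifactId>
  <executions>
    <execution>
      <phase>package</phase>
      <goals>
        <goal>shade</goal>
      </goals>
      <configuration>
        <relocations>
          <!-- Relocate the 2.0 classes so they cannot clash with the
               hadoop-common 2.4 classes on the same classpath. -->
          <relocation>
            <pattern>org.apache.hadoop</pattern>
            <shadedPattern>shaded.hadoop20.org.apache.hadoop</shadedPattern>
          </relocation>
        </relocations>
      </configuration>
    </execution>
  </executions>
</plugin>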

Any suggestions?

--
Thanks,
Peng