Re: Next releases

2013-11-08 Thread Jun Ping Du
Hi Arun,
   Thanks for working out this list, which looks great to me. In addition, I 
would like to add one item for the 2.3 release: YARN-291, which enhances YARN's 
resource elasticity in cloud scenarios and also benefits others, e.g. 
graceful NM decommission (YARN-914) and avoiding job/app regressions (or a 
maintenance mode) during NM rolling upgrade (YARN-671). With great help from Luke, Bikas 
and Vinod, the first and most important piece (YARN-311) is already in. 
Now I am working on the remaining parts, including interfaces (RPC, CLI, REST, etc.) 
and a few enhancements (persistence, support for different policies, etc.), and I am 
optimistic about completing most of the work by the end of 2013. Would you help 
get it in if we can make it on time? :)

Thanks,

Junping

- Original Message -
From: "Arun C Murthy" 
To: common-...@hadoop.apache.org, hdfs-dev@hadoop.apache.org, 
yarn-...@hadoop.apache.org, mapreduce-...@hadoop.apache.org
Sent: Friday, November 8, 2013 10:42:36 AM
Subject: Next releases

Gang,

 Thinking through the next couple of releases here, appreciate f/b.

 # hadoop-2.2.1

 I was looking through the commit logs and there is a *lot* of content here (81 
commits as of 11/7). Some are features/improvements and some are fixes - it's 
really hard to distinguish what is important and what isn't.

 I propose we start with a blank slate (i.e. blow away branch-2.2 and start 
fresh from a copy of branch-2.2.0) and then be very careful and meticulous 
about including only *blocker* fixes in branch-2.2. So, most of the content 
here comes via the next minor release (i.e. hadoop-2.3).

 In the future, we continue to be *very* parsimonious about what gets into a patch 
release (major.minor.patch) - in general, these should be only *blocker* fixes 
or key operational issues.

 # hadoop-2.3
 
 I'd like to propose the following features for YARN/MR to make it into 
hadoop-2.3 and punt the rest to hadoop-2.4 and beyond:
 * Application History Server - This is happening in  a branch and is close; 
with it we can provide a reasonable experience for new frameworks being built 
on top of YARN.
 * Bug-fixes in RM Restart
 * Minimal support for long-running applications (e.g. security) via YARN-896
 * RM Fail-over via ZKFC
 * Anything else?

 HDFS???

 Overall, I feel like we have a decent chance of rolling hadoop-2.3 by the end 
of the year.

 Thoughts?

thanks,
Arun
 

--
Arun C. Murthy
Hortonworks Inc.
http://hortonworks.com/



-- 
CONFIDENTIALITY NOTICE
NOTICE: This message is intended for the use of the individual or entity to 
which it is addressed and may contain information that is confidential, 
privileged and exempt from disclosure under applicable law. If the reader 
of this message is not the intended recipient, you are hereby notified that 
any printing, copying, dissemination, distribution, disclosure or 
forwarding of this communication is strictly prohibited. If you have 
received this communication in error, please contact the sender immediately 
and delete it from your system. Thank You


[jira] [Created] (HDFS-5480) Update Balancer for HDFS-2832

2013-11-08 Thread Tsz Wo (Nicholas), SZE (JIRA)
Tsz Wo (Nicholas), SZE created HDFS-5480:


 Summary: Update Balancer for HDFS-2832
 Key: HDFS-5480
 URL: https://issues.apache.org/jira/browse/HDFS-5480
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: balancer
Reporter: Tsz Wo (Nicholas), SZE
Assignee: Tsz Wo (Nicholas), SZE


The block location type has changed from datanode to datanode storage. The 
Balancer needs to handle this.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


Re: Next releases

2013-11-08 Thread Steve Loughran
On 8 November 2013 02:42, Arun C Murthy  wrote:

> Gang,
>
>  Thinking through the next couple of releases here, appreciate f/b.
>
>  # hadoop-2.2.1
>
>  I was looking through commit logs and there is a *lot* of content here
> (81 commits as on 11/7). Some are features/improvements and some are fixes
> - it's really hard to distinguish what is important and what isn't.
>
>  I propose we start with a blank slate (i.e. blow away branch-2.2 and
> start fresh from a copy of branch-2.2.0)  and then be very careful and
> meticulous about including only *blocker* fixes in branch-2.2. So, most of
> the content here comes via the next minor release (i.e. hadoop-2.3)
>
>  In future, we continue to be *very* parsimonious about what gets into a
> patch release (major.minor.patch) - in general, these should be only
> *blocker* fixes or key operational issues.
>

+1


>
>  # hadoop-2.3
>
>  I'd like to propose the following features for YARN/MR to make it into
> hadoop-2.3 and punt the rest to hadoop-2.4 and beyond:
>  * Application History Server - This is happening in  a branch and is
> close; with it we can provide a reasonable experience for new frameworks
> being built on top of YARN.
>  * Bug-fixes in RM Restart
>  * Minimal support for long-running applications (e.g. security) via
> YARN-896
>

+1 -the complete set isn't going to make it, but I'm sure we can identify
the key ones



>  * RM Fail-over via ZKFC
>  * Anything else?
>
>  HDFS???
>
>

   - If I had the time, I'd like to do some work on the HADOOP-9361
   filesystem spec & tests -this is mostly some specification, the basis of a
   better test framework for newer FS tests, and some more tests, with a
   couple of minor changes to some of the FS code, mainly in terms of
   tightening some of the exceptions thrown (IOE -> EOF)

otherwise:

   - I'd like the hadoop-openstack  JAR in; it's already in branch-2 so
   it's a matter of ensuring testing during the release against as many
   providers as possible.
   - There are a fair few JIRAs about updating versions of dependencies
   -the S3 JetS3t update went in this week, but there are more, as well as
   cruft in the POMs which shows up downstream. I think we could update the
   low-risk dependencies (test-time, log4j, &c), while avoiding those we know
   will be trouble (jetty). This may seem minor but it does make a big diff to
   the downstream projects.



Build failed in Jenkins: Hadoop-Hdfs-0.23-Build #785

2013-11-08 Thread Apache Jenkins Server
See 

--
[...truncated 7692 lines...]
[ERROR] symbol  : class Parser
[ERROR] location: package com.google.protobuf
[ERROR] 
:[270,37]
 cannot find symbol
[...the same "cannot find symbol: class Parser" error repeats at many more locations in the generated protobuf sources...]

Hadoop-Hdfs-0.23-Build - Build # 785 - Still Failing

2013-11-08 Thread Apache Jenkins Server
See https://builds.apache.org/job/Hadoop-Hdfs-0.23-Build/785/

###
## LAST 60 LINES OF THE CONSOLE 
###
[...truncated 7885 lines...]
[ERROR] location: class com.google.protobuf.InvalidProtocolBufferException
[ERROR] 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-0.23-Build/trunk/hadoop-hdfs-project/hadoop-hdfs/target/generated-sources/java/org/apache/hadoop/hdfs/protocol/proto/DataTransferProtos.java:[3313,27]
 cannot find symbol
[ERROR] symbol  : method 
setUnfinishedMessage(org.apache.hadoop.hdfs.protocol.proto.DataTransferProtos.OpWriteBlockProto)
[ERROR] location: class com.google.protobuf.InvalidProtocolBufferException
[ERROR] 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-0.23-Build/trunk/hadoop-hdfs-project/hadoop-hdfs/target/generated-sources/java/org/apache/hadoop/hdfs/protocol/proto/DataTransferProtos.java:[3319,8]
 cannot find symbol
[ERROR] symbol  : method makeExtensionsImmutable()
[ERROR] location: class 
org.apache.hadoop.hdfs.protocol.proto.DataTransferProtos.OpWriteBlockProto
[ERROR] 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-0.23-Build/trunk/hadoop-hdfs-project/hadoop-hdfs/target/generated-sources/java/org/apache/hadoop/hdfs/protocol/proto/DataTransferProtos.java:[3330,10]
 cannot find symbol
[ERROR] symbol  : method 
ensureFieldAccessorsInitialized(java.lang.Class,java.lang.Class)
[ERROR] location: class com.google.protobuf.GeneratedMessage.FieldAccessorTable
[ERROR] 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-0.23-Build/trunk/hadoop-hdfs-project/hadoop-hdfs/target/generated-sources/java/org/apache/hadoop/hdfs/protocol/proto/DataTransferProtos.java:[3335,31]
 cannot find symbol
[ERROR] symbol  : class AbstractParser
[ERROR] location: package com.google.protobuf
[ERROR] 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-0.23-Build/trunk/hadoop-hdfs-project/hadoop-hdfs/target/generated-sources/java/org/apache/hadoop/hdfs/protocol/proto/DataTransferProtos.java:[3344,4]
 method does not override or implement a method from a supertype
[ERROR] 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-0.23-Build/trunk/hadoop-hdfs-project/hadoop-hdfs/target/generated-sources/java/org/apache/hadoop/hdfs/protocol/proto/DataTransferProtos.java:[4098,12]
 cannot find symbol
[ERROR] symbol  : method 
ensureFieldAccessorsInitialized(java.lang.Class,java.lang.Class)
[ERROR] location: class com.google.protobuf.GeneratedMessage.FieldAccessorTable
[ERROR] 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-0.23-Build/trunk/hadoop-hdfs-project/hadoop-hdfs/target/generated-sources/java/org/apache/hadoop/hdfs/protocol/proto/DataTransferProtos.java:[4371,104]
 cannot find symbol
[ERROR] symbol  : method getUnfinishedMessage()
[ERROR] location: class com.google.protobuf.InvalidProtocolBufferException
[ERROR] 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-0.23-Build/trunk/hadoop-hdfs-project/hadoop-hdfs/target/generated-sources/java/org/apache/hadoop/hdfs/protocol/proto/DataTransferProtos.java:[5264,8]
 getUnknownFields() in 
org.apache.hadoop.hdfs.protocol.proto.DataTransferProtos.OpTransferBlockProto 
cannot override getUnknownFields() in com.google.protobuf.GeneratedMessage; 
overridden method is final
[ERROR] 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-0.23-Build/trunk/hadoop-hdfs-project/hadoop-hdfs/target/generated-sources/java/org/apache/hadoop/hdfs/protocol/proto/DataTransferProtos.java:[5284,19]
 cannot find symbol
[ERROR] symbol  : method 
parseUnknownField(com.google.protobuf.CodedInputStream,com.google.protobuf.UnknownFieldSet.Builder,com.google.protobuf.ExtensionRegistryLite,int)
[ERROR] location: class 
org.apache.hadoop.hdfs.protocol.proto.DataTransferProtos.OpTransferBlockProto
[ERROR] 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-0.23-Build/trunk/hadoop-hdfs-project/hadoop-hdfs/target/generated-sources/java/org/apache/hadoop/hdfs/protocol/proto/DataTransferProtos.java:[5314,15]
 cannot find symbol
[ERROR] symbol  : method 
setUnfinishedMessage(org.apache.hadoop.hdfs.protocol.proto.DataTransferProtos.OpTransferBlockProto)
[ERROR] location: class com.google.protobuf.InvalidProtocolBufferException
[ERROR] 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-0.23-Build/trunk/hadoop-hdfs-project/hadoop-hdfs/target/generated-sources/java/org/apache/hadoop/hdfs/protocol/proto/DataTransferProtos.java:[5317,27]
 cannot find symbol
[ERROR] symbol  : method 
setUnfinishedMessage(org.apache.hadoop.hdfs.protocol.proto.DataTransferProtos.OpTransferBlockProto)
[ERROR] location: class com.google.protobuf.InvalidProtocolBufferException
[ERROR] 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-0.23-Build/trunk/hadoop-hdfs-project/hadoop-hdfs/target/generated-sources/java/org/apache/hadoop/hdfs/protocol/proto/DataTransferProtos.java:[5323,8]
 cannot find symbol
[ERROR] symbol  : method makeExtensionsImmutable()
[ERROR] location: class 
org.apache.hadoop.hdfs

[jira] [Created] (HDFS-5481) Fix TestDataNodeVolumeFailure

2013-11-08 Thread Junping Du (JIRA)
Junping Du created HDFS-5481:


 Summary: Fix TestDataNodeVolumeFailure
 Key: HDFS-5481
 URL: https://issues.apache.org/jira/browse/HDFS-5481
 Project: Hadoop HDFS
  Issue Type: Sub-task
Affects Versions: Heterogeneous Storage (HDFS-2832)
Reporter: Junping Du
Assignee: Junping Du


The test case still uses the datanodeID to generate the storage report. 
Replacing it with the storageID should fix it.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Created] (HDFS-5482) CacheAdmin -removeDirectives fails on relative paths but -addDirective allows them

2013-11-08 Thread Stephen Chu (JIRA)
Stephen Chu created HDFS-5482:
-

 Summary: CacheAdmin -removeDirectives fails on relative paths but 
-addDirective allows them
 Key: HDFS-5482
 URL: https://issues.apache.org/jira/browse/HDFS-5482
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: tools
Affects Versions: 3.0.0
Reporter: Stephen Chu


CacheAdmin -addDirective allows using a relative path.

However, -removeDirectives fails with 
"java.net.URISyntaxException: Relative path in absolute URI":

{code}
[schu@hdfs-c5-nfs ~]$ hdfs cacheadmin -addDirective -path foo -pool schu
Added PathBasedCache entry 3
[schu@hdfs-c5-nfs ~]$ hdfs cacheadmin -listDirectives
Found 1 entry
ID  POOL  PATH   
3   schu  /user/schu/foo 
[schu@hdfs-c5-nfs ~]$ hdfs cacheadmin -removeDirectives -path foo
Exception in thread "main" java.lang.IllegalArgumentException: 
java.net.URISyntaxException: Relative path in absolute URI: 
hdfs://hdfs-c5-nfs.ent.cloudera.com:8020foo/foo
at org.apache.hadoop.fs.Path.makeQualified(Path.java:470)
at 
org.apache.hadoop.hdfs.DistributedFileSystem.listPathBasedCacheDirectives(DistributedFileSystem.java:1639)
at 
org.apache.hadoop.hdfs.tools.CacheAdmin$RemovePathBasedCacheDirectivesCommand.run(CacheAdmin.java:287)
at org.apache.hadoop.hdfs.tools.CacheAdmin.run(CacheAdmin.java:82)
at org.apache.hadoop.hdfs.tools.CacheAdmin.main(CacheAdmin.java:87)
Caused by: java.net.URISyntaxException: Relative path in absolute URI: 
hdfs://hdfs-c5-nfs.ent.cloudera.com:8020foo/foo
at java.net.URI.checkPath(URI.java:1788)
at java.net.URI.(URI.java:734)
at org.apache.hadoop.fs.Path.makeQualified(Path.java:467)
... 4 more
[schu@hdfs-c5-nfs ~]$ 
{code}
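The failure bottoms out in a java.net.URI rule: when a URI has a scheme, a non-empty path must be absolute. A minimal standalone repro of just that rule (plain JDK code, not Hadoop's actual Path.makeQualified; the host name is simply borrowed from the transcript above):

```java
import java.net.URI;
import java.net.URISyntaxException;

// Standalone sketch of the java.net.URI behavior behind the CacheAdmin error:
// a scheme plus a relative path is rejected by URI.checkPath.
public class RelativeUriDemo {

    // Builds a URI from the filesystem scheme/authority plus a user-supplied
    // path, roughly what qualifying a Path against a filesystem involves.
    static boolean qualifyFails(String path) {
        try {
            new URI("hdfs", "hdfs-c5-nfs.ent.cloudera.com:8020", path, null, null);
            return false;
        } catch (URISyntaxException e) {
            // java.net.URI.checkPath raises exactly this reason
            return e.getReason().startsWith("Relative path in absolute URI");
        }
    }

    public static void main(String[] args) {
        System.out.println(qualifyFails("foo"));            // true: relative path rejected
        System.out.println(qualifyFails("/user/schu/foo")); // false: absolute path is fine
    }
}
```

This suggests why -addDirective appears to work: the server side stores the path already qualified (/user/schu/foo in the listing above), while -removeDirectives hands the raw relative string to the URI machinery.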



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Created] (HDFS-5483) Make reportDiff resilient to malformed block reports

2013-11-08 Thread Arpit Agarwal (JIRA)
Arpit Agarwal created HDFS-5483:
---

 Summary: Make reportDiff resilient to malformed block reports
 Key: HDFS-5483
 URL: https://issues.apache.org/jira/browse/HDFS-5483
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: namenode
Affects Versions: Heterogeneous Storage (HDFS-2832)
Reporter: Arpit Agarwal


{{BlockManager#reportDiff}} can cause an assertion failure in 
{{BlockInfo#moveBlockToHead}} if the block report shows the same block as 
belonging to more than one storage.

The issue is that {{moveBlockToHead}} assumes it will find the 
DatanodeStorageInfo for the given block.

Exception details:
{code}
java.lang.AssertionError: Index is out of bound
at 
org.apache.hadoop.hdfs.server.blockmanagement.BlockInfo.setNext(BlockInfo.java:152)
at 
org.apache.hadoop.hdfs.server.blockmanagement.BlockInfo.moveBlockToHead(BlockInfo.java:351)
at 
org.apache.hadoop.hdfs.server.blockmanagement.DatanodeStorageInfo.moveBlockToHead(DatanodeStorageInfo.java:243)
at 
org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.reportDiff(BlockManager.java:1841)
at 
org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.processReport(BlockManager.java:1709)
at 
org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.processReport(BlockManager.java:1637)
at 
org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.blockReport(NameNodeRpcServer.java:984)
at 
org.apache.hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure.testVolumeFailure(TestDataNodeVolumeFailure.java:165)
{code}
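In miniature, the failure mode can be sketched like this (a toy list-based model; the real BlockInfo code uses an intrusive linked list threaded through block triplets and asserts rather than returning a status):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

// Toy model of the reportDiff failure: moveBlockToHead assumes the block is
// already linked into the list of the storage it is asked to operate on.
public class MoveToHeadDemo {

    // Returns false in the case that trips the "Index is out of bound"
    // assertion in the real code: the block is not in this storage's list.
    static boolean moveToHead(List<String> storageBlocks, String block) {
        int idx = storageBlocks.indexOf(block);
        if (idx < 0) {
            return false; // real code asserts here instead of failing softly
        }
        storageBlocks.remove(idx);
        storageBlocks.add(0, block);
        return true;
    }

    public static void main(String[] args) {
        List<String> storageA = new ArrayList<>(Arrays.asList("blk_1", "blk_2"));
        List<String> storageB = new ArrayList<>(); // blk_2 was never linked here

        System.out.println(moveToHead(storageA, "blk_2")); // true; now [blk_2, blk_1]
        // A malformed report claiming blk_2 under a second storage:
        System.out.println(moveToHead(storageB, "blk_2")); // false: not found
    }
}
```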



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Created] (HDFS-5484) StorageType and State in DatanodeStorageInfo in NameNode is not accurate

2013-11-08 Thread Eric Sirianni (JIRA)
Eric Sirianni created HDFS-5484:
---

 Summary: StorageType and State in DatanodeStorageInfo in NameNode 
is not accurate
 Key: HDFS-5484
 URL: https://issues.apache.org/jira/browse/HDFS-5484
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: datanode
Affects Versions: Heterogeneous Storage (HDFS-2832)
Reporter: Eric Sirianni


The fields in DatanodeStorageInfo are updated from two distinct paths:
# block reports
# storage reports (via heartbeats)

The {{state}} and {{storageType}} fields are updated via the block report.  
However, as seen in the code below, these fields are populated from a "dummy" 
{{DatanodeStorage}} object constructed in the DataNode:
{code}
BPServiceActor.blockReport() {
//...
// Dummy DatanodeStorage object just for sending the block report.
DatanodeStorage dnStorage = new DatanodeStorage(storageID);
//...
}
{code}

The net effect is that the {{state}} and {{storageType}} fields are always the 
default of {{NORMAL}} and {{DISK}} in the NameNode.

The recommended fix is to change {{FsDatasetSpi.getBlockReports()}} from:
{code}
public Map<String, BlockListAsLongs> getBlockReports(String bpid);
{code}
to:
{code}
public Map<DatanodeStorage, BlockListAsLongs> getBlockReports(String bpid);
{code}
thereby allowing {{BPServiceActor}} to send the "real" {{DatanodeStorage}} 
object with the block report.
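To make the effect concrete, here is a toy sketch (simplified stand-in classes with hypothetical fields, not the real org.apache.hadoop.hdfs ones) of how a one-arg "dummy" constructor erases the state and type before they reach the NameNode:

```java
// Simplified stand-ins showing why a dummy DatanodeStorage built from the
// storageID alone reports only default state and type.
public class DummyStorageDemo {
    enum State { NORMAL, READ_ONLY, FAILED }
    enum StorageType { DISK, SSD }

    static class DatanodeStorage {
        final String storageID;
        final State state;
        final StorageType storageType;

        // Mirrors the dummy object in BPServiceActor.blockReport(): only the
        // ID is carried; state and type fall back to defaults.
        DatanodeStorage(String storageID) {
            this(storageID, State.NORMAL, StorageType.DISK);
        }

        DatanodeStorage(String storageID, State state, StorageType storageType) {
            this.storageID = storageID;
            this.state = state;
            this.storageType = storageType;
        }
    }

    public static void main(String[] args) {
        // The DataNode's actual storage is a read-only SSD...
        DatanodeStorage real =
            new DatanodeStorage("DS-1", State.READ_ONLY, StorageType.SSD);
        // ...but the block report path rebuilds a dummy from the ID alone,
        // so the NameNode only ever sees the defaults.
        DatanodeStorage reported = new DatanodeStorage(real.storageID);
        System.out.println(reported.state + "/" + reported.storageType); // NORMAL/DISK
    }
}
```

Keying the block report map by the real {{DatanodeStorage}} object, as proposed above, removes the lossy ID-only round trip.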



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Resolved] (HDFS-5172) Handle race condition for writes

2013-11-08 Thread Brandon Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5172?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brandon Li resolved HDFS-5172.
--

Resolution: Fixed

This issue was fixed along with the fix for HDFS-5364. Resolving it as a duplicate.

> Handle race condition for writes
> 
>
> Key: HDFS-5172
> URL: https://issues.apache.org/jira/browse/HDFS-5172
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: nfs
>Reporter: Brandon Li
>Assignee: Brandon Li
>
> When an unstable write arrives, the following happens: 
> 1. retrieve the OpenFileCtx
> 2. create an async task to write it to HDFS
> The race is that the OpenFileCtx could be closed by the StreamMonitor; then 
> step 2 simply returns an error to the client.
> This was OK before streaming was supported. To support data streaming, the 
> file needs to be reopened.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Created] (HDFS-5485) add command-line support for modifyDirective

2013-11-08 Thread Colin Patrick McCabe (JIRA)
Colin Patrick McCabe created HDFS-5485:
--

 Summary: add command-line support for modifyDirective
 Key: HDFS-5485
 URL: https://issues.apache.org/jira/browse/HDFS-5485
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: hdfs-client
Affects Versions: 3.0.0
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe






--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Resolved] (HDFS-5479) Fix test failures in Balancer.

2013-11-08 Thread Junping Du (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5479?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Junping Du resolved HDFS-5479.
--

Resolution: Duplicate

> Fix test failures in Balancer.
> --
>
> Key: HDFS-5479
> URL: https://issues.apache.org/jira/browse/HDFS-5479
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: balancer
>Affects Versions: Heterogeneous Storage (HDFS-2832)
>Reporter: Junping Du
>
> Many test failures related to the balancer, as 
> https://builds.apache.org/job/PreCommit-HDFS-Build/5360/#showFailuresLink 
> shows. 



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Created] (HDFS-5486) Fix TestNameNodeMetrics

2013-11-08 Thread Arpit Agarwal (JIRA)
Arpit Agarwal created HDFS-5486:
---

 Summary: Fix TestNameNodeMetrics
 Key: HDFS-5486
 URL: https://issues.apache.org/jira/browse/HDFS-5486
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: test
Affects Versions: Heterogeneous Storage (HDFS-2832)
Reporter: Arpit Agarwal
Assignee: Arpit Agarwal


The test assumes one block report per Datanode. We now send one block report 
per storage, so the test needs to be updated.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Resolved] (HDFS-5481) Fix TestDataNodeVolumeFailure

2013-11-08 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5481?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal resolved HDFS-5481.
-

   Resolution: Fixed
Fix Version/s: Heterogeneous Storage (HDFS-2832)
 Hadoop Flags: Reviewed

+1 for the patch. I committed it to branch HDFS-2832. Thanks for the 
contribution Junping!



> Fix TestDataNodeVolumeFailure
> -
>
> Key: HDFS-5481
> URL: https://issues.apache.org/jira/browse/HDFS-5481
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: Heterogeneous Storage (HDFS-2832)
>Reporter: Junping Du
>Assignee: Junping Du
> Fix For: Heterogeneous Storage (HDFS-2832)
>
> Attachments: HDFS-5481-v2.patch, HDFS-5481-v3.patch, HDFS-5481.patch
>
>
> The test case still uses the datanodeID to generate the storage report. 
> Replacing it with the storageID should fix it.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Created] (HDFS-5487) Refactor TestHftpDelegationToken into TestTokenAspect

2013-11-08 Thread Haohui Mai (JIRA)
Haohui Mai created HDFS-5487:


 Summary: Refactor TestHftpDelegationToken into TestTokenAspect
 Key: HDFS-5487
 URL: https://issues.apache.org/jira/browse/HDFS-5487
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Haohui Mai
Assignee: Haohui Mai
 Attachments: HDFS-5487.000.patch

HDFS-5440 moves token-related logic to TokenAspect. Therefore, it is 
appropriate to clean up the unit tests of TestHftpDelegationToken and to move 
them into TestTokenAspect.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Created] (HDFS-5488) Clean up TestHftpTimeout

2013-11-08 Thread Haohui Mai (JIRA)
Haohui Mai created HDFS-5488:


 Summary: Clean up TestHftpTimeout
 Key: HDFS-5488
 URL: https://issues.apache.org/jira/browse/HDFS-5488
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Haohui Mai
Assignee: Haohui Mai


HftpFileSystem uses URLConnectionFactory to set the timeout of each HTTP 
connection. This jira cleans up TestHftpTimeout and merges its unit tests into 
TestURLConnectionFactory.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Created] (HDFS-5489) Use TokenAspect in WebHDFS

2013-11-08 Thread Haohui Mai (JIRA)
Haohui Mai created HDFS-5489:


 Summary: Use TokenAspect in WebHDFS
 Key: HDFS-5489
 URL: https://issues.apache.org/jira/browse/HDFS-5489
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Haohui Mai
Assignee: Haohui Mai


HDFS-5440 provides TokenAspect for both HftpFileSystem and WebHdfsFileSystem to 
handle the delegation tokens. This jira refactors WebHdfsFileSystem to use 
TokenAspect.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


Re: Next releases

2013-11-08 Thread Chris Nauroth
Arun, what are your thoughts on test-only patches?  I know I've been
merging a lot of Windows test stabilization patches down to branch-2.2.
 These can't rightly be called blockers, but they do improve dev
experience, and there is no risk to product code.

Chris Nauroth
Hortonworks
http://hortonworks.com/



On Fri, Nov 8, 2013 at 1:30 AM, Steve Loughran wrote:

> [...]
