Re: Hadoop 3.2 Release Plan proposal

2018-10-02 Thread Sunil G
Thanks Robert and Haibo for quickly correcting this. Sigh, I somehow missed
one file while committing the change. Sorry for the trouble.

- Sunil

On Wed, Oct 3, 2018 at 5:22 AM Robert Kanter  wrote:

> Looks like there are two that weren't updated:
> >> [115] 16:32 : hadoop-common (trunk) :: grep "3.2.0-SNAPSHOT" . -r --include=pom.xml
> ./hadoop-project/pom.xml:3.2.0-SNAPSHOT
> ./pom.xml:3.2.0-SNAPSHOT
>
> I've just pushed in an addendum commit to fix those.
> In the future, please make sure to do a sanity compile when updating poms.
>
> thanks
> - Robert
>
> On Tue, Oct 2, 2018 at 11:44 AM Aaron Fabbri 
> wrote:
>
>> Trunk is not building for me. Did you miss a 3.2.0-SNAPSHOT in the
>> top-level pom.xml?
>>
>>
>> On Tue, Oct 2, 2018 at 10:16 AM Sunil G  wrote:
>>
>> > Hi All
>> >
>> > As mentioned in an earlier mail, I have cut branch-3.2 and reset trunk to
>> > 3.3.0-SNAPSHOT. I will share the RC details soon, once all the necessary
>> > patches are pulled into branch-3.2.
>> >
>> > Thank You
>> > - Sunil
>> >
>> >
>> > On Mon, Sep 24, 2018 at 2:00 PM Sunil G  wrote:
>> >
>> > > Hi All
>> > >
>> > > We are now down to the last Blocker and HADOOP-15407 is merged to
>> trunk.
>> > > Thanks for the support.
>> > >
>> > > *Plan for RC*
>> > > 3.2 branch cut and reset trunk : *25th Tuesday*
>> > > RC0 for 3.2: *28th Friday*
>> > >
>> > > Thank You
>> > > Sunil
>> > >
>> > >
>> > > On Mon, Sep 17, 2018 at 3:21 PM Sunil G  wrote:
>> > >
>> > >> Hi All
>> > >>
>> > >> We are down to 3 Blockers and 4 Criticals now. Thanks to all of you
>> > >> for helping with this. I am following up on these tickets; once they
>> > >> are closed we will cut the 3.2 branch.
>> > >>
>> > >> Thanks
>> > >> Sunil Govindan
>> > >>
>> > >>
>> > >> On Wed, Sep 12, 2018 at 5:10 PM Sunil G  wrote:
>> > >>
>> > >>> Hi All,
>> > >>>
>> > >>> In line with the original 3.2 communication proposal dated 17th July
>> > >>> 2018, I would like to provide more updates.
>> > >>>
>> > >>> We are approaching the previously proposed code freeze date (September
>> > >>> 14, 2018). So I would like to cut the 3.2 branch on 17th Sept and point
>> > >>> the existing trunk to 3.3 if there are no issues.
>> > >>>
>> > >>> *Current Release Plan:*
>> > >>> Feature freeze date : all features to merge by September 7, 2018.
>> > >>> Code freeze date : blockers/critical only, no improvements and
>> > >>> blocker/critical bug-fixes September 14, 2018.
>> > >>> Release date: September 28, 2018
>> > >>>
>> > >>> Any critical/blocker tickets targeted to 3.2.0 will need to be
>> > >>> backported to branch-3.2 after the branch cut.
>> > >>>
>> > >>> Here's an updated 3.2.0 feature status:
>> > >>>
>> > >>> 1. Merged & Completed features:
>> > >>>
>> > >>> - (Wangda) YARN-8561: Hadoop Submarine project for DeepLearning
>> > >>> workloads. Initial cut.
>> > >>> - (Uma) HDFS-10285: HDFS Storage Policy Satisfier
>> > >>> - (Sunil) YARN-7494: Multi Node scheduling support in Capacity
>> > >>> Scheduler.
>> > >>> - (Chandni/Eric) YARN-7512: Support service upgrade via YARN Service
>> > API
>> > >>> and CLI.
>> > >>> - (Naga/Sunil) YARN-3409: Node Attributes support in YARN.
>> > >>> - (Inigo) HDFS-12615: Router-based HDFS federation. Improvement
>> works.
>> > >>>
>> > >>> 2. Features close to finish:
>> > >>>
>> > >>> - (Steve) S3Guard Phase III. Close to commit.
>> > >>> - (Steve) S3a phase V. Close to commit.
>> > >>> - (Steve) Support Windows Azure Storage. Close to commit.
>> > >>>
>> > >>> 3. Tentative/Cancelled features for 3.2:
>> > >>> - (Rohith) YARN-5742: Serve aggregated logs of historical apps from
>> > >>> ATSv2. Patch in progress.
>> > >>> - (Haibo Chen) YARN-1011: Resource overcommitment. Looks challenging
>> > >>> to be done before Aug 2018.
>> > >>> - (Eric) YARN-7129: Application Catalog for YARN applications.
>> > >>> Challenging as more discussions are on-going.
>> > >>>
>> > >>> *Summary of 3.2.0 issues status:*
>> > >>> 19 Blocker and Critical issues [1] are open; I am following up with
>> > >>> the owners to get the status of each so they get in by the code
>> > >>> freeze date.
>> > >>>
>> > >>> [1] project in (YARN, HADOOP, MAPREDUCE, HDFS) AND priority in
>> > (Blocker,
>> > >>> Critical) AND resolution = Unresolved AND "Target Version/s" = 3.2.0
>> > ORDER
>> > >>> BY priority DESC
>> > >>>
>> > >>> Thanks,
>> > >>> Sunil
>> > >>>
>> > >>>
>> > >>>
>> > >>> On Thu, Aug 30, 2018 at 9:59 PM Sunil G  wrote:
>> > >>>
>> >  Hi All,
>> > 
>> >  In line with the earlier communication dated 17th July 2018, I would
>> >  like to provide some updates.
>> > 
>> >  We are approaching previously proposed code freeze date (Aug 31).
>> > 
>> >  The merge discussion/vote for one critical feature, Node Attributes,
>> >  is ongoing. A few other Blocker bugs also need a bit more time. Given
>> >  this, I suggest pushing the feature/code freeze out by 2 more weeks to
>> >  accommodate these jiras.
>> > 
>> >  
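
An aside on the failure mode discussed in this thread: a grep over every pom, followed by a quick compile, catches a missed version bump before it is pushed. The sketch below simulates that check on a throwaway directory tree; the file layout and the suggestion of `mvn -q compile` as the follow-up sanity compile are illustrative assumptions, not the project's actual release tooling.

```shell
set -e
# Build a toy multi-module layout: one pom bumped to 3.3.0-SNAPSHOT,
# one accidentally left at the old version.
repo=$(mktemp -d)
mkdir -p "$repo/hadoop-project"
printf '<version>3.3.0-SNAPSHOT</version>\n' > "$repo/pom.xml"
printf '<version>3.2.0-SNAPSHOT</version>\n' > "$repo/hadoop-project/pom.xml"
# Any hit here means a pom was missed by the version bump; a sanity
# compile would surface the same problem, just more slowly.
stale=$(grep -rl "3.2.0-SNAPSHOT" "$repo" --include=pom.xml || true)
echo "stale poms:"
echo "$stale"
rm -rf "$repo"
```

On a real checkout the same one-liner is run from the source root, as shown in the grep output quoted above.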

[jira] [Resolved] (HDFS-13954) Add missing cleanupSSLConfig() call for TestTimelineClient test

2018-10-02 Thread Aki Tanaka (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13954?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aki Tanaka resolved HDFS-13954.
---
Resolution: Fixed

I created this issue in the wrong project; sorry.

> Add missing cleanupSSLConfig() call for TestTimelineClient test
> ---
>
> Key: HDFS-13954
> URL: https://issues.apache.org/jira/browse/HDFS-13954
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Reporter: Aki Tanaka
>Priority: Minor
>
> Tests that set up SSL configs can leave conf files lingering unless they are
> cleaned up via a {{KeyStoreTestUtil.cleanupSSLConfig}} call. The
> TestTimelineClient test is missing this call.
> If the cleanup method is not called explicitly, a modified ssl-client.xml is
> left in {{test-classes}}, which might affect subsequent test cases.
>  
> There was a similar report in HDFS-11042, but it looks like we need to fix
> the TestTimelineClient test too.
>  
> {code:java}
> $ mvn test -Dtest=TestTimelineClient
> $ find .|grep ssl-client.xml$
> ./hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/target/test-classes/ssl-client.xml
> $ cat 
> ./hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/target/test-classes/ssl-client.xml
> <configuration>
> <property><name>ssl.client.truststore.reload.interval</name><value>1000</value><final>false</final><source>programmatically</source></property>
> <property><name>ssl.client.truststore.location</name><value>/Users/tanakah/work/hadoop-2.8.5/hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/target/test-dir/trustKS.jks</value><final>false</final><source>programmatically</source></property>
> <property><name>ssl.client.keystore.keypassword</name><value>clientP</value><final>false</final><source>programmatically</source></property>
> <property><name>ssl.client.keystore.location</name><value>/Users/tanakah/work/hadoop-2.8.5/hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/target/test-dir/clientKS.jks</value><final>false</final><source>programmatically</source></property>
> <property><name>ssl.client.truststore.password</name><value>trustP</value><final>false</final><source>programmatically</source></property>
> <property><name>ssl.client.keystore.password</name><value>clientP</value><final>false</final><source>programmatically</source></property>
> </configuration>
> {code}
>  
> After applying this patch, the ssl-client.xml is not generated.
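
As a rough illustration of the leftover-file check this report describes, the sketch below simulates a test run that forgets the cleanup call and then scans for the stray file. The directory layout is a stand-in; the real case involves running the hadoop-yarn-common test suite and checking its `target/test-classes` directory.

```shell
set -e
# Simulate a test run that writes ssl-client.xml into test-classes and
# skips the KeyStoreTestUtil.cleanupSSLConfig() call.
target=$(mktemp -d)
mkdir -p "$target/test-classes"
printf '<configuration/>\n' > "$target/test-classes/ssl-client.xml"
# The check: any ssl-client.xml left behind after the suite means a test
# modified the SSL config without cleaning it up.
leftovers=$(find "$target" -name 'ssl-client.xml' | wc -l)
echo "leftover ssl-client.xml files: $leftovers"
rm -rf "$target"
```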



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org




[jira] [Created] (HDFS-13953) Failure of last datanode in the pipeline results in block recovery failure and subsequent NPE during fsck

2018-10-02 Thread Hrishikesh Gadre (JIRA)
Hrishikesh Gadre created HDFS-13953:
---

 Summary: Failure of last datanode in the pipeline results in block 
recovery failure and subsequent NPE during fsck
 Key: HDFS-13953
 URL: https://issues.apache.org/jira/browse/HDFS-13953
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Affects Versions: 2.6.0
Reporter: Hrishikesh Gadre


A user reported the following scenario:
 * An HBase region server created a WAL and attempted to write to it.
 * As part of the pipeline write, the following events happened:
 ** The last data node in the pipeline failed.
 ** The region server could not identify this last data node as the root cause
of the write failure, and instead reported the first data node in the pipeline
to the NN as the cause of the failure.
 ** The NN created a new write pipeline, replacing the good data node and
retaining the faulty data node.
 ** This process continued for three iterations until the NN encountered an NPE.
 * Now fsck on the /hbase directory is also failing due to an NPE in the NN.

The following stack traces were found in the region server logs:
{noformat}
WARN org.apache.hadoop.hdfs.DFSClient: DataStreamer Exception
org.apache.hadoop.ipc.RemoteException(java.lang.NullPointerException): 
java.lang.NullPointerException
  at 
org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.removeStaleReplicas(BlockManager.java:3238)
  at 
org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.updateLastBlock(BlockManager.java:3633)
  at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.updatePipelineInternal(FSNamesystem.java:7374)
  at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.updatePipeline(FSNamesystem.java:7339)
  at 
org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.updatePipeline(NameNodeRpcServer.java:777)
  at 
org.apache.hadoop.hdfs.server.namenode.AuthorizationProviderProxyClientProtocol.updatePipeline(AuthorizationProviderProxyClientProtocol.java:654){noformat}
 

AND

 
{noformat}
WARN org.apache.hadoop.hbase.util.FSHDFSUtils: attempt=0 on 
file=hdfs://nameservice1/hbase/genie/WALs/hbasedn193.pv08.siri.apple.com,60020,1525325654855-splitting/hbasedn193.pv08.siri.apple.com%2C60020%2C1525325654855.null0.1536002440010
 after 6ms
org.apache.hadoop.ipc.RemoteException(java.lang.NullPointerException): 
java.lang.NullPointerException
at 
org.apache.hadoop.hdfs.server.blockmanagement.BlockInfoUnderConstruction$ReplicaUnderConstruction.isAlive(BlockInfoUnderConstruction.java:121)
at 
org.apache.hadoop.hdfs.server.blockmanagement.BlockInfoUnderConstruction.initializeBlockRecovery(BlockInfoUnderConstruction.java:288)
at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.internalReleaseLease(FSNamesystem.java:4846)
at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.recoverLeaseInternal(FSNamesystem.java:3252)
at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.recoverLease(FSNamesystem.java:3196)
at 
org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.recoverLease(NameNodeRpcServer.java:630)
at 
org.apache.hadoop.hdfs.server.namenode.AuthorizationProviderProxyClientProtocol.recoverLease(AuthorizationProviderProxyClientProtocol.java:372)
at 
org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.recoverLease(ClientNamenodeProtocolServerSideTranslatorPB.java:681)
at 
org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:617)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1073){noformat}
 

 






[jira] [Created] (HDFS-13952) Update hadoop.version in the trunk, which is causing compilation failure

2018-10-02 Thread Bharat Viswanadham (JIRA)
Bharat Viswanadham created HDFS-13952:
-

 Summary: Update hadoop.version in the trunk, which is causing 
compilation failure
 Key: HDFS-13952
 URL: https://issues.apache.org/jira/browse/HDFS-13952
 Project: Hadoop HDFS
  Issue Type: Task
Reporter: Bharat Viswanadham
Assignee: Bharat Viswanadham


Update hadoop.version from 3.2.0-SNAPSHOT to 3.3.0-SNAPSHOT.

Compilation failure on trunk:

[WARNING] Rule 0: org.apache.maven.plugins.enforcer.RequireProperty failed with 
message:
The hadoop.version property should be set and should be 3.3.0-SNAPSHOT.
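
A hedged sketch of what the failing check above effectively does: the enforcer rule compares the `hadoop.version` property against the expected SNAPSHOT and fails the build on a mismatch. This is simulated in plain shell with an assumed stale value; the real check is Maven's RequireProperty rule evaluated against the top-level pom.

```shell
set -e
expected="3.3.0-SNAPSHOT"
# Stand-in for the hadoop.version property read from the top-level pom;
# the stale value below reproduces the failure mode described here.
hadoop_version="3.2.0-SNAPSHOT"
if [ "$hadoop_version" != "$expected" ]; then
  msg="The hadoop.version property should be set and should be $expected."
  echo "[WARNING] $msg"
fi
```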







[jira] [Created] (HDDS-567) Rename Mapping to ContainerManager in SCM

2018-10-02 Thread Nanda kumar (JIRA)
Nanda kumar created HDDS-567:


 Summary: Rename Mapping to ContainerManager in SCM
 Key: HDDS-567
 URL: https://issues.apache.org/jira/browse/HDDS-567
 Project: Hadoop Distributed Data Store
  Issue Type: Improvement
  Components: SCM
Reporter: Nanda kumar
Assignee: Nanda kumar


In SCM we have an interface named {{Mapping}} which is actually for container
management; it would be better to rename this interface to {{ContainerManager}}.






Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86

2018-10-02 Thread Apache Jenkins Server
For more details, see 
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/914/

[Oct 1, 2018 8:20:17 AM] (nanda) HDDS-325. Add event watcher for delete blocks 
command. Contributed by
[Oct 1, 2018 6:21:26 PM] (ajay) HDDS-557. DeadNodeHandler should handle 
exception from
[Oct 1, 2018 8:12:38 PM] (gifuma) YARN-8760. [AMRMProxy] Fix concurrent 
re-register due to YarnRM failover
[Oct 1, 2018 8:16:08 PM] (bharat) HDDS-525. Support virtual-hosted style URLs. 
Contributed by Bharat
[Oct 1, 2018 9:46:42 PM] (haibochen) YARN-8621. Add test coverage of custom 
Resource Types for the
[Oct 1, 2018 10:04:20 PM] (bharat) HDDS-562. Create acceptance test to test aws 
cli with the s3 gateway.
[Oct 2, 2018 12:49:48 AM] (tasanuma) HDFS-13943. [JDK10] Fix javadoc errors in 
hadoop-hdfs-client module.
[Oct 2, 2018 1:43:14 AM] (yqlin) HDFS-13768. Adding replicas to volume map 
makes DataNode start slowly.




-1 overall


The following subsystems voted -1:
asflicense findbugs hadolint pathlen unit xml


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

XML :

   Parsing Error(s): 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/public/crossdomain.xml
 

FindBugs :

   
module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-submarine
 
   
org.apache.hadoop.yarn.submarine.runtimes.yarnservice.YarnServiceUtils.getComponentArrayJson(String,
 int, String) concatenates strings using + in a loop At 
YarnServiceUtils.java:using + in a loop At YarnServiceUtils.java:[line 123] 

Failed CTEST tests :

   test_test_libhdfs_threaded_hdfs_static 
   test_libhdfs_threaded_hdfspp_test_shim_static 

Failed junit tests :

   hadoop.hdfs.client.impl.TestBlockReaderLocal 
   hadoop.hdfs.web.TestWebHdfsTimeouts 
   hadoop.hdfs.TestLeaseRecovery2 
   hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure 
   hadoop.yarn.server.nodemanager.containermanager.TestNMProxy 
   
hadoop.yarn.server.timelineservice.reader.TestTimelineReaderWebServicesHBaseStorage
 
  

   cc:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/914/artifact/out/diff-compile-cc-root.txt
  [4.0K]

   javac:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/914/artifact/out/diff-compile-javac-root.txt
  [300K]

   checkstyle:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/914/artifact/out/diff-checkstyle-root.txt
  [17M]

   hadolint:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/914/artifact/out/diff-patch-hadolint.txt
  [4.0K]

   pathlen:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/914/artifact/out/pathlen.txt
  [12K]

   pylint:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/914/artifact/out/diff-patch-pylint.txt
  [40K]

   shellcheck:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/914/artifact/out/diff-patch-shellcheck.txt
  [20K]

   shelldocs:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/914/artifact/out/diff-patch-shelldocs.txt
  [12K]

   whitespace:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/914/artifact/out/whitespace-eol.txt
  [9.4M]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/914/artifact/out/whitespace-tabs.txt
  [1.1M]

   xml:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/914/artifact/out/xml.txt
  [4.0K]

   findbugs:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/914/artifact/out/branch-findbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-applications_hadoop-yarn-submarine-warnings.html
  [8.0K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/914/artifact/out/branch-findbugs-hadoop-hdds_client.txt
  [4.0K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/914/artifact/out/branch-findbugs-hadoop-hdds_container-service.txt
  [8.0K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/914/artifact/out/branch-findbugs-hadoop-hdds_framework.txt
  [4.0K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/914/artifact/out/branch-findbugs-hadoop-hdds_server-scm.txt
  [12K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/914/artifact/out/branch-findbugs-hadoop-hdds_tools.txt
  [4.0K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/914/artifact/out/branch-findbugs-hadoop-ozone_client.txt
  [8.0K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/914/artifact/out/branch-findbugs-hadoop-ozone_common.txt
  [4.0K]
   

[jira] [Created] (HDFS-13951) HDFS DelegationTokenFetcher can't print non-HDFS tokens in a tokenfile

2018-10-02 Thread Steve Loughran (JIRA)
Steve Loughran created HDFS-13951:
-

 Summary: HDFS DelegationTokenFetcher can't print non-HDFS tokens 
in a tokenfile
 Key: HDFS-13951
 URL: https://issues.apache.org/jira/browse/HDFS-13951
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: tools
Affects Versions: 3.2.0
Reporter: Steve Loughran
Assignee: Steve Loughran


The fetchdt command can fetch tokens for filesystems other than HDFS (s3a,
abfs, etc.), but it can't print them: it assumes every token in the file is a
subclass of
{{org.apache.hadoop.hdfs.security.token.delegation.DelegationTokenIdentifier}}
and relies on that in its decoding, so it deserializes the token byte array
without checking the kind and ends up with invalid data.

Fix: ask each token to decode itself; only call toStableString() if it is an
HDFS token.
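A minimal sketch of the proposed decode path — dispatch on the token kind and only attempt HDFS-specific decoding for HDFS delegation tokens. The Token type and method names below are simplified stand-ins for illustration, not the real Hadoop classes:

```java
// Hypothetical sketch of the fix: check the token's kind before decoding.
// Non-HDFS tokens (s3a, abfs, ...) are printed generically instead of being
// misparsed as HDFS DelegationTokenIdentifier bytes. Stand-in types only.
import java.nio.charset.StandardCharsets;

public class TokenPrinterSketch {

    static final String HDFS_KIND = "HDFS_DELEGATION_TOKEN";

    // Stand-in for a serialized token: a kind plus an opaque identifier.
    static class Token {
        final String kind;
        final byte[] identifier;
        Token(String kind, byte[] identifier) {
            this.kind = kind;
            this.identifier = identifier;
        }
    }

    // Old behavior decoded every identifier as HDFS, yielding invalid data
    // for other kinds. New behavior dispatches on the kind first.
    static String print(Token t) {
        if (HDFS_KIND.equals(t.kind)) {
            // The real code would deserialize the HDFS identifier and use
            // its stable string form; here we just decode the bytes.
            return "HDFS token: " + new String(t.identifier, StandardCharsets.UTF_8);
        }
        // Non-HDFS tokens: report kind and size, never misparse the bytes.
        return t.kind + " token (" + t.identifier.length + " byte identifier)";
    }

    public static void main(String[] args) {
        Token hdfs = new Token(HDFS_KIND, "owner=alice".getBytes(StandardCharsets.UTF_8));
        Token s3a = new Token("S3ADelegationToken/Session", new byte[48]);
        System.out.println(print(hdfs));
        System.out.println(print(s3a));
    }
}
```

The key design point is that the per-kind decoding is reached only after the kind check, so unknown kinds degrade to a safe generic printout.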



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[jira] [Created] (HDFS-13950) ACL documentation update to indicate that ACL entries are capped by 32

2018-10-02 Thread Adam Antal (JIRA)
Adam Antal created HDFS-13950:
-

 Summary: ACL documentation update to indicate that ACL entries are 
capped by 32
 Key: HDFS-13950
 URL: https://issues.apache.org/jira/browse/HDFS-13950
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: hdfs
Reporter: Adam Antal
Assignee: Adam Antal


The Hadoop documentation does not mention that the ACL entries of a file or
directory are capped at 32. My proposal is to add a single line to the md file
informing users of this limit.

Remark: this is indeed the maximum, as AclTransformation.java sets

{code:java}
private static final int MAX_ENTRIES = 32;{code}
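The user-visible effect of the cap could also be worth documenting; a hypothetical sketch of the validation behavior (stand-in code, not the actual AclTransformation implementation — the real exception message may differ):

```java
// Hypothetical sketch of the 32-entry ACL cap: validation rejects any ACL
// whose entry count exceeds MAX_ENTRIES. Illustrative stand-in code only,
// not the actual HDFS AclTransformation implementation.
import java.util.ArrayList;
import java.util.List;

public class AclCapSketch {

    // Same constant value as AclTransformation.MAX_ENTRIES.
    private static final int MAX_ENTRIES = 32;

    static void validateAclCount(List<String> entries) {
        if (entries.size() > MAX_ENTRIES) {
            throw new IllegalArgumentException(
                "Invalid ACL: ACL has " + entries.size()
                + " entries, which exceeds maximum of " + MAX_ENTRIES + ".");
        }
    }

    public static void main(String[] args) {
        List<String> entries = new ArrayList<>();
        for (int i = 0; i < 33; i++) {
            entries.add("user:u" + i + ":rwx");
        }
        try {
            validateAclCount(entries); // 33 entries: rejected
        } catch (IllegalArgumentException e) {
            System.out.println(e.getMessage());
        }
    }
}
```

A user running setfacl against such a limit would see the operation fail rather than silently truncate, which is the behavior the doc line should call out.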

 


