Jenkins build is back to normal : Hadoop-Common-trunk #1754

2015-09-25 Thread Apache Jenkins Server
See 



Jenkins build is back to normal : Hadoop-common-trunk-Java8 #457

2015-09-25 Thread Apache Jenkins Server
See 



Re: Local repo sharing for maven builds

2015-09-25 Thread Vinayakumar B
Thanks Andrew,

Maybe we can try setting it to 1 executor and running that way for some time.
I think we also need to check which other jobs (Hadoop ecosystem jobs) run on
the Hadoop nodes. As HADOOP-11984 and HDFS-9139 are on the way to reducing
build time dramatically by enabling parallel tests, the HDFS and COMMON
precommit builds will not block other builds for long.

However, I don't have access to the Jenkins configuration. If I can get
access, I can reduce it myself and verify.


-Vinay

On Sat, Sep 26, 2015 at 7:49 AM, Andrew Wang 
wrote:

> Thanks for checking Vinay. As a temporary workaround, could we reduce the #
> of execs per node to 1? Our build queues are pretty short right now, so I
> don't think it would be too bad.
>
> Best,
> Andrew
>
> On Wed, Sep 23, 2015 at 12:18 PM, Vinayakumar B 
> wrote:
>
> > In case we are going to have a separate repo for each executor:
> >
> > I have checked: each Jenkins node is allocated 2 executors, so we only
> > need to create one more replica.
> >
> > Regards,
> > Vinay
> >
> > On Wed, Sep 23, 2015 at 7:33 PM, Steve Loughran 
> > wrote:
> >
> > >
> > > > On 22 Sep 2015, at 16:39, Colin P. McCabe  wrote:
> > > >
> > > >> ANNOUNCEMENT: new patches which contain hard-coded ports in test runs
> > > >> will henceforth be reverted. Jenkins matters more than the 30s of your
> > > >> time it takes to use the free port finder methods. Same for any
> > > >> hard-coded paths in filesystems.
> > > >
> > > > +1.  Can you add this to HowToContribute on the wiki?  Or should we
> > > > vote on it first?
> > >
> > > I don't think we need to vote on it: hard-coded ports should be
> > > something we veto on patches anyway.
> > >
> > > In https://issues.apache.org/jira/browse/HADOOP-12143 I propose having
> > > a better style guide in the docs.
> > >
> > >
> > >
> >
>
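
The "free port finder methods" mentioned above boil down to letting the OS hand
out an ephemeral port by binding to port 0 instead of hard-coding one. Below is
a minimal sketch using plain JDK APIs; the class and method names are
illustrative, not the actual Hadoop test helpers.

{code}
import java.io.IOException;
import java.net.ServerSocket;

public class FreePortFinder {
  /**
   * Ask the kernel for a currently free ephemeral port by binding to port 0.
   * Tests that bind to the returned port avoid colliding with other jobs
   * running on the same Jenkins node, unlike tests with hard-coded ports.
   */
  public static int findFreePort() throws IOException {
    try (ServerSocket socket = new ServerSocket(0)) {
      socket.setReuseAddress(true);
      return socket.getLocalPort();
    }
  }

  public static void main(String[] args) throws IOException {
    System.out.println("Free port: " + findFreePort());
  }
}
{code}

The port is only guaranteed free at the moment of the probe, so another process
can still grab it before the test binds; retrying around the actual bind is a
sensible precaution.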


Re: Local repo sharing for maven builds

2015-09-25 Thread Andrew Wang
Thanks for checking Vinay. As a temporary workaround, could we reduce the #
of execs per node to 1? Our build queues are pretty short right now, so I
don't think it would be too bad.

Best,
Andrew

On Wed, Sep 23, 2015 at 12:18 PM, Vinayakumar B 
wrote:

> In case we are going to have a separate repo for each executor:
>
> I have checked: each Jenkins node is allocated 2 executors, so we only need
> to create one more replica.
>
> Regards,
> Vinay
>
> On Wed, Sep 23, 2015 at 7:33 PM, Steve Loughran 
> wrote:
>
> >
> > > On 22 Sep 2015, at 16:39, Colin P. McCabe  wrote:
> > >
> > >> ANNOUNCEMENT: new patches which contain hard-coded ports in test runs
> > >> will henceforth be reverted. Jenkins matters more than the 30s of your
> > >> time it takes to use the free port finder methods. Same for any
> > >> hard-coded paths in filesystems.
> > >
> > > +1.  Can you add this to HowToContribute on the wiki?  Or should we
> > > vote on it first?
> >
> > I don't think we need to vote on it: hard-coded ports should be something
> > we veto on patches anyway.
> >
> > In https://issues.apache.org/jira/browse/HADOOP-12143 I propose having a
> > better style guide in the docs.
> >
> >
> >
>


Build failed in Jenkins: Hadoop-Common-trunk #1753

2015-09-25 Thread Apache Jenkins Server
See 

Changes:

[lei] HDFS-9132. Pass genstamp to ReplicaAccessorBuilder. (Colin Patrick McCabe 
via Lei (Eddy) Xu)

[lei] HDFS-9133. ExternalBlockReader and ReplicaAccessor need to return -1 on 
read when at EOF. (Colin Patrick McCabe via Lei (Eddy) Xu)

--
[...truncated 5229 lines...]
Running org.apache.hadoop.metrics2.lib.TestMutableMetrics
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 28.776 sec - in 
org.apache.hadoop.metrics2.lib.TestMutableMetrics
Running org.apache.hadoop.metrics2.lib.TestMetricsRegistry
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.435 sec - in 
org.apache.hadoop.metrics2.lib.TestMetricsRegistry
Running org.apache.hadoop.metrics2.lib.TestMetricsAnnotations
Tests run: 9, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.491 sec - in 
org.apache.hadoop.metrics2.lib.TestMetricsAnnotations
Running org.apache.hadoop.metrics2.lib.TestUniqNames
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.088 sec - in 
org.apache.hadoop.metrics2.lib.TestUniqNames
Running org.apache.hadoop.metrics2.lib.TestInterns
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.24 sec - in 
org.apache.hadoop.metrics2.lib.TestInterns
Running org.apache.hadoop.metrics2.source.TestJvmMetrics
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.562 sec - in 
org.apache.hadoop.metrics2.source.TestJvmMetrics
Running org.apache.hadoop.metrics2.filter.TestPatternFilter
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.508 sec - in 
org.apache.hadoop.metrics2.filter.TestPatternFilter
Running org.apache.hadoop.conf.TestConfigurationSubclass
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.473 sec - in 
org.apache.hadoop.conf.TestConfigurationSubclass
Running org.apache.hadoop.conf.TestGetInstances
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.365 sec - in 
org.apache.hadoop.conf.TestGetInstances
Running org.apache.hadoop.conf.TestConfigurationDeprecation
Tests run: 9, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 12.358 sec - in 
org.apache.hadoop.conf.TestConfigurationDeprecation
Running org.apache.hadoop.conf.TestDeprecatedKeys
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.823 sec - in 
org.apache.hadoop.conf.TestDeprecatedKeys
Running org.apache.hadoop.conf.TestConfiguration
Tests run: 62, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 30.748 sec - 
in org.apache.hadoop.conf.TestConfiguration
Running org.apache.hadoop.conf.TestReconfiguration
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.736 sec - in 
org.apache.hadoop.conf.TestReconfiguration
Running org.apache.hadoop.conf.TestConfServlet
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.832 sec - in 
org.apache.hadoop.conf.TestConfServlet
Running org.apache.hadoop.test.TestJUnitSetup
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.233 sec - in 
org.apache.hadoop.test.TestJUnitSetup
Running org.apache.hadoop.test.TestMultithreadedTestUtil
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.246 sec - in 
org.apache.hadoop.test.TestMultithreadedTestUtil
Running org.apache.hadoop.test.TestTimedOutTestsListener
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.299 sec - in 
org.apache.hadoop.test.TestTimedOutTestsListener
Running org.apache.hadoop.metrics.TestMetricsServlet
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.106 sec - in 
org.apache.hadoop.metrics.TestMetricsServlet
Running org.apache.hadoop.metrics.spi.TestOutputRecord
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.071 sec - in 
org.apache.hadoop.metrics.spi.TestOutputRecord
Running org.apache.hadoop.metrics.ganglia.TestGangliaContext
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.254 sec - in 
org.apache.hadoop.metrics.ganglia.TestGangliaContext
Running org.apache.hadoop.net.TestNetUtils
Tests run: 40, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.898 sec - in 
org.apache.hadoop.net.TestNetUtils
Running org.apache.hadoop.net.TestDNS
Tests run: 12, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.657 sec - in 
org.apache.hadoop.net.TestDNS
Running org.apache.hadoop.net.TestSocketIOWithTimeout
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.892 sec - in 
org.apache.hadoop.net.TestSocketIOWithTimeout
Running org.apache.hadoop.net.TestNetworkTopologyWithNodeGroup
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.381 sec - in 
org.apache.hadoop.net.TestNetworkTopologyWithNodeGroup
Running org.apache.hadoop.net.TestClusterTopology
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.364 sec - in 
org.apache.hadoop.net.TestClusterTopology
Running org.apache.hadoop.net.TestScriptBasedMappingWithDependency
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time el

Build failed in Jenkins: Hadoop-common-trunk-Java8 #456

2015-09-25 Thread Apache Jenkins Server
See 

Changes:

[cmccabe] HDFS-9107. Prevent NN's unrecoverable death spiral after full GC 
(Daryn Sharp via Colin P. McCabe)

[cmccabe] Add HDFS-9107 to CHANGES.txt

[lei] HDFS-9132. Pass genstamp to ReplicaAccessorBuilder. (Colin Patrick McCabe 
via Lei (Eddy) Xu)

[lei] HDFS-9133. ExternalBlockReader and ReplicaAccessor need to return -1 on 
read when at EOF. (Colin Patrick McCabe via Lei (Eddy) Xu)

--
[...truncated 5607 lines...]
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.841 sec - in 
org.apache.hadoop.io.TestSequenceFileAppend
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.io.TestBytesWritable
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.217 sec - in 
org.apache.hadoop.io.TestBytesWritable
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.io.TestSequenceFileSerialization
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.923 sec - in 
org.apache.hadoop.io.TestSequenceFileSerialization
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.io.TestDataByteBuffers
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.445 sec - in 
org.apache.hadoop.io.TestDataByteBuffers
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.io.file.tfile.TestTFileComparators
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.717 sec - in 
org.apache.hadoop.io.file.tfile.TestTFileComparators
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.io.file.tfile.TestTFileSeek
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.008 sec - in 
org.apache.hadoop.io.file.tfile.TestTFileSeek
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.io.file.tfile.TestTFileLzoCodecsByteArrays
Tests run: 25, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.331 sec - in 
org.apache.hadoop.io.file.tfile.TestTFileLzoCodecsByteArrays
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.io.file.tfile.TestTFileLzoCodecsStreams
Tests run: 19, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.315 sec - in 
org.apache.hadoop.io.file.tfile.TestTFileLzoCodecsStreams
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.io.file.tfile.TestTFileUnsortedByteArrays
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.886 sec - in 
org.apache.hadoop.io.file.tfile.TestTFileUnsortedByteArrays
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.io.file.tfile.TestTFileStreams
Tests run: 19, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.602 sec - in 
org.apache.hadoop.io.file.tfile.TestTFileStreams
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.io.file.tfile.TestTFile
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.055 sec - in 
org.apache.hadoop.io.file.tfile.TestTFile
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running 
org.apache.hadoop.io.file.tfile.TestTFileNoneCodecsJClassComparatorByteArrays
Tests run: 25, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.552 sec - in 
org.apache.hadoop.io.file.tfile.TestTFileNoneCodecsJClassComparatorByteArrays
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.io.file.tfile.TestTFileJClassComparatorByteArrays
Tests run: 25, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.41 sec - in 
org.apache.hadoop.io.file.tfile.TestTFileJClassComparatorByteArrays
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.io.file.tfile.TestTFileNoneCodecsStreams
Tests run: 19, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.361 sec - in 
org.apache.hadoop.io.file.tfile.TestTFileNoneCodecsStreams
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.io.file.tfile.TestTFileSplit
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.102 sec - in 
org.apache.hadoop.io.file.tfile.TestTFileSplit
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running or

Re: Planning for Apache Hadoop 2.6.2

2015-09-25 Thread Sangjin Lee
Per Vinod's suggestion, in order to reduce the amount of movement I'll pick
commits from branch-2.6 onto the tip of branch-2.6.1 rather than the other
way around. This means I'll need to move branch-2.6 and force push that
change.

Could you please hold off on committing to branch-2.6 until I am done
relocating the branch? I'll let you know when I'm done with that exercise.
I expect I'll be done with this in 24 hours or so. Let me know if you have
any concerns.

Thanks,
Sangjin

On Fri, Sep 25, 2015 at 12:53 PM, Sangjin Lee  wrote:

> Thanks folks. I'll get started on the items Vinod mentioned soon.
>
> If you have something you'd like to push for inclusion in 2.6.2, please
> mark the target version as 2.6.2.
>
> I'd like to ask one more thing on top of that. It would be AWESOME if you
> could check whether the patch applies cleanly on top of 2.6.1 and, if not,
> provide an updated patch suitable for 2.6.1. This will help speed up the
> 2.6.2 release process tremendously. If the person recommending the JIRA for
> 2.6.2 inclusion can do it, that would be great. Help from the original
> contributor would be welcome as well. Thanks for your cooperation!
>
> Regards,
> Sangjin
>
> On Thu, Sep 24, 2015 at 9:39 PM, Akira AJISAKA  wrote:
>
>> Thanks Vinod and Sangjin for releasing 2.6.1 and starting discussion for
>> 2.6.2!
>>
>> +1. If there's anything I can help you with, please tell me.
>>
>> Thanks,
>> Akira
>>
>>
>> On 9/25/15 13:23, Vinayakumar B wrote:
>>
>>> Thanks Vinod and Sangjin for making 2.6.1 release possible.
>>>
>>> Apologies for not getting time to verify and vote for the release.
>>>
>>> I will also be available to help for 2.6.2 if anything is required.
>>>
>>> Thanks,
>>> Vinay
>>>
>>> On Fri, Sep 25, 2015 at 12:16 AM, Vinod Vavilapalli <
>>> vino...@hortonworks.com> wrote:
>>>
>>>> +1. Please take it over, I’ll stand by for any help needed.
>>>>
>>>> Thanks
>>>> +Vinod
>>>>
>>>>
>>>> On Sep 24, 2015, at 11:34 AM, Sangjin Lee <sj...@apache.org> wrote:
>>>>
>>>> I'd like to volunteer as the release manager for 2.6.2 unless there is
>>>> an objection.
>>>>
>>>
>>
>


Build failed in Jenkins: Hadoop-Common-trunk #1752

2015-09-25 Thread Apache Jenkins Server
See 

Changes:

[jing9] HDFS-9112. Improve error message for Haadmin when multiple name service 
IDs are configured. Contributed by Anu Engineer.

[rkanter] MAPREDUCE-6480. archive-logs tool may miss applications (rkanter)

[cmccabe] HDFS-9107. Prevent NN's unrecoverable death spiral after full GC 
(Daryn Sharp via Colin P. McCabe)

[cmccabe] Add HDFS-9107 to CHANGES.txt

--
[...truncated 8754 lines...]

[jira] [Created] (HADOOP-12441) Fix kill command execution under Ubuntu 12

2015-09-25 Thread Wangda Tan (JIRA)
Wangda Tan created HADOOP-12441:
---

 Summary: Fix kill command execution under Ubuntu 12
 Key: HADOOP-12441
 URL: https://issues.apache.org/jira/browse/HADOOP-12441
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Wangda Tan
Priority: Blocker


After HADOOP-12317, execution of the kill command fails under Ubuntu 12.
After the NM restarts, it cannot determine whether a process is alive via the
pid of its containers, and it cannot kill the process correctly when the RM/AM
tells the NM to kill a container.

Logs from NM (customized logs):
{code}
2015-09-25 21:58:59,348 INFO  nodemanager.DefaultContainerExecutor 
(DefaultContainerExecutor.java:containerIsAlive(431)) -  == 
check alive cmd:[[Ljava.lang.String;@496e442d]
2015-09-25 21:58:59,349 INFO  nodemanager.NMAuditLogger 
(NMAuditLogger.java:logSuccess(89)) - USER=hrt_qa   IP=10.0.1.14
OPERATION=Stop Container RequestTARGET=ContainerManageImpl  
RESULT=SUCCESS  APPID=application_1443218269460_0001
CONTAINERID=container_1443218269460_0001_01_01
2015-09-25 21:58:59,363 INFO  nodemanager.DefaultContainerExecutor 
(DefaultContainerExecutor.java:containerIsAlive(438)) -  
===
ExitCodeException exitCode=1: ERROR: garbage process ID "--".
Usage:
  kill pid ...  Send SIGTERM to every process listed.
  kill signal pid ...   Send a signal to every process listed.
  kill -s signal pid ...Send a signal to every process listed.
  kill -l   List all signal names.
  kill -L   List all signal names in a nice table.
  kill -l signalConvert between signal numbers and names.

at org.apache.hadoop.util.Shell.runCommand(Shell.java:550)
at org.apache.hadoop.util.Shell.run(Shell.java:461)
at 
org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:727)
at 
org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.containerIsAlive(DefaultContainerExecutor.java:432)
at 
org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.signalContainer(DefaultContainerExecutor.java:401)
at 
org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.cleanupContainer(ContainerLaunch.java:419)
at 
org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainersLauncher.handle(ContainersLauncher.java:139)
at 
org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainersLauncher.handle(ContainersLauncher.java:55)
at 
org.apache.hadoop.yarn.event.AsyncDispatcher.dispatch(AsyncDispatcher.java:175)
at 
org.apache.hadoop.yarn.event.AsyncDispatcher$1.run(AsyncDispatcher.java:108)
at java.lang.Thread.run(Thread.java:745)
{code}
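
The containerIsAlive check that fails in the log above boils down to running a
process liveness command (on Linux, typically kill -0 <pid>) through Hadoop's
shell utilities and treating a non-zero exit code as "not alive". Below is a
rough sketch of that idea using the Shell.ShellCommandExecutor class visible in
the stack trace; the exact command the executor builds may differ, so treat
this as illustrative only.

{code}
import java.io.IOException;

import org.apache.hadoop.util.Shell.ExitCodeException;
import org.apache.hadoop.util.Shell.ShellCommandExecutor;

public class ProcessAliveCheck {
  /**
   * "kill -0 pid" sends no signal; it only verifies that the pid exists and
   * can be signalled. An exit code of 0 means the process is alive.
   */
  public static boolean isAlive(String pid) throws IOException {
    ShellCommandExecutor shexec =
        new ShellCommandExecutor(new String[] { "kill", "-0", pid });
    try {
      shexec.execute();        // throws ExitCodeException on non-zero exit
      return true;
    } catch (ExitCodeException e) {
      // Non-zero exit: either no such process exists, or kill rejected an
      // argument it does not understand, as Ubuntu 12's kill does with "--"
      // in the log above.
      return false;
    }
  }

  public static void main(String[] args) throws IOException {
    String pid = args.length > 0 ? args[0] : "1"; // pid 1 normally exists
    System.out.println("pid " + pid + " alive: " + isAlive(pid));
  }
}
{code}

Either failure mode leaves the NM unable to tell whether the container process
is alive, which matches the behaviour described above.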



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HADOOP-12440) TestRPC#testRPCServerShutdown did not produce the desired thread states before shutting down

2015-09-25 Thread Xiao Chen (JIRA)
Xiao Chen created HADOOP-12440:
--

 Summary: TestRPC#testRPCServerShutdown did not produce the desired 
thread states before shutting down
 Key: HADOOP-12440
 URL: https://issues.apache.org/jira/browse/HADOOP-12440
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Xiao Chen
Assignee: Xiao Chen
Priority: Minor






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Planning Apache Hadoop 2.7.2

2015-09-25 Thread Vinod Kumar Vavilapalli
Hi all,

We released 2.7.1 nearly 2.5 months ago. I got caught up with a very long
release process for 2.6.1, so I couldn't make progress on 2.7.2. Now is the
time!

Things to do

 (#1) Branch
-- Branch 2.7 has been open to 2.7.2 commits for a while.
-- In order to converge on a release, I will branch out 2.7.2 soon.

  (#2) Patches
-- 2.7.2 already has a boat load [1] of fixes.
-- The list of open blocker / critical tickets [2] is not small. I'll
start triaging and see what can make it in over the next week or so.

  (#3) Release
-- Even if we can get half of the blocker / critical tickets in, the
full list [3] will be big enough for us to start voting on an RC in no less
than a week.
-- Leaving aside some buffer time, I plan to start the RC process by the
end of the first week of October.

Thoughts?

Appreciate help in moving open tickets [2] forward.

A general note:  Please consider putting any critical / blocker tickets on
2.8 into 2.6.2 and 2.7.2 releases.

Thanks
+Vinod

[1] 2.7.2 Fixed Tickets:
https://issues.apache.org/jira/issues/?filter=12333473
[2] 2.7.2 Open Blockers / Critical Tickets:
https://issues.apache.org/jira/issues/?filter=12332867
[3] 2.7.2 Release Tickets:
https://issues.apache.org/jira/issues/?filter=12333461


Re: Planning for Apache Hadoop 2.6.2

2015-09-25 Thread Sangjin Lee
Thanks folks. I'll get started on the items Vinod mentioned soon.

If you have something you'd like to push for inclusion in 2.6.2, please
mark the target version as 2.6.2.

I'd like to ask one more thing on top of that. It would be AWESOME if you
could check whether the patch applies cleanly on top of 2.6.1 and, if not,
provide an updated patch suitable for 2.6.1. This will help speed up the
2.6.2 release process tremendously. If the person recommending the JIRA for
2.6.2 inclusion can do it, that would be great. Help from the original
contributor would be welcome as well. Thanks for your cooperation!

Regards,
Sangjin

On Thu, Sep 24, 2015 at 9:39 PM, Akira AJISAKA 
wrote:

> Thanks Vinod and Sangjin for releasing 2.6.1 and starting discussion for
> 2.6.2!
>
> +1. If there's anything I can help you with, please tell me.
>
> Thanks,
> Akira
>
>
> On 9/25/15 13:23, Vinayakumar B wrote:
>
>> Thanks Vinod and Sangjin for making 2.6.1 release possible.
>>
>> Apologies for not getting time to verify and vote for the release.
>>
>> I will also be available to help for 2.6.2 if anything is required.
>>
>> Thanks,
>> Vinay
>>
>> On Fri, Sep 25, 2015 at 12:16 AM, Vinod Vavilapalli <
>> vino...@hortonworks.com> wrote:
>>
>>> +1. Please take it over, I’ll stand by for any help needed.
>>>
>>> Thanks
>>> +Vinod
>>>
>>>
>>> On Sep 24, 2015, at 11:34 AM, Sangjin Lee <sj...@apache.org> wrote:
>>>
>>> I'd like to volunteer as the release manager for 2.6.2 unless there is an
>>> objection.
>>>
>>>
>>>
>>
>


Re: reviews for HADOOP-12178

2015-09-25 Thread Zhihai Xu
Yes, I just did a quick review for HADOOP-12178.

Regards
zhihai

On Fri, Sep 25, 2015 at 11:13 AM, Steve Loughran 
wrote:

> Can I get a quick review of:
> https://issues.apache.org/jira/browse/HADOOP-12178
>
> I'm talking about Hadoop & Kerberos next week and I'd like to have fewer
> open patches related to reporting problems in the security codebase.
>
> thanks
>
> -steve
>


reviews for HADOOP-12178

2015-09-25 Thread Steve Loughran
Can I get a quick review of:
https://issues.apache.org/jira/browse/HADOOP-12178

I'm talking about Hadoop & Kerberos next week and I'd like to have fewer open
patches related to reporting problems in the security codebase.

thanks

-steve