Re: [VOTE] Release Apache Hadoop 2.7.2 RC1

2015-12-24 Thread Akira AJISAKA

Thanks Vinod for starting the discussion.
I'm +1 for cherry-picking these issues to 2.7.2.

As Andrew said, users who upgrade Hadoop from 2.6.3 to 2.7.2
can hit these issues.
I think we should reduce regressions as much as possible.

Regards,
Akira

On 12/25/15 01:56, Andrew Wang wrote:

My 2c is that we should have monotonicity in releases. That way no
"upgrade" is a regression.

On Wed, Dec 23, 2015 at 10:00 PM, Tsuyoshi Ozawa  wrote:


Hi Vinod,

thank you for the clarification.


  - Pull these 16 tickets into 2.7.2 and roll a new RC

What do people think? Do folks expect “any fix in 2.6.3 to be there in
all releases that get out after 2.6.3 release date (December 16th)”?

I personally prefer pulling these tickets into 2.7.2, since that is
more intuitive to me. I can help cherry-pick these tickets into 2.7.2
once we decide to do so.

This conflict happened because the timing of cutting branches and the
actual releases crossed. We will likely face these situations regularly
in the future, since we have two or more branches for stable releases.
Hence, now is a good time to decide on a basic policy.

BTW, should we start a new thread for this discussion, or continue here?

Thanks,
- Tsuyoshi

On Thu, Dec 24, 2015 at 9:47 AM, Vinod Kumar Vavilapalli wrote:

I retract my -1. I think we will need to discuss this a bit more.

Beyond those two tickets, there are a bunch more (totaling 16) that
are in 2.6.3 but *not* in 2.7.2. See this:
https://issues.apache.org/jira/issues/?jql=key%20in%20(HADOOP-12526,HADOOP-12413,HADOOP-11267,HADOOP-10668,HADOOP-10134,YARN-4434,YARN-4365,YARN-4348,YARN-4344,YARN-4326,YARN-4241,YARN-2859,MAPREDUCE-6549,MAPREDUCE-6540,MAPREDUCE-6377,MAPREDUCE-5883,HDFS-9431,HDFS-9289,HDFS-8615)%20and%20fixVersion%20!=%202.7.0



Two options here, depending on the importance of 'causality' between
2.6.x and 2.7.x lines.

  - Ship 2.7.2 as we voted on here
  - Pull these 16 tickets into 2.7.2 and roll a new RC

What do people think? Do folks expect “any fix in 2.6.3 to be there in
all releases that get out after 2.6.3 release date (December 16th)”?


Thanks
+Vinod


On Dec 23, 2015, at 12:37 PM, Vinod Kumar Vavilapalli <vino...@apache.org> wrote:


Sigh. Missed this.

To retain causality ("any fix in 2.6.3 will be there in all releases
that got out after 2.6.3"), I'll get these patches in.


Reverting my +1, and casting -1 for the RC myself.

Will spin a new RC; this voting thread is marked dead.

Thanks
+Vinod


On Dec 22, 2015, at 8:24 AM, Junping Du <j...@hortonworks.com> wrote:


However, when I looked at our commit log and CHANGES.txt, I found
some things we are missing:

1. HDFS-9470 and YARN-4424 are missing from the 2.7.2 branch and the RC1 tag.

2. HADOOP-5323 and HDFS-8767 are missing in CHANGES.txt.


[jira] [Resolved] (HADOOP-12676) Inconsistent assumptions of the default keytab file of Kerberos

2015-12-24 Thread Tianyin Xu (JIRA)

 [ https://issues.apache.org/jira/browse/HADOOP-12676?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Tianyin Xu resolved HADOOP-12676.
-
Resolution: Invalid

> Inconsistent assumptions of the default keytab file of Kerberos
> ---
>
> Key: HADOOP-12676
> URL: https://issues.apache.org/jira/browse/HADOOP-12676
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.7.1, 2.6.2
>Reporter: Tianyin Xu
>Assignee: Tianyin Xu
>Priority: Minor
>
> In the current implementation of {{SecurityUtil}}, we do not consider the
> default keytab file of Kerberos (which is {{/etc/krb5.keytab}} in
> [MIT Kerberos defaults|http://web.mit.edu/kerberos/krb5-1.13/doc/mitK5defaults.html#paths]).
> If the user does not set the keytab file, an {{IOException}} will be thrown.
> {code:title=SecurityUtil.java|borderStyle=solid}
> 230   public static void login(final Configuration conf,
> 231       final String keytabFileKey, final String userNameKey, String hostname)
> 232       throws IOException {
> ...
> 237     String keytabFilename = conf.get(keytabFileKey);
> 238     if (keytabFilename == null || keytabFilename.length() == 0) {
> 239       throw new IOException("Running in secure mode, but config doesn't have a keytab");
> 240     }
> {code}
> However, the default keytab location is assumed by some of the callers. For
> example, in
> [{{yarn-default.xml}}|https://hadoop.apache.org/docs/r2.7.1/hadoop-yarn/hadoop-yarn-common/yarn-default.xml],
> the following defaults are documented:
> ||property||default||
> |yarn.resourcemanager.keytab|/etc/krb5.keytab|
> |yarn.nodemanager.keytab|/etc/krb5.keytab|
> |yarn.timeline-service.keytab|/etc/krb5.keytab|
> On the other hand, these callers call the {{SecurityUtil.login}} method
> directly; therefore, the docs are incorrect: the actual defaults are
> {{null}} (as we do not have a default)...
> {code:title=NodeManager.java|borderStyle=solid}
>   protected void doSecureLogin() throws IOException {
> SecurityUtil.login(getConfig(), YarnConfiguration.NM_KEYTAB,
> YarnConfiguration.NM_PRINCIPAL);
>   }
> {code}
> I don't know whether we should make {{/etc/krb5.keytab}} the default in
> {{SecurityUtil}}, or ask the callers to correct their assumptions. I'm
> posting this as a minor issue.
> Thanks!
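
For illustration only (not part of this JIRA), a minimal caller-side sketch of the first option: a hypothetical helper that falls back to the MIT default keytab path when the config key is unset, instead of letting {{SecurityUtil.login}} throw. The helper name {{loginWithDefaultKeytab}} and the constant are assumptions for illustration, not existing API:

{code:title=Caller-side fallback (sketch)|borderStyle=solid}
// Assumes: import java.io.IOException;
//          import org.apache.hadoop.conf.Configuration;
//          import org.apache.hadoop.security.SecurityUtil;

// MIT Kerberos default keytab path (see the MIT defaults page above).
private static final String DEFAULT_KEYTAB = "/etc/krb5.keytab";

// Hypothetical wrapper: fill in the MIT default before delegating to
// SecurityUtil.login, so an unset key no longer triggers the
// "Running in secure mode, but config doesn't have a keytab" IOException.
static void loginWithDefaultKeytab(Configuration conf, String keytabFileKey,
    String userNameKey, String hostname) throws IOException {
  String keytab = conf.get(keytabFileKey);
  if (keytab == null || keytab.isEmpty()) {
    conf.set(keytabFileKey, DEFAULT_KEYTAB);
  }
  SecurityUtil.login(conf, keytabFileKey, userNameKey, hostname);
}
{code}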



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: [VOTE] Release Apache Hadoop 2.7.2 RC1

2015-12-24 Thread Andrew Wang
My 2c is that we should have monotonicity in releases. That way no
"upgrade" is a regression.

On Wed, Dec 23, 2015 at 10:00 PM, Tsuyoshi Ozawa  wrote:

> Hi Vinod,
>
> thank you for the clarification.
>
> >  - Pull these 16 tickets into 2.7.2 and roll a new RC
> >
> > What do people think? Do folks expect “any fix in 2.6.3 to be there in
> > all releases that get out after 2.6.3 release date (December 16th)”?
>
> I personally prefer pulling these tickets into 2.7.2, since that is
> more intuitive to me. I can help cherry-pick these tickets into 2.7.2
> once we decide to do so.
>
> This conflict happened because the timing of cutting branches and the
> actual releases crossed. We will likely face these situations regularly
> in the future, since we have two or more branches for stable releases.
> Hence, now is a good time to decide on a basic policy.
>
> BTW, should we start a new thread for this discussion, or continue here?
>
> Thanks,
> - Tsuyoshi
>
> On Thu, Dec 24, 2015 at 9:47 AM, Vinod Kumar Vavilapalli wrote:
> > I retract my -1. I think we will need to discuss this a bit more.
> >
> > Beyond those two tickets, there are a bunch more (totaling 16) that
> > are in 2.6.3 but *not* in 2.7.2. See this:
> > https://issues.apache.org/jira/issues/?jql=key%20in%20(HADOOP-12526,HADOOP-12413,HADOOP-11267,HADOOP-10668,HADOOP-10134,YARN-4434,YARN-4365,YARN-4348,YARN-4344,YARN-4326,YARN-4241,YARN-2859,MAPREDUCE-6549,MAPREDUCE-6540,MAPREDUCE-6377,MAPREDUCE-5883,HDFS-9431,HDFS-9289,HDFS-8615)%20and%20fixVersion%20!=%202.7.0
> >
> > Two options here, depending on the importance of 'causality' between
> > 2.6.x and 2.7.x lines.
> >  - Ship 2.7.2 as we voted on here
> >  - Pull these 16 tickets into 2.7.2 and roll a new RC
> >
> > What do people think? Do folks expect “any fix in 2.6.3 to be there in
> > all releases that get out after 2.6.3 release date (December 16th)”?
> >
> > Thanks
> > +Vinod
> >
> >> On Dec 23, 2015, at 12:37 PM, Vinod Kumar Vavilapalli <vino...@apache.org> wrote:
> >>
> >> Sigh. Missed this.
> >>
> >> To retain causality ("any fix in 2.6.3 will be there in all releases
> >> that got out after 2.6.3"), I'll get these patches in.
> >>
> >> Reverting my +1, and casting -1 for the RC myself.
> >>
> >> Will spin a new RC; this voting thread is marked dead.
> >>
> >> Thanks
> >> +Vinod
> >>
> >>> On Dec 22, 2015, at 8:24 AM, Junping Du <j...@hortonworks.com> wrote:
> >>>
> >>> However, when I looked at our commit log and CHANGES.txt, I found
> >>> some things we are missing:
> >>> 1. HDFS-9470 and YARN-4424 are missing from the 2.7.2 branch and the RC1 tag.
> >>> 2. HADOOP-5323 and HDFS-8767 are missing in CHANGES.txt.
> >>
> >
>


Jenkins build is back to normal : Hadoop-Common-trunk #2152

2015-12-24 Thread Apache Jenkins Server
See 



Build failed in Jenkins: Hadoop-common-trunk-Java8 #856

2015-12-24 Thread Apache Jenkins Server
See 

Changes:

[ozawa] YARN-4234. addendum patch to remove unnecessary file. Contributed by

--
[...truncated 5818 lines...]
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0 (printed before each test run below)
Tests run: 25, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.368 sec - in org.apache.hadoop.io.file.tfile.TestTFileLzoCodecsByteArrays
Running org.apache.hadoop.io.file.tfile.TestTFileLzoCodecsStreams
Tests run: 19, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.351 sec - in org.apache.hadoop.io.file.tfile.TestTFileLzoCodecsStreams
Running org.apache.hadoop.io.file.tfile.TestTFileUnsortedByteArrays
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.808 sec - in org.apache.hadoop.io.file.tfile.TestTFileUnsortedByteArrays
Running org.apache.hadoop.io.file.tfile.TestTFileStreams
Tests run: 19, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.257 sec - in org.apache.hadoop.io.file.tfile.TestTFileStreams
Running org.apache.hadoop.io.file.tfile.TestTFile
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.067 sec - in org.apache.hadoop.io.file.tfile.TestTFile
Running org.apache.hadoop.io.file.tfile.TestTFileNoneCodecsJClassComparatorByteArrays
Tests run: 25, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.154 sec - in org.apache.hadoop.io.file.tfile.TestTFileNoneCodecsJClassComparatorByteArrays
Running org.apache.hadoop.io.file.tfile.TestTFileJClassComparatorByteArrays
Tests run: 25, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.028 sec - in org.apache.hadoop.io.file.tfile.TestTFileJClassComparatorByteArrays
Running org.apache.hadoop.io.file.tfile.TestTFileNoneCodecsStreams
Tests run: 19, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.209 sec - in org.apache.hadoop.io.file.tfile.TestTFileNoneCodecsStreams
Running org.apache.hadoop.io.file.tfile.TestTFileSplit
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.828 sec - in org.apache.hadoop.io.file.tfile.TestTFileSplit
Running org.apache.hadoop.io.file.tfile.TestTFileComparator2
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.988 sec - in org.apache.hadoop.io.file.tfile.TestTFileComparator2
Running org.apache.hadoop.io.file.tfile.TestTFileNoneCodecsByteArrays
Tests run: 25, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.245 sec - in org.apache.hadoop.io.file.tfile.TestTFileNoneCodecsByteArrays
Running org.apache.hadoop.io.file.tfile.TestTFileByteArrays
Tests run: 25, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.997 sec - in org.apache.hadoop.io.file.tfile.TestTFileByteArrays
Running org.apache.hadoop.io.file.tfile.TestTFileSeqFileComparison
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.479 sec - in org.apache.hadoop.io.file.tfile.TestTFileSeqFileComparison
Running org.apache.hadoop.io.file.tfile.TestVLong
Tests run: 9, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.475 sec - in org.apache.hadoop.io.file.tfile.TestVLong
Running org.apache.hadoop.io.TestTextNonUTF8
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.187 sec - in org.apache.hadoop.io.TestTextNonUTF8
Running org.apache.hadoop.io.TestArrayWritable
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.194 sec - in org.apache.hadoop.io.TestArrayWritable
Runn

[jira] [Resolved] (HADOOP-12679) while performing the simple queries on s3 table through hive ,got exception:-Error: java.io.IOException: java.lang.reflect.InvocationTargetException

2015-12-24 Thread Steve Loughran (JIRA)

 [ https://issues.apache.org/jira/browse/HADOOP-12679?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Steve Loughran resolved HADOOP-12679.
-
Resolution: Invalid

{{Class org.apache.hadoop.fs.s3native.NativeS3FileSystem not found}}

means your classpath lacks the hadoop-aws and/or jets3t JARs. This is not a Hadoop bug;
it is a problem with your Hive configuration.

Closing as invalid. Sorry.

FYI, see: http://wiki.apache.org/hadoop/InvalidJiraIssues
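
As a quick self-check, here is a minimal sketch (not from this JIRA) that performs the same class lookup {{Configuration.getClass}} does internally. The class name {{CheckS3Classpath}} is an assumption for illustration; run it with the same classpath Hive hands to its tasks, and a {{ClassNotFoundException}} confirms the missing JAR:

{code:title=CheckS3Classpath.java (sketch)|borderStyle=solid}
import org.apache.hadoop.conf.Configuration;

public class CheckS3Classpath {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    // Same lookup Configuration.getClass() performs under the hood;
    // throws ClassNotFoundException if hadoop-aws is absent.
    Class<?> cls = conf.getClassByName(
        "org.apache.hadoop.fs.s3native.NativeS3FileSystem");
    // Report which JAR the class came from (getCodeSource() can be
    // null for bootstrap classes, but not for a class loaded from a JAR).
    System.out.println(cls.getName() + " loaded from "
        + cls.getProtectionDomain().getCodeSource().getLocation());
  }
}
{code}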

> while performing the simple queries on s3 table through hive ,got 
> exception:-Error: java.io.IOException: 
> java.lang.reflect.InvocationTargetException
> 
>
> Key: HADOOP-12679
> URL: https://issues.apache.org/jira/browse/HADOOP-12679
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Affects Versions: 2.7.1
> Environment: Hadoop 2.7.1
> Hive 1.2.1
>Reporter: Rajanmbx
>  Labels: test
> Fix For: 2.7.1
>
>
> I am importing S3 tables through Hive, i.e., creating an external table
> with an S3 server location.
> The table is created and the data shows, but when I query the table
> (e.g., count, where id=>'10'),
> it hits an exception as follows:
> Diagnostic Messages for this Task:
> Error: java.io.IOException: java.lang.reflect.InvocationTargetException
>   at 
> org.apache.hadoop.hive.io.HiveIOExceptionHandlerChain.handleRecordReaderCreationException(HiveIOExceptionHandlerChain.java:97)
>   at 
> org.apache.hadoop.hive.io.HiveIOExceptionHandlerUtil.handleRecordReaderCreationException(HiveIOExceptionHandlerUtil.java:57)
>   at 
> org.apache.hadoop.hive.shims.HadoopShimsSecure$CombineFileRecordReader.initNextRecordReader(HadoopShimsSecure.java:266)
>   at 
> org.apache.hadoop.hive.shims.HadoopShimsSecure$CombineFileRecordReader.<init>(HadoopShimsSecure.java:213)
>   at 
> org.apache.hadoop.hive.shims.HadoopShimsSecure$CombineFileInputFormatShim.getRecordReader(HadoopShimsSecure.java:333)
>   at 
> org.apache.hadoop.hive.ql.io.CombineHiveInputFormat.getRecordReader(CombineHiveInputFormat.java:719)
>   at 
> org.apache.hadoop.mapred.MapTask$TrackedRecordReader.<init>(MapTask.java:169)
>   at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:432)
>   at org.apache.hadoop.mapred.MapTask.run(MapTask.java:343)
>   at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:164)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:415)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
>   at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:158)
> Caused by: java.lang.reflect.InvocationTargetException
>   at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>   at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
>   at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>   at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
>   at 
> org.apache.hadoop.hive.shims.HadoopShimsSecure$CombineFileRecordReader.initNextRecordReader(HadoopShimsSecure.java:252)
>   ... 11 more
> Caused by: java.lang.RuntimeException: java.lang.ClassNotFoundException: 
> Class org.apache.hadoop.fs.s3native.NativeS3FileSystem not found
>   at 
> org.apache.hadoop.conf.Configuration.getClass(Configuration.java:2195)
>   at 
> org.apache.hadoop.fs.FileSystem.getFileSystemClass(FileSystem.java:2638)
>   at 
> org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2651)
>   at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:92)
>   at 
> org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2687)
>   at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2669)
>   at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:371)
>   at org.apache.hadoop.fs.Path.getFileSystem(Path.java:295)
>   at 
> org.apache.hadoop.mapred.LineRecordReader.<init>(LineRecordReader.java:107)
>   at 
> org.apache.hadoop.mapred.TextInputFormat.getRecordReader(TextInputFormat.java:67)
>   at 
> org.apache.hadoop.hive.ql.io.CombineHiveRecordReader.<init>(CombineHiveRecordReader.java:67)
>   ... 16 more
> Caused by: java.lang.ClassNotFoundException: Class 
> org.apache.hadoop.fs.s3native.NativeS3FileSystem not found
>   at 
> org.apache.hadoop.conf.Configuration.getClassByName(Configuration.java:2101)
>   at 
> org.apache.hadoop.conf.Configuration.getClass(Configuration.java:2193)
>   ... 26 more
> FAILED: Execution Error, return code 2 from 
> org.apache.hadoop.hive.ql.exec.mr.MapRedTask
> MapR