Build failed in Jenkins: Hadoop-Common-trunk #2147

2015-12-23 Thread Apache Jenkins Server
See 

Changes:

[junping_du] YARN-4234. New put APIs in TimelineClient for ats v1.5. 
Contributed by

--
[...truncated 3875 lines...]
Generating 

Building index for all classes...
Generating 

Generating 

Generating 

Generating 

Generating 

[INFO] Building jar: 

[INFO] 
[INFO] 
[INFO] Building Apache Hadoop MiniKDC 3.0.0-SNAPSHOT
[INFO] 
[INFO] 
[INFO] --- maven-clean-plugin:2.5:clean (default-clean) @ hadoop-minikdc ---
[INFO] Deleting 

[INFO] 
[INFO] --- findbugs-maven-plugin:3.0.0:findbugs (default-cli) @ hadoop-minikdc 
---
[INFO] 
[INFO] --- maven-checkstyle-plugin:2.15:checkstyle (default-cli) @ 
hadoop-minikdc ---
[INFO] There are 9 errors reported by Checkstyle 6.6 with 
checkstyle/checkstyle.xml ruleset.
[WARNING] Unable to locate Source XRef to link to - DISABLED
[INFO] 
[INFO] --- maven-antrun-plugin:1.7:run (create-testdirs) @ hadoop-minikdc ---
[INFO] Executing tasks

main:
[mkdir] Created dir: 

[INFO] Executed tasks
[INFO] 
[INFO] --- maven-resources-plugin:2.6:resources (default-resources) @ 
hadoop-minikdc ---
[INFO] Using 'UTF-8' encoding to copy filtered resources.
[INFO] Copying 3 resources
[INFO] 
[INFO] --- maven-compiler-plugin:3.1:compile (default-compile) @ hadoop-minikdc 
---
[INFO] Changes detected - recompiling the module!
[INFO] Compiling 2 source files to 

[INFO] 
[INFO] --- maven-resources-plugin:2.6:testResources (default-testResources) @ 
hadoop-minikdc ---
[INFO] Using 'UTF-8' encoding to copy filtered resources.
[INFO] skip non existing resourceDirectory 

[INFO] 
[INFO] --- maven-compiler-plugin:3.1:testCompile (default-testCompile) @ 
hadoop-minikdc ---
[INFO] Changes detected - recompiling the module!
[INFO] Compiling 2 source files to 

[INFO] 
[INFO] --- maven-surefire-plugin:2.17:test (default-test) @ hadoop-minikdc ---
[INFO] Surefire report directory: 


---
 T E S T S
---

Running org.apache.hadoop.minikdc.TestMiniKdc
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 18.171 sec - in 
org.apache.hadoop.minikdc.TestMiniKdc
Running org.apache.hadoop.minikdc.TestChangeOrgNameAndDomain
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 12.993 sec - in 
org.apache.hadoop.minikdc.TestChangeOrgNameAndDomain

Results :

Tests run: 6, Failures: 0, Errors: 0, Skipped: 0

[INFO] 
[INFO] --- maven-jar-plugin:2.5:jar (default-jar) @ hadoop-minikdc ---
[INFO] Building jar: 

[INFO] 
[INFO] --- maven-source-plugin:2.3:jar-no-fork (hadoop-java-sources) @ 
hadoop-minikdc ---
[INFO] Building jar: 

[INFO] 
[INFO] --- maven-source-plugin:2.3:test-jar-no-fork (hadoop-java-sources) @ 
hadoop-minikdc ---
[INFO] Building jar: 

Build failed in Jenkins: Hadoop-common-trunk-Java8 #851

2015-12-23 Thread Apache Jenkins Server
See 

Changes:

[junping_du] YARN-4234. New put APIs in TimelineClient for ats v1.5. 
Contributed by

--
[...truncated 9647 lines...]
[WARNING] 
:54:
 warning: no @return
[WARNING] public static XDR writeMountList(XDR xdr, int xid, List 
mounts) {
[WARNING] ^
[WARNING] 
:66:
 warning: no @param for xdr
[WARNING] public static XDR writeExportList(XDR xdr, int xid, List 
exports,
[WARNING] ^
[WARNING] 
:66:
 warning: no @param for xid
[WARNING] public static XDR writeExportList(XDR xdr, int xid, List 
exports,
[WARNING] ^
[WARNING] 
:66:
 warning: no @param for exports
[WARNING] public static XDR writeExportList(XDR xdr, int xid, List 
exports,
[WARNING] ^
[WARNING] 
:66:
 warning: no @param for hostMatcher
[WARNING] public static XDR writeExportList(XDR xdr, int xid, List 
exports,
[WARNING] ^
[WARNING] 
:66:
 warning: no @return
[WARNING] public static XDR writeExportList(XDR xdr, int xid, List 
exports,
[WARNING] ^
[WARNING] 
:176:
 warning: no @return
[WARNING] public String[] getHostGroupList() {
[WARNING] ^
[WARNING] 
:58:
 warning: no @return
[WARNING] public long getMilliSeconds() {
[WARNING] ^
[WARNING] 
:44:
 warning: no @param for xdr
[WARNING] public abstract void serialize(XDR xdr);
[WARNING] ^
[WARNING] 
:49:
 warning: no @param for v
[WARNING] public FileHandle(long v) {
[WARNING] ^
[WARNING] 
:80:
 warning: no @param for value
[WARNING] public static NFSPROC3 fromValue(int value) {
[WARNING] ^
[WARNING] 
:30:
 warning: no @return
[WARNING] public NFS3Response nullProcedure();
[WARNING] ^
[WARNING] 
:33:
 warning: no @param for xdr
[WARNING] public NFS3Response getattr(XDR xdr, RpcInfo info);
[WARNING] ^
[WARNING] 
:33:
 warning: no @param for info
[WARNING] public NFS3Response getattr(XDR xdr, RpcInfo info);
[WARNING] ^
[WARNING] 
:33:
 warning: no @return
[WARNING] public NFS3Response getattr(XDR xdr, RpcInfo info);
[WARNING] ^
[WARNING] 
:36:
 warning: no @param for xdr
[WARNING] public NFS3Response setattr(XDR xdr, RpcInfo info);
[WARNING] ^
[WARNING] 
:36:
 warning: no @param for info
[WARNING] public NFS3Response setattr(XDR xdr, RpcInfo info);
[WARNING] ^
[WARNING] 
:36:
 warning: no @return
[WARNING] public 

[jira] [Created] (HADOOP-12674) BootstrapStandby - Inconsistent Logging

2015-12-23 Thread BELUGA BEHR (JIRA)
BELUGA BEHR created HADOOP-12674:


 Summary: BootstrapStandby - Inconsistent Logging
 Key: HADOOP-12674
 URL: https://issues.apache.org/jira/browse/HADOOP-12674
 Project: Hadoop Common
  Issue Type: Improvement
  Components: ha
Affects Versions: 2.7.1
Reporter: BELUGA BEHR
Priority: Minor


{code}
/* Line 379 */
  if (LOG.isDebugEnabled()) {
LOG.debug(msg, e);
  } else {
LOG.fatal(msg);
  }
{code}

Why would a message that is considered "fatal" under most operating 
circumstances be logged at "debug" when debugging is enabled? This is 
confusing, to say the least. If there is a problem and the user attempts to 
debug the situation, they may be filtering on "fatal" messages and miss the 
exception entirely.

Please consider always logging at the fatal level, and including the exception.
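
A minimal sketch of the suggested change, reusing the {{msg}} and {{e}} 
variables from the surrounding BootstrapStandby code:

{code}
/* Line 379, suggested: a single level, and the exception is never dropped */
LOG.fatal(msg, e);
{code}

This keeps the stack trace visible at the fatal level regardless of whether 
debug logging is enabled.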



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Build failed in Jenkins: Hadoop-Common-trunk #2148

2015-12-23 Thread Apache Jenkins Server
See 

Changes:

[junping_du] YARN-4400. AsyncDispatcher.waitForDrained should be final. 
Contributed

--
[...truncated 5409 lines...]
Tests run: 22, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.283 sec - in 
org.apache.hadoop.ipc.TestFairCallQueue
Running org.apache.hadoop.ipc.TestRetryCache
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.39 sec - in 
org.apache.hadoop.ipc.TestRetryCache
Running org.apache.hadoop.ipc.TestRPCCallBenchmark
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 11.001 sec - in 
org.apache.hadoop.ipc.TestRPCCallBenchmark
Running org.apache.hadoop.ipc.TestRPCWaitForProxy
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.257 sec - in 
org.apache.hadoop.ipc.TestRPCWaitForProxy
Running org.apache.hadoop.ipc.TestMultipleProtocolServer
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.054 sec - in 
org.apache.hadoop.ipc.TestMultipleProtocolServer
Running org.apache.hadoop.ipc.TestRPCCompatibility
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.116 sec - in 
org.apache.hadoop.ipc.TestRPCCompatibility
Running org.apache.hadoop.ipc.TestCallQueueManager
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.471 sec - in 
org.apache.hadoop.ipc.TestCallQueueManager
Running org.apache.hadoop.ipc.TestIPCServerResponder
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.447 sec - in 
org.apache.hadoop.ipc.TestIPCServerResponder
Running org.apache.hadoop.conf.TestConfServlet
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.657 sec - in 
org.apache.hadoop.conf.TestConfServlet
Running org.apache.hadoop.conf.TestReconfiguration
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.418 sec - in 
org.apache.hadoop.conf.TestReconfiguration
Running org.apache.hadoop.conf.TestDeprecatedKeys
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.593 sec - in 
org.apache.hadoop.conf.TestDeprecatedKeys
Running org.apache.hadoop.conf.TestGetInstances
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.342 sec - in 
org.apache.hadoop.conf.TestGetInstances
Running org.apache.hadoop.conf.TestConfigurationSubclass
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.423 sec - in 
org.apache.hadoop.conf.TestConfigurationSubclass
Running org.apache.hadoop.conf.TestConfiguration
Tests run: 62, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.165 sec - in 
org.apache.hadoop.conf.TestConfiguration
Running org.apache.hadoop.conf.TestConfigurationDeprecation
Tests run: 9, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 9.748 sec - in 
org.apache.hadoop.conf.TestConfigurationDeprecation
Running org.apache.hadoop.jmx.TestJMXJsonServlet
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.997 sec - in 
org.apache.hadoop.jmx.TestJMXJsonServlet
Running org.apache.hadoop.tracing.TestTraceUtils
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.542 sec - in 
org.apache.hadoop.tracing.TestTraceUtils
Running org.apache.hadoop.test.TestGenericTestUtils
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.18 sec - in 
org.apache.hadoop.test.TestGenericTestUtils
Running org.apache.hadoop.test.TestTimedOutTestsListener
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.221 sec - in 
org.apache.hadoop.test.TestTimedOutTestsListener
Running org.apache.hadoop.test.TestJUnitSetup
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.169 sec - in 
org.apache.hadoop.test.TestJUnitSetup
Running org.apache.hadoop.test.TestMultithreadedTestUtil
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.183 sec - in 
org.apache.hadoop.test.TestMultithreadedTestUtil
Running org.apache.hadoop.metrics2.util.TestMetricsCache
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.777 sec - in 
org.apache.hadoop.metrics2.util.TestMetricsCache
Running org.apache.hadoop.metrics2.util.TestSampleStat
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.066 sec - in 
org.apache.hadoop.metrics2.util.TestSampleStat
Running org.apache.hadoop.metrics2.util.TestSampleQuantiles
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.679 sec - in 
org.apache.hadoop.metrics2.util.TestSampleQuantiles
Running org.apache.hadoop.metrics2.filter.TestPatternFilter
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.502 sec - in 
org.apache.hadoop.metrics2.filter.TestPatternFilter
Running org.apache.hadoop.metrics2.lib.TestMetricsRegistry
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.4 sec - in 
org.apache.hadoop.metrics2.lib.TestMetricsRegistry
Running org.apache.hadoop.metrics2.lib.TestUniqNames
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.081 sec - in 
org.apache.hadoop.metrics2.lib.TestUniqNames
Running 

Jenkins build is back to normal : Hadoop-common-trunk-Java8 #852

2015-12-23 Thread Apache Jenkins Server
See 



Re: [VOTE] Release Apache Hadoop 2.7.2 RC1

2015-12-23 Thread Vinod Kumar Vavilapalli
Sigh. Missed this.

To retain causality ("any fix in 2.6.3 will be there in all releases that got 
out after 2.6.3"), I’ll get these patches in.

Reverting my +1, and casting -1 for the RC myself.

Will spin a new RC, this voting thread is marked dead.

Thanks
+Vinod

> On Dec 22, 2015, at 8:24 AM, Junping Du  wrote:
> 
> However, when I look at our commit log and CHANGES.txt, I found something we 
> are missing:
> 1. HDFS-9470 and YARN-4424 are missing from the 2.7.2 branch and RC1 tag.
> 2. HADOOP-5323, HDFS-8767 are missing in CHANGES.txt



[jira] [Created] (HADOOP-12677) DecompressorStream throws IndexOutOfBoundsException when calling skip(long)

2015-12-23 Thread Laurent Goujon (JIRA)
Laurent Goujon created HADOOP-12677:
---

 Summary: DecompressorStream throws IndexOutOfBoundsException when 
calling skip(long)
 Key: HADOOP-12677
 URL: https://issues.apache.org/jira/browse/HADOOP-12677
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Laurent Goujon


DecompressorStream.skip(long) throws an IndexOutOfBoundsException when called 
with a value larger than Integer.MAX_VALUE.

This is caused by this cast from long to int: 
https://github.com/apache/hadoop-common/blob/HADOOP-3628/src/core/org/apache/hadoop/io/compress/DecompressorStream.java#L125

The fix is probably to do the cast after applying Math.min: in that case it 
should not be an issue, since the result cannot be bigger than the buffer 
size (512).
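
A hedged sketch of the suggested fix; the loop shape and variable names are 
approximations of the DecompressorStream code, not the exact source:

{code}
// Apply Math.min on longs first, then cast. The result is bounded by
// buf.length (512), so the narrowing cast can no longer overflow.
byte[] buf = new byte[512];
long skipped = 0;
while (skipped < n) {
  int len = (int) Math.min(n - skipped, buf.length);
  len = read(buf, 0, len);
  if (len == -1) {
    break; // reached EOF before skipping n bytes
  }
  skipped += len;
}
return skipped;
{code}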



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Jenkins build is back to normal : Hadoop-Common-trunk #2149

2015-12-23 Thread Apache Jenkins Server
See 



[jira] [Created] (HADOOP-12676) Consider the default keytab file of Kerberos

2015-12-23 Thread Tianyin Xu (JIRA)
Tianyin Xu created HADOOP-12676:
---

 Summary: Consider the default keytab file of Kerberos
 Key: HADOOP-12676
 URL: https://issues.apache.org/jira/browse/HADOOP-12676
 Project: Hadoop Common
  Issue Type: Improvement
  Components: security
Affects Versions: 2.6.2, 2.7.1
Reporter: Tianyin Xu
Priority: Minor


In the current implementation of {{SecurityUtil}}, we do not consider the 
default keytab file of Kerberos (which is {{/etc/krb5.keytab}} in [MIT Kerberos 
defaults|http://web.mit.edu/kerberos/krb5-1.13/doc/mitK5defaults.html#paths]).

If the user does not set the keytab file, an {{IOException}} will be thrown. 
{code:title=SecurityUtil.java|borderStyle=solid}
  // SecurityUtil.java, around line 230
  public static void login(final Configuration conf,
      final String keytabFileKey, final String userNameKey, String hostname)
      throws IOException {
    ...
    String keytabFilename = conf.get(keytabFileKey);
    if (keytabFilename == null || keytabFilename.length() == 0) {
      throw new IOException("Running in secure mode, but config doesn't have a keytab");
    }
{code} 

However, the default keytab location is assumed by some of the callers. For 
example, in 
[{{yarn-default.xml}}|https://hadoop.apache.org/docs/r2.7.1/hadoop-yarn/hadoop-yarn-common/yarn-default.xml],
 the defaults of {{yarn.resourcemanager.keytab}}, {{yarn.nodemanager.keytab}}, 
and {{yarn.timeline-service.keytab}} all point to {{/etc/krb5.keytab}}. 

On the other hand, these callers call the {{SecurityUtil.login}} method 
directly; therefore the docs are incorrect, and the actual defaults are 
{{null}} (as we do not have a default)...
{code:title=NodeManager.java|borderStyle=solid}
  protected void doSecureLogin() throws IOException {
SecurityUtil.login(getConfig(), YarnConfiguration.NM_KEYTAB,
YarnConfiguration.NM_PRINCIPAL);
  }
{code}

I don't know whether we should make {{/etc/krb5.keytab}} the default in 
{{SecurityUtil}}, or ask the callers to correct their assumptions. I post 
this here as a potential improvement.
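
If we went with the first option, a minimal sketch of the fallback in 
{{SecurityUtil.login}} might look like this (the {{DEFAULT_KEYTAB}} constant 
is hypothetical):

{code:title=SecurityUtil.java (sketch)|borderStyle=solid}
// Hypothetical constant, matching the MIT Kerberos default keytab path.
private static final String DEFAULT_KEYTAB = "/etc/krb5.keytab";

String keytabFilename = conf.get(keytabFileKey);
if (keytabFilename == null || keytabFilename.isEmpty()) {
  // Fall back to the Kerberos default instead of throwing an IOException.
  keytabFilename = DEFAULT_KEYTAB;
}
{code}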

Thanks!



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HADOOP-12678) Handle empty rename pending metadata file during atomic rename in redo path

2015-12-23 Thread madhumita chakraborty (JIRA)
madhumita chakraborty created HADOOP-12678:
--

 Summary: Handle empty rename pending metadata file during atomic 
rename in redo path
 Key: HADOOP-12678
 URL: https://issues.apache.org/jira/browse/HADOOP-12678
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs/azure
Reporter: madhumita chakraborty
Assignee: madhumita chakraborty
Priority: Critical


Handle empty rename pending metadata file during atomic rename in the redo path.
During an atomic rename we create a metadata file for the rename 
(-renamePending.json). We create it in two steps:
1. We create an empty blob corresponding to the .json file in its real location.
2. We create a scratch file, write the contents of the pending rename to it, 
and then copy it over into the blob created in step 1.
If a process crash occurs after step 1 and before step 2 completes, we are 
left with a zero-size blob corresponding to the pending rename metadata file.
This scenario can happen in the /hbase/.tmp folder because it is considered a 
candidate folder for atomic rename. When HMaster starts up, it executes 
listStatus on the .tmp folder to clean up pending data. At this stage, due to 
the lazy pending-rename completion process, we look for these .json files. On 
seeing an empty file, the process simply throws a fatal exception, assuming 
something went wrong. A minimal sketch of a possible guard follows.
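
A sketch of such a guard in the redo path, assuming a FileSystem handle 
{{fs}} and the path of the pending-rename file (all names are illustrative, 
not the actual fs/azure code):

{code}
// Treat a zero-length -renamePending.json as the leftover of a crash
// between step 1 and step 2, and clean it up instead of failing.
FileStatus status = fs.getFileStatus(renamePendingPath);
if (status.getLen() == 0) {
  fs.delete(renamePendingPath, false); // discard the incomplete metadata file
  return;                              // nothing to redo for this rename
}
// ...otherwise parse the JSON contents and redo the pending rename...
{code}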




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HADOOP-12675) Fix description about retention period in usage of expunge command

2015-12-23 Thread Masatake Iwasaki (JIRA)
Masatake Iwasaki created HADOOP-12675:
-

 Summary: Fix description about retention period in usage of 
expunge command
 Key: HADOOP-12675
 URL: https://issues.apache.org/jira/browse/HADOOP-12675
 Project: Hadoop Common
  Issue Type: Bug
  Components: documentation
Affects Versions: 2.8.0
Reporter: Masatake Iwasaki
Assignee: Masatake Iwasaki
Priority: Minor






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HADOOP-12679) While performing simple queries on an S3 table through Hive, got exception: Error: java.io.IOException: java.lang.reflect.InvocationTargetException

2015-12-23 Thread Rajanmbx (JIRA)
Rajanmbx created HADOOP-12679:
-

 Summary: While performing simple queries on an S3 table through 
Hive, got exception: Error: java.io.IOException: 
java.lang.reflect.InvocationTargetException
 Key: HADOOP-12679
 URL: https://issues.apache.org/jira/browse/HADOOP-12679
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs/s3
Affects Versions: 2.7.1
 Environment: Hadoop 2.7.1
Hive 1.2.1
Reporter: Rajanmbx
 Fix For: 2.7.1



Importing S3 tables through Hive, i.e. creating an external table that points 
at an S3 server location. The table is created and the data shows up, but 
when I query that table (e.g. a count, or where id=>'10'), it fails with the 
following exception:

Diagnostic Messages for this Task:
Error: java.io.IOException: java.lang.reflect.InvocationTargetException
at 
org.apache.hadoop.hive.io.HiveIOExceptionHandlerChain.handleRecordReaderCreationException(HiveIOExceptionHandlerChain.java:97)
at 
org.apache.hadoop.hive.io.HiveIOExceptionHandlerUtil.handleRecordReaderCreationException(HiveIOExceptionHandlerUtil.java:57)
at 
org.apache.hadoop.hive.shims.HadoopShimsSecure$CombineFileRecordReader.initNextRecordReader(HadoopShimsSecure.java:266)
at 
org.apache.hadoop.hive.shims.HadoopShimsSecure$CombineFileRecordReader.(HadoopShimsSecure.java:213)
at 
org.apache.hadoop.hive.shims.HadoopShimsSecure$CombineFileInputFormatShim.getRecordReader(HadoopShimsSecure.java:333)
at 
org.apache.hadoop.hive.ql.io.CombineHiveInputFormat.getRecordReader(CombineHiveInputFormat.java:719)
at 
org.apache.hadoop.mapred.MapTask$TrackedRecordReader.(MapTask.java:169)
at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:432)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:343)
at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:164)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:158)
Caused by: java.lang.reflect.InvocationTargetException
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at 
sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
at 
sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
at 
org.apache.hadoop.hive.shims.HadoopShimsSecure$CombineFileRecordReader.initNextRecordReader(HadoopShimsSecure.java:252)
... 11 more
Caused by: java.lang.RuntimeException: java.lang.ClassNotFoundException: Class 
org.apache.hadoop.fs.s3native.NativeS3FileSystem not found
at 
org.apache.hadoop.conf.Configuration.getClass(Configuration.java:2195)
at 
org.apache.hadoop.fs.FileSystem.getFileSystemClass(FileSystem.java:2638)
at 
org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2651)
at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:92)
at 
org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2687)
at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2669)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:371)
at org.apache.hadoop.fs.Path.getFileSystem(Path.java:295)
at 
org.apache.hadoop.mapred.LineRecordReader.(LineRecordReader.java:107)
at 
org.apache.hadoop.mapred.TextInputFormat.getRecordReader(TextInputFormat.java:67)
at 
org.apache.hadoop.hive.ql.io.CombineHiveRecordReader.(CombineHiveRecordReader.java:67)
... 16 more
Caused by: java.lang.ClassNotFoundException: Class 
org.apache.hadoop.fs.s3native.NativeS3FileSystem not found
at 
org.apache.hadoop.conf.Configuration.getClassByName(Configuration.java:2101)
at 
org.apache.hadoop.conf.Configuration.getClass(Configuration.java:2193)
... 26 more


FAILED: Execution Error, return code 2 from 
org.apache.hadoop.hive.ql.exec.mr.MapRedTask
MapReduce Jobs Launched: 
Stage-Stage-1: Map: 1  Reduce: 1   HDFS Read: 0 HDFS Write: 0 FAIL
Total MapReduce CPU Time Spent: 0 msec






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: [VOTE] Release Apache Hadoop 2.7.2 RC1

2015-12-23 Thread Tsuyoshi Ozawa
Hi Vinod,

thank you for the clarification.

>  - Pull these 16 tickets into 2.7.2 and roll a new RC
> > What do people think? Do folks expect “any fix in 2.6.3 to be there in all 
> > releases that get out after 2.6.3 release date (December 16th)”?

I personally prefer pulling these tickets into 2.7.2, since that is more
intuitive to me. I can help cherry-pick these tickets into 2.7.2 once we
decide to do so.

This conflict happened because the timing of cutting the branches and of the
actual releases crossed. We will likely face such situations again in the
future, since we have two or more active branches for stable releases. Hence,
it's a good time to decide on a basic policy now.

BTW, should we start a new thread for this discussion, or continue here?

Thanks,
- Tsuyoshi

On Thu, Dec 24, 2015 at 9:47 AM, Vinod Kumar Vavilapalli
 wrote:
> I retract my -1. I think we will need to discuss this a bit more.
>
> Beyond those two tickets, there are a bunch more (totaling to 16) that are in 
> 2.6.3 but *not* in 2.7.2. See this: 
> https://issues.apache.org/jira/issues/?jql=key%20in%20%28HADOOP-12526%2CHADOOP-12413%2CHADOOP-11267%2CHADOOP-10668%2CHADOOP-10134%2CYARN-4434%2CYARN-4365%2CYARN-4348%2CYARN-4344%2CYARN-4326%2CYARN-4241%2CYARN-2859%2CMAPREDUCE-6549%2CMAPREDUCE-6540%2CMAPREDUCE-6377%2CMAPREDUCE-5883%2CHDFS-9431%2CHDFS-9289%2CHDFS-8615%29%20and%20fixVersion%20!%3D%202.7.0
>  
> 
>
> Two options here, depending on the importance of 'causality' between 2.6.x 
> and 2.7.x lines.
>  - Ship 2.7.2 as we voted on here
>  - Pull these 16 tickets into 2.7.2 and roll a new RC
>
> What do people think? Do folks expect “any fix in 2.6.3 to be there in all 
> releases that get out after 2.6.3 release date (December 16th)”?
>
> Thanks
> +Vinod
>
>> On Dec 23, 2015, at 12:37 PM, Vinod Kumar Vavilapalli  
>> wrote:
>>
>> Sigh. Missed this.
>>
>> To retain causality ("any fix in 2.6.3 will be there in all releases that 
>> got out after 2.6.3"), I’ll get these patches in.
>>
>> Reverting my +1, and casting -1 for the RC myself.
>>
>> Will spin a new RC, this voting thread is marked dead.
>>
>> Thanks
>> +Vinod
>>
>>> On Dec 22, 2015, at 8:24 AM, Junping Du wrote:
>>>
>>> However, when I look at our commit log and CHANGES.txt, I found something 
>>> we are missing:
>>> 1. HDFS-9470 and YARN-4424 are missing from the 2.7.2 branch and RC1 tag.
>>> 2. HADOOP-5323, HDFS-8767 are missing in CHANGES.txt
>>
>


Build failed in Jenkins: Hadoop-Common-trunk #2151

2015-12-23 Thread Apache Jenkins Server
See 

Changes:

[rohithsharmaks] MAPREDUCE-6419. JobHistoryServer doesn't sort properly based 
on Job ID

--
[...truncated 3875 lines...]
Generating 

Building index for all classes...
Generating 

Generating 

Generating 

Generating 

Generating 

[INFO] Building jar: 

[INFO] 
[INFO] 
[INFO] Building Apache Hadoop MiniKDC 3.0.0-SNAPSHOT
[INFO] 
[INFO] 
[INFO] --- maven-clean-plugin:2.5:clean (default-clean) @ hadoop-minikdc ---
[INFO] Deleting 

[INFO] 
[INFO] --- findbugs-maven-plugin:3.0.0:findbugs (default-cli) @ hadoop-minikdc 
---
[INFO] 
[INFO] --- maven-checkstyle-plugin:2.15:checkstyle (default-cli) @ 
hadoop-minikdc ---
[INFO] There are 9 errors reported by Checkstyle 6.6 with 
checkstyle/checkstyle.xml ruleset.
[WARNING] Unable to locate Source XRef to link to - DISABLED
[INFO] 
[INFO] --- maven-antrun-plugin:1.7:run (create-testdirs) @ hadoop-minikdc ---
[INFO] Executing tasks

main:
[mkdir] Created dir: 

[INFO] Executed tasks
[INFO] 
[INFO] --- maven-resources-plugin:2.6:resources (default-resources) @ 
hadoop-minikdc ---
[INFO] Using 'UTF-8' encoding to copy filtered resources.
[INFO] Copying 3 resources
[INFO] 
[INFO] --- maven-compiler-plugin:3.1:compile (default-compile) @ hadoop-minikdc 
---
[INFO] Changes detected - recompiling the module!
[INFO] Compiling 2 source files to 

[INFO] 
[INFO] --- maven-resources-plugin:2.6:testResources (default-testResources) @ 
hadoop-minikdc ---
[INFO] Using 'UTF-8' encoding to copy filtered resources.
[INFO] skip non existing resourceDirectory 

[INFO] 
[INFO] --- maven-compiler-plugin:3.1:testCompile (default-testCompile) @ 
hadoop-minikdc ---
[INFO] Changes detected - recompiling the module!
[INFO] Compiling 2 source files to 

[INFO] 
[INFO] --- maven-surefire-plugin:2.17:test (default-test) @ hadoop-minikdc ---
[INFO] Surefire report directory: 


---
 T E S T S
---

Running org.apache.hadoop.minikdc.TestMiniKdc
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 15.716 sec - in 
org.apache.hadoop.minikdc.TestMiniKdc
Running org.apache.hadoop.minikdc.TestChangeOrgNameAndDomain
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 15.74 sec - in 
org.apache.hadoop.minikdc.TestChangeOrgNameAndDomain

Results :

Tests run: 6, Failures: 0, Errors: 0, Skipped: 0

[INFO] 
[INFO] --- maven-jar-plugin:2.5:jar (default-jar) @ hadoop-minikdc ---
[INFO] Building jar: 

[INFO] 
[INFO] --- maven-source-plugin:2.3:jar-no-fork (hadoop-java-sources) @ 
hadoop-minikdc ---
[INFO] Building jar: 

[INFO] 
[INFO] --- maven-source-plugin:2.3:test-jar-no-fork (hadoop-java-sources) @ 
hadoop-minikdc ---
[INFO] Building jar: