Build failed in Jenkins: Hadoop-Common-0.23-Build #1108
See https://builds.apache.org/job/Hadoop-Common-0.23-Build/1108/

[...truncated 8263 lines...]
Running org.apache.hadoop.net.TestDNS
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.174 sec
Running org.apache.hadoop.net.TestNetUtils
Tests run: 36, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.487 sec
Running org.apache.hadoop.net.TestStaticMapping
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.928 sec
Running org.apache.hadoop.net.TestSwitchMapping
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.317 sec
Running org.apache.hadoop.net.TestSocketIOWithTimeout
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.189 sec
Running org.apache.hadoop.ipc.TestRPCCompatibility
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.845 sec
Running org.apache.hadoop.ipc.TestAvroRpc
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.342 sec
Running org.apache.hadoop.ipc.TestServer
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.371 sec
Running org.apache.hadoop.ipc.TestIPC
Tests run: 20, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 70.349 sec
Running org.apache.hadoop.ipc.TestMiniRPCBenchmark
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.818 sec
Running org.apache.hadoop.ipc.TestSaslRPC
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.724 sec
Running org.apache.hadoop.ipc.TestRPC
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 30.991 sec
Running org.apache.hadoop.ipc.TestSocketFactory
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.27 sec
Running org.apache.hadoop.ipc.TestIPCServerResponder
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.505 sec
Running org.apache.hadoop.jmx.TestJMXJsonServlet
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.836 sec
Running org.apache.hadoop.record.TestRecordVersioning
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.163 sec
Running org.apache.hadoop.record.TestBuffer
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.055 sec
Running org.apache.hadoop.record.TestRecordIO
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.185 sec
Running org.apache.hadoop.cli.TestCLI
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.94 sec
Running org.apache.hadoop.log.TestLog4Json
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.36 sec
Running org.apache.hadoop.log.TestLogLevel
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.584 sec
Running org.apache.hadoop.http.TestHttpServer
Tests run: 11, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.445 sec
Running org.apache.hadoop.http.TestHtmlQuoting
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.27 sec
Running org.apache.hadoop.http.TestHttpRequestLogAppender
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.078 sec
Running org.apache.hadoop.http.TestGlobalFilter
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.767 sec
Running org.apache.hadoop.http.TestHttpServerWebapps
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.425 sec
Running org.apache.hadoop.http.lib.TestStaticUserWebFilter
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.511 sec
Running org.apache.hadoop.http.TestServletFilter
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.024 sec
Running org.apache.hadoop.http.TestPathFilter
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.752 sec
Running org.apache.hadoop.http.TestHttpRequestLog
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.154 sec
Running org.apache.hadoop.http.TestHttpServerLifecycle
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.189 sec
Running org.apache.hadoop.conf.TestReconfiguration
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.977 sec
Running org.apache.hadoop.conf.TestConfServlet
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.404 sec
Running org.apache.hadoop.conf.TestDeprecatedKeys
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.35 sec
Running org.apache.hadoop.conf.TestConfiguration
Tests run: 37, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.079 sec
Running org.apache.hadoop.conf.TestConfigurationDeprecation
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.437 sec
Running org.apache.hadoop.conf.TestGetInstances
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.247 sec
Running org.apache.hadoop.conf.TestConfigurationSubclass
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.301 sec
Running org.apache.hadoop.fs.kfs.TestKosmosFileSystem
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.575 sec
Running org.apache.hadoop.fs.s3.TestInMemoryS3FileSystemContract
Tests run: 29, Failures: 0,
Hadoop Distributed Caching Technology
Hi,

Is there a way to use some other cache technology instead of memcache, which is the out-of-the-box option in Hadoop MapReduce jobs?

Cheers,
Deb
Re: Hadoop Distributed Caching Technology
Hi Deb,

We added support for centralized cache management to HDFS, which is out of the box and works with MR jobs. See the docs here:
http://hadoop.apache.org/docs/r2.5.0/hadoop-project-dist/hadoop-hdfs/CentralizedCacheManagement.html

Best,
Andrew

On Mon, Oct 20, 2014 at 8:07 AM, Maity, Debashish <debashish.ma...@softwareag.com> wrote:
> Hi, Is there a way to use some other cache technology instead of memcache which is out of box in Hadoop Map-Reduce jobs. Cheers, Deb
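For anyone following along, HDFS centralized caching (per the docs linked above) is driven by cache pools and cache directives managed through the `hdfs cacheadmin` command. A minimal sketch of wiring it up from the command line, where the pool name and path are made up for illustration and assume a running 2.5.0+ cluster:

```shell
# Create a cache pool: an administrative grouping for cache directives.
hdfs cacheadmin -addPool hotdata

# Ask the NameNode to cache the replicas under this path in off-heap
# memory on the DataNodes; the -pool flag ties it to the pool above.
hdfs cacheadmin -addDirective -path /user/deb/lookup-tables -pool hotdata

# Inspect the directives that are currently in effect.
hdfs cacheadmin -listDirectives
```

Unlike memcached or Hazelcast, this caches HDFS block data on the DataNodes themselves, so MR jobs reading those paths benefit without any code change.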
RE: Hadoop Distributed Caching Technology
So how am I going to change to another cache technology like Hazelcast or Ehcache, etc.?

-----Original Message-----
From: Andrew Wang [mailto:andrew.w...@cloudera.com]
Sent: Monday, October 20, 2014 8:52 PM
To: common-dev@hadoop.apache.org
Subject: Re: Hadoop Distributed Caching Technology

Hi Deb,

We added support for centralized cache management to HDFS, which is out of the box and works with MR jobs. See the docs here:
http://hadoop.apache.org/docs/r2.5.0/hadoop-project-dist/hadoop-hdfs/CentralizedCacheManagement.html

Best,
Andrew

On Mon, Oct 20, 2014 at 8:07 AM, Maity, Debashish <debashish.ma...@softwareag.com> wrote:
> Hi, Is there a way to use some other cache technology instead of memcache which is out of box in Hadoop Map-Reduce jobs. Cheers, Deb
Re: Hadoop Distributed Caching Technology
Hi Debashish,

Do you mind describing in a little more detail the benefits you're interested in from a different caching technology? The Hadoop distributed cache, HDFS caching, memcached, and Hazelcast are all related to caching, but they aim to solve very different problems.

-Sandy

On Mon, Oct 20, 2014 at 12:01 PM, Maity, Debashish <debashish.ma...@softwareag.com> wrote:
> So how I am going to change to other cache technology like Hazelcast or ehcache etc.
>
> -----Original Message-----
> From: Andrew Wang [mailto:andrew.w...@cloudera.com]
> Sent: Monday, October 20, 2014 8:52 PM
> To: common-dev@hadoop.apache.org
> Subject: Re: Hadoop Distributed Caching Technology
>
> Hi Deb,
>
> We added support for centralized cache management to HDFS, which is out of the box and works with MR jobs. See the docs here:
> http://hadoop.apache.org/docs/r2.5.0/hadoop-project-dist/hadoop-hdfs/CentralizedCacheManagement.html
>
> Best,
> Andrew
>
> On Mon, Oct 20, 2014 at 8:07 AM, Maity, Debashish <debashish.ma...@softwareag.com> wrote:
>> Hi, Is there a way to use some other cache technology instead of memcache which is out of box in Hadoop Map-Reduce jobs. Cheers, Deb
[jira] [Created] (HADOOP-11212) NetUtils.wrapException to handle SocketException explicitly
Steve Loughran created HADOOP-11212:
---------------------------------------

Summary: NetUtils.wrapException to handle SocketException explicitly
Key: HADOOP-11212
URL: https://issues.apache.org/jira/browse/HADOOP-11212
Project: Hadoop Common
Issue Type: Improvement
Components: util
Affects Versions: 3.0.0
Reporter: Steve Loughran

The {{NetUtils.wrapException()}} method doesn't catch {{SocketException}}, so it is wrapped in a plain IOE. This loses information and prevents any extra diagnostics/wiki links from being added.

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
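To illustrate the improvement being proposed, here is a self-contained sketch of the idea (not the actual Hadoop patch, and the class/method names are hypothetical): when enriching a connection error with host/port diagnostics, return a new {{SocketException}} rather than a plain {{IOException}}, so the more specific type survives the wrapping.

```java
import java.io.IOException;
import java.net.SocketException;

/**
 * Hypothetical sketch of HADOOP-11212: add diagnostics to an IOException
 * without flattening a SocketException into a generic IOE.
 */
public class WrapExceptionSketch {

    /** Wrap e with host:port diagnostics, preserving SocketException. */
    public static IOException wrap(String host, int port, IOException e) {
        String diag = "Failed on connection to " + host + ":" + port
                + ": " + e.getMessage();
        if (e instanceof SocketException) {
            // Keep the specific type so callers that catch
            // SocketException still see one after wrapping.
            SocketException wrapped = new SocketException(diag);
            wrapped.initCause(e);
            return wrapped;
        }
        return new IOException(diag, e);
    }

    public static void main(String[] args) {
        IOException w = wrap("example.com", 8020,
                new SocketException("Connection reset"));
        // Prints the preserved type and the enriched message.
        System.out.println(w.getClass().getSimpleName() + ": " + w.getMessage());
    }
}
```

The real {{NetUtils.wrapException()}} also appends wiki links for common failure modes; the sketch above only shows the type-preservation aspect the JIRA asks for.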
[jira] [Created] (HADOOP-11213) Fix typos in html pages: SecureMode and EncryptedShuffle
Wei Yan created HADOOP-11213:
--------------------------------

Summary: Fix typos in html pages: SecureMode and EncryptedShuffle
Key: HADOOP-11213
URL: https://issues.apache.org/jira/browse/HADOOP-11213
Project: Hadoop Common
Issue Type: Bug
Reporter: Wei Yan
Assignee: Wei Yan
Priority: Minor

In SecureMode.html:
{noformat}
banned.users | hfds,yarn,mapred,bin
{noformat}
Here hfds should be hdfs.

In EncryptedShuffle.html:
{noformat}
hadoop.ssl.server.conf | ss-server.xml
hadoop.ssl.client.conf | ss-client.xml
{noformat}
Here the two xml files should be ssl-*.

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
[jira] [Created] (HADOOP-11214) Add web UI for NFS gateway
Brandon Li created HADOOP-11214:
-----------------------------------

Summary: Add web UI for NFS gateway
Key: HADOOP-11214
URL: https://issues.apache.org/jira/browse/HADOOP-11214
Project: Hadoop Common
Issue Type: Bug
Components: nfs
Reporter: Brandon Li

This JIRA is to track the effort to add a web UI for the NFS gateway to show some metrics and configuration-related information.

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
[jira] [Resolved] (HADOOP-10904) Provide Alt to Clear Text Passwords through Cred Provider API
[ https://issues.apache.org/jira/browse/HADOOP-10904?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Larry McCay resolved HADOOP-10904.
----------------------------------
Resolution: Fixed

> Provide Alt to Clear Text Passwords through Cred Provider API
> -------------------------------------------------------------
>
> Key: HADOOP-10904
> URL: https://issues.apache.org/jira/browse/HADOOP-10904
> Project: Hadoop Common
> Issue Type: Bug
> Components: security
> Reporter: Larry McCay
> Assignee: Larry McCay
>
> This is an umbrella jira to track various child tasks to uptake the credential provider API to enable deployments without storing passwords/credentials in clear text.

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
[jira] [Created] (HADOOP-11215) DT management ops in DelegationTokenAuthenticatedURL assume the authenticator is KerberosDelegationTokenAuthenticator
Zhijie Shen created HADOOP-11215:
------------------------------------

Summary: DT management ops in DelegationTokenAuthenticatedURL assume the authenticator is KerberosDelegationTokenAuthenticator
Key: HADOOP-11215
URL: https://issues.apache.org/jira/browse/HADOOP-11215
Project: Hadoop Common
Issue Type: Bug
Reporter: Zhijie Shen

Here's the code in get/renew/cancel DT:
{code}
return ((KerberosDelegationTokenAuthenticator) getAuthenticator()).
    renewDelegationToken(url, token, token.delegationToken, doAsUser);
{code}
This doesn't seem right, because PseudoDelegationTokenAuthenticator should work here as well. At the very least it is inconsistent in the context of delegation token authentication, as DelegationTokenAuthenticationHandler doesn't require that the authentication be Kerberos.

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)