Build failed in Jenkins: Hadoop-Common-trunk #1025

2014-01-29 Thread Apache Jenkins Server
See https://builds.apache.org/job/Hadoop-Common-trunk/1025/changes

Changes:

[arp] HDFS-5844. Fix broken link in WebHDFS.apt.vm (Contributed by Akira 
Ajisaka)

[arp] HADOOP-10291. TestSecurityUtil#testSocketAddrWithIP fails due to test 
order dependency. (Contributed by Mit Desai)

[sandy] YARN-1630. Introduce timeout for async polling operations in 
YarnClientImpl (Aditya Acharya via Sandy Ryza)

[sandy] MAPREDUCE-5464. Add analogs of the SLOTS_MILLIS counters that jive with 
the YARN resource model (Sandy Ryza)

--
[...truncated 60674 lines...]
Adding reference: maven.local.repository
[DEBUG] Initialize Maven Ant Tasks
parsing buildfile 
jar:file:/home/jenkins/.m2/repository/org/apache/maven/plugins/maven-antrun-plugin/1.7/maven-antrun-plugin-1.7.jar!/org/apache/maven/ant/tasks/antlib.xml
 with URI = 
jar:file:/home/jenkins/.m2/repository/org/apache/maven/plugins/maven-antrun-plugin/1.7/maven-antrun-plugin-1.7.jar!/org/apache/maven/ant/tasks/antlib.xml
 from a zip file
parsing buildfile 
jar:file:/home/jenkins/.m2/repository/org/apache/ant/ant/1.8.2/ant-1.8.2.jar!/org/apache/tools/ant/antlib.xml
 with URI = 
jar:file:/home/jenkins/.m2/repository/org/apache/ant/ant/1.8.2/ant-1.8.2.jar!/org/apache/tools/ant/antlib.xml
 from a zip file
Class org.apache.maven.ant.tasks.AttachArtifactTask loaded from parent loader 
(parentFirst)
 +Datatype attachartifact org.apache.maven.ant.tasks.AttachArtifactTask
Class org.apache.maven.ant.tasks.DependencyFilesetsTask loaded from parent 
loader (parentFirst)
 +Datatype dependencyfilesets org.apache.maven.ant.tasks.DependencyFilesetsTask
Setting project property: test.build.dir - 
https://builds.apache.org/job/Hadoop-Common-trunk/ws/trunk/hadoop-common-project/target/test-dir
Setting project property: test.exclude.pattern - _
Setting project property: hadoop.assemblies.version - 3.0.0-SNAPSHOT
Setting project property: test.exclude - _
Setting project property: distMgmtSnapshotsId - apache.snapshots.https
Setting project property: project.build.sourceEncoding - UTF-8
Setting project property: java.security.egd - file:///dev/urandom
Setting project property: distMgmtSnapshotsUrl - 
https://repository.apache.org/content/repositories/snapshots
Setting project property: distMgmtStagingUrl - 
https://repository.apache.org/service/local/staging/deploy/maven2
Setting project property: avro.version - 1.7.4
Setting project property: test.build.data - 
https://builds.apache.org/job/Hadoop-Common-trunk/ws/trunk/hadoop-common-project/target/test-dir
Setting project property: commons-daemon.version - 1.0.13
Setting project property: hadoop.common.build.dir - 
https://builds.apache.org/job/Hadoop-Common-trunk/ws/trunk/hadoop-common-project/../../hadoop-common-project/hadoop-common/target
Setting project property: testsThreadCount - 4
Setting project property: maven.test.redirectTestOutputToFile - true
Setting project property: jdiff.version - 1.0.9
Setting project property: build.platform - Linux-i386-32
Setting project property: project.reporting.outputEncoding - UTF-8
Setting project property: distMgmtStagingName - Apache Release Distribution 
Repository
Setting project property: protobuf.version - 2.5.0
Setting project property: failIfNoTests - false
Setting project property: protoc.path - ${env.HADOOP_PROTOC_PATH}
Setting project property: jersey.version - 1.9
Setting project property: distMgmtStagingId - apache.staging.https
Setting project property: distMgmtSnapshotsName - Apache Development Snapshot 
Repository
Setting project property: ant.file - 
https://builds.apache.org/job/Hadoop-Common-trunk/ws/trunk/hadoop-common-project/pom.xml
[DEBUG] Setting properties with prefix: 
Setting project property: project.groupId - org.apache.hadoop
Setting project property: project.artifactId - hadoop-common-project
Setting project property: project.name - Apache Hadoop Common Project
Setting project property: project.description - Apache Hadoop Common Project
Setting project property: project.version - 3.0.0-SNAPSHOT
Setting project property: project.packaging - pom
Setting project property: project.build.directory - 
https://builds.apache.org/job/Hadoop-Common-trunk/ws/trunk/hadoop-common-project/target
Setting project property: project.build.outputDirectory - 
https://builds.apache.org/job/Hadoop-Common-trunk/ws/trunk/hadoop-common-project/target/classes
Setting project property: project.build.testOutputDirectory - 
https://builds.apache.org/job/Hadoop-Common-trunk/ws/trunk/hadoop-common-project/target/test-classes
Setting project property: project.build.sourceDirectory - 
https://builds.apache.org/job/Hadoop-Common-trunk/ws/trunk/hadoop-common-project/src/main/java
Setting project property: project.build.testSourceDirectory - 
https://builds.apache.org/job/Hadoop-Common-trunk/ws/trunk/hadoop-common-project/src/test/java
Setting project property: localRepository -id: local
  url: file:///home/jenkins/.m2/repository/
   layout: none

Re: Re-swizzle 2.3

2014-01-29 Thread Arun C Murthy
Mostly ready from a JIRA perspective.

Committers - Henceforth, please use extreme caution while committing to 
branch-2.3. Please commit *only* blockers to 2.3.

thanks,
Arun

On Jan 28, 2014, at 3:30 PM, Arun C Murthy a...@hortonworks.com wrote:

 Fixing up stuff now, thanks to Andrew for volunteering to help with 
 Common/HDFS.
 
 Arun
 
 On Jan 28, 2014, at 10:54 AM, Arun C Murthy a...@hortonworks.com wrote:
 
 Sorry, missed this. Go ahead, I'll fix things up at the back end. Thanks.
 
 On Jan 28, 2014, at 12:11 AM, Sandy Ryza sandy.r...@cloudera.com wrote:
 
 Going forward with commits because it seems like others have been doing so
 
 
 On Mon, Jan 27, 2014 at 1:31 PM, Sandy Ryza sandy.r...@cloudera.com wrote:
 
 We should hold off commits until that's done, right?
 
 
 On Mon, Jan 27, 2014 at 1:07 PM, Arun C Murthy a...@hortonworks.com wrote:
 
 Yep, on it as we speak. :)
 
 
 Arun
 
 On Jan 27, 2014, at 12:36 PM, Jason Lowe jl...@yahoo-inc.com wrote:
 
 Thanks, Arun.  Are there plans to update the Fix Versions and
 CHANGES.txt accordingly?  There are a lot of JIRAs that are now going to
 ship in 2.3.0 but the JIRA and CHANGES.txt says they're not fixed until
 2.4.0.
 
 Jason
 
 On 01/27/2014 08:47 AM, Arun C Murthy wrote:
 Done. I've re-created branch-2.3 from branch-2.
 
 thanks,
 Arun
 
 On Jan 23, 2014, at 2:40 AM, Arun Murthy a...@hortonworks.com wrote:
 
 Based on the discussion at common-dev@, we've decided to target 2.3
 off the tip of branch-2 based on the 2 major HDFS features which are
 Heterogeneous Storage (HDFS-2832) and HDFS Cache (HDFS-4949).
 
 I'll create a new branch-2.3 on (1/24) at 6pm PST.
 
 thanks,
 Arun
 --
 Arun C. Murthy
 Hortonworks Inc.
 http://hortonworks.com/
 
 
 
 
 
 --
 Arun C. Murthy
 Hortonworks Inc.
 http://hortonworks.com/
 
 
 
 
 
 
 
 --
 Arun C. Murthy
 Hortonworks Inc.
 http://hortonworks.com/
 
 
 
 --
 Arun C. Murthy
 Hortonworks Inc.
 http://hortonworks.com/
 
 

--
Arun C. Murthy
Hortonworks Inc.
http://hortonworks.com/





regression in 2.4? YARN servers on secure cluster startup

2014-01-29 Thread Steve Loughran
I'm just switching over to use the 2.4-SNAPSHOT in a secured pseudo-dist
cluster, and now the services are failing to come up because the web
principals haven't been defined. Example


2014-01-29 15:42:58,558 INFO org.apache.hadoop.http.HttpServer2: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context hdfs
2014-01-29 15:42:58,559 INFO org.apache.hadoop.http.HttpServer2: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context static
2014-01-29 15:42:58,559 INFO org.apache.hadoop.http.HttpServer2: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs
2014-01-29 15:42:58,630 ERROR org.apache.hadoop.http.HttpServer2: *WebHDFS and security are enabled, but configuration property 'dfs.web.authentication.kerberos.principal' is not set.*
2014-01-29 15:42:58,630 INFO org.apache.hadoop.http.HttpServer2: Added filter 'SPNEGO' (class=org.apache.hadoop.hdfs.web.AuthFilter)
2014-01-29 15:42:58,631 INFO org.apache.hadoop.http.HttpServer2: addJerseyResourcePackage: packageName=org.apache.hadoop.hdfs.server.namenode.web.resources;org.apache.hadoop.hdfs.web.resources, pathSpec=/webhdfs/v1/*
2014-01-29 15:42:58,658 INFO org.apache.hadoop.http.HttpServer2: Adding Kerberos (SPNEGO) filter to getDelegationToken
2014-01-29 15:42:58,662 INFO org.apache.hadoop.http.HttpServer2: Adding Kerberos (SPNEGO) filter to renewDelegationToken
2014-01-29 15:42:58,663 INFO org.apache.hadoop.http.HttpServer2: Adding Kerberos (SPNEGO) filter to cancelDelegationToken
2014-01-29 15:42:58,663 INFO org.apache.hadoop.http.HttpServer2: Adding Kerberos (SPNEGO) filter to fsck
2014-01-29 15:42:58,671 INFO org.apache.hadoop.http.HttpServer2: Adding Kerberos (SPNEGO) filter to getimage
2014-01-29 15:42:58,748 INFO org.apache.hadoop.http.HttpServer2: Jetty bound to port 50070
2014-01-29 15:42:58,748 INFO org.mortbay.log: jetty-6.1.26
2014-01-29 15:42:58,941 INFO org.apache.hadoop.security.authentication.server.KerberosAuthenticationHandler: Login using keytab /home/stevel/conf/hdfs.keytab, for principal HTTP/ubuntu@COTHAM
2014-01-29 15:42:58,981 INFO org.apache.hadoop.security.authentication.server.KerberosAuthenticationHandler: Initialized, principal [HTTP/ubuntu@COTHAM] from keytab [/home/stevel/conf/hdfs.keytab]
2014-01-29 15:42:58,981 WARN org.apache.hadoop.security.authentication.server.AuthenticationFilter: 'signature.secret' configuration not set, using a random value as secret
2014-01-29 15:42:58,982 WARN org.mortbay.log: failed SPNEGO: javax.servlet.ServletException: javax.servlet.ServletException: Principal not defined in configuration
2014-01-29 15:42:58,982 WARN org.mortbay.log: Failed startup of context org.mortbay.jetty.webapp.WebAppContext@167a465{/,file:/home/stevel/hadoop/share/hadoop/hdfs/webapps/hdfs}
javax.servlet.ServletException: javax.servlet.ServletException: Principal not defined in configuration
at org.apache.hadoop.security.authentication.server.KerberosAuthenticationHandler.init(KerberosAuthenticationHandler.java:203)
:

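For reference, here is a minimal sketch of the kind of check behind the ERROR
above (an illustration only, not the actual HttpServer2 code); it simply reads
the dfs.web.authentication.kerberos.principal key that the log says is missing:

import org.apache.hadoop.conf.Configuration;

public class SpnegoConfigCheck {
  public static void main(String[] args) {
    // Load the cluster configuration; hdfs-site.xml must be on the classpath.
    Configuration conf = new Configuration();
    conf.addResource("hdfs-site.xml");
    String principal = conf.get("dfs.web.authentication.kerberos.principal");
    if (principal == null || principal.isEmpty()) {
      System.err.println("WebHDFS and security are enabled, but "
          + "'dfs.web.authentication.kerberos.principal' is not set.");
    } else {
      System.out.println("SPNEGO principal: " + principal);
    }
  }
}

The value is normally an HTTP/_HOST@REALM principal matching the keytab
entries shown in the log above.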
YARN is the same but without the text telling me which config option I have
to set (i.e. no equivalent of https://issues.apache.org/jira/browse/HDFS-3813).

-29 16:04:33,908 INFO
org.apache.hadoop.yarn.util.AbstractLivelinessMonitor: AMLivelinessMonitor
thread interrupted
2014-01-29 16:04:33,908 INFO
org.apache.hadoop.yarn.util.AbstractLivelinessMonitor:
org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.ContainerAllocationExpirer
thread interrupted
2014-01-29 16:04:33,908 ERROR
org.apache.hadoop.security.token.delegation.AbstractDelegationTokenSecretManager:
InterruptedExcpetion recieved for ExpiredTokenRemover thread
java.lang.InterruptedException: sleep interrupted
2014-01-29 16:04:33,909 INFO
org.apache.hadoop.yarn.server.resourcemanager.ResourceManager: Transitioned
to standby state
2014-01-29 16:04:33,909 FATAL
org.apache.hadoop.yarn.server.resourcemanager.ResourceManager: Error
starting ResourceManager
org.apache.hadoop.yarn.webapp.WebAppException: Error starting http server
at org.apache.hadoop.yarn.webapp.WebApps$Builder.start(WebApps.java:250)
at
org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.startWepApp(ResourceManager.java:775)
at
org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.serviceStart(ResourceManager.java:866)
at org.apache.hadoop.service.AbstractService.start(AbstractService.java:193)
at
org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.main(ResourceManager.java:995)
Caused by: java.io.IOException: Unable to initialize WebAppContext
at org.apache.hadoop.http.HttpServer2.start(HttpServer2.java:809)
at org.apache.hadoop.yarn.webapp.WebApps$Builder.start(WebApps.java:245)
... 4 more
Caused by: javax.servlet.ServletException: javax.servlet.ServletException:
Principal not defined in configuration
at

Re: regression in 2.4? YARN servers on secure cluster startup

2014-01-29 Thread Jason Lowe
The RM issue should be fixed by YARN-1600 which I just committed this 
morning.


Jason

On 01/29/2014 10:33 AM, Steve Loughran wrote:

I'm just switching over to use the 2.4-SNAPSHOT in a secured pseudo-dist
cluster, and now the services are failing to come up because the web
principals haven't been defined. Example

[...quoted logs and stack traces trimmed; see the original message earlier in this thread...]

Re: regression in 2.4? YARN servers on secure cluster startup

2014-01-29 Thread Steve Loughran
On 29 January 2014 17:13, Jason Lowe jl...@yahoo-inc.com wrote:

 The RM issue should be fixed by YARN-1600 which I just committed this
 morning.


If that's morning US time then hopefully yes. My build dates from about
13:00 GMT.



[jira] [Created] (HADOOP-10307) Support multiple Authentication mechanisms for HTTP

2014-01-29 Thread Benoy Antony (JIRA)
Benoy Antony created HADOOP-10307:
-

 Summary: Support multiple Authentication mechanisms for HTTP
 Key: HADOOP-10307
 URL: https://issues.apache.org/jira/browse/HADOOP-10307
 Project: Hadoop Common
  Issue Type: Improvement
  Components: security
Affects Versions: 2.2.0
Reporter: Benoy Antony
Assignee: Benoy Antony


Currently it is possible to specify custom Authentication Handlers for HTTP
authentication.
We have a requirement to support multiple mechanisms to authenticate HTTP
access.
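One way to read this requirement is a composite handler that tries each
configured mechanism in turn. The sketch below uses a simplified, hypothetical
Handler interface (not the real hadoop-auth AuthenticationHandler API) purely
to illustrate the delegation pattern:

import java.util.Arrays;
import java.util.List;

// Hypothetical, simplified stand-in for an HTTP authentication handler.
interface Handler {
  String tryAuthenticate(String authorizationHeader); // returns a user, or null
}

// Composite that supports multiple mechanisms by trying each delegate in order.
class MultiMechanismHandler implements Handler {
  private final List<Handler> delegates;

  MultiMechanismHandler(Handler... delegates) {
    this.delegates = Arrays.asList(delegates);
  }

  @Override
  public String tryAuthenticate(String authorizationHeader) {
    for (Handler h : delegates) {
      String user = h.tryAuthenticate(authorizationHeader);
      if (user != null) {
        return user;   // first mechanism that accepts the request wins
      }
    }
    return null;       // no configured mechanism accepted the request
  }
}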



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Created] (HADOOP-10308) Remove from core-default.xml unsupported 'classic' and add 'yarn-tez' as value for mapreduce.framework.name property

2014-01-29 Thread Eric Charles (JIRA)
Eric Charles created HADOOP-10308:
-

 Summary: Remove from core-default.xml unsupported 'classic' and 
add 'yarn-tez' as value for mapreduce.framework.name property
 Key: HADOOP-10308
 URL: https://issues.apache.org/jira/browse/HADOOP-10308
 Project: Hadoop Common
  Issue Type: Improvement
  Components: conf
Reporter: Eric Charles


Classic MRv1 is no longer supported in trunk.
On the other hand, we will soon have a yarn-tez implementation of MapReduce (a
Tez layer that allows a single AM for all MapReduce jobs).

core-default.xml must reflect this.
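For context, a minimal illustration of how a job selects the framework through
this property (shown with the generic Configuration API only, not a full job
submission):

import org.apache.hadoop.conf.Configuration;

public class FrameworkNameExample {
  public static void main(String[] args) {
    Configuration conf = new Configuration();
    // "yarn" and "local" remain supported values; "classic" is the value
    // proposed for removal, and "yarn-tez" is the value this issue anticipates.
    conf.set("mapreduce.framework.name", "yarn");
    System.out.println(conf.get("mapreduce.framework.name"));
  }
}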



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Reopened] (HADOOP-10112) har file listing doesn't work with wild card

2014-01-29 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10112?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang reopened HADOOP-10112:
--


 har file listing  doesn't work with wild card
 -

 Key: HADOOP-10112
 URL: https://issues.apache.org/jira/browse/HADOOP-10112
 Project: Hadoop Common
  Issue Type: Bug
  Components: tools
Affects Versions: 2.2.0
Reporter: Brandon Li
Assignee: Brandon Li
 Fix For: 2.3.0

 Attachments: HADOOP-10112.004.patch


 [test@test001 root]$ hdfs dfs -ls har:///tmp/filename.har/*
 -ls: Can not create a Path from an empty string
 Usage: hadoop fs [generic options] -ls [-d] [-h] [-R] [path ...]
 It works without *.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Resolved] (HADOOP-10112) har file listing doesn't work with wild card

2014-01-29 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10112?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang resolved HADOOP-10112.
--

Resolution: Invalid

 har file listing  doesn't work with wild card
 -

 Key: HADOOP-10112
 URL: https://issues.apache.org/jira/browse/HADOOP-10112
 Project: Hadoop Common
  Issue Type: Bug
  Components: tools
Affects Versions: 2.2.0
Reporter: Brandon Li
Assignee: Brandon Li
 Fix For: 2.3.0

 Attachments: HADOOP-10112.004.patch


 [test@test001 root]$ hdfs dfs -ls har:///tmp/filename.har/*
 -ls: Can not create a Path from an empty string
 Usage: hadoop fs [generic options] -ls [-d] [-h] [-R] [path ...]
 It works without *.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Resolved] (HADOOP-10233) RPC lacks output flow control

2014-01-29 Thread Jason Lowe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10233?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Lowe resolved HADOOP-10233.
-

Resolution: Duplicate

 RPC lacks output flow control
 -

 Key: HADOOP-10233
 URL: https://issues.apache.org/jira/browse/HADOOP-10233
 Project: Hadoop Common
  Issue Type: Bug
  Components: ipc
Affects Versions: 2.0.0-alpha, 3.0.0
Reporter: Daryn Sharp
Priority: Critical

 The RPC layer has input flow control via the call queue (callq), but it lacks 
 any output flow control.  A handler will try to send the response directly.  If 
 the full response is not sent, it is queued for the background responder 
 thread.  The RPC layer may end up queuing so many buffers that it locks up 
 in GC.
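A sketch of the kind of output flow control being asked for (not Hadoop's
actual ipc.Server code): bound the pending-response queue so handlers apply
backpressure instead of queuing buffers without limit.

import java.nio.ByteBuffer;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

class BoundedResponseQueue {
  private final BlockingQueue<ByteBuffer> pending;

  BoundedResponseQueue(int maxPendingResponses) {
    this.pending = new ArrayBlockingQueue<>(maxPendingResponses);
  }

  /** Called by a handler when the full response could not be written directly. */
  void enqueue(ByteBuffer response) throws InterruptedException {
    pending.put(response);   // blocks when the queue is full: backpressure
  }

  /** Called by the background responder thread. */
  ByteBuffer next() throws InterruptedException {
    return pending.take();
  }
}

Blocking the handler is only one possible policy; deferring reads from that
client or shedding load are alternatives a real fix could choose.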



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


Re: Re-swizzle 2.3

2014-01-29 Thread Jason Lowe
I noticed that somehow the target version field in JIRA was invisibly 
cleared on most of the Blocker/Critical JIRAs that were originally 
targeted for 2.3.0/2.4.0.  I happened to have an old browser tab lying 
around from an earlier query for these and I tried to fix them up, 
marking some for 2.4.0 that IMHO weren't show-stoppers for the 2.3.0 
release.  It is a bit concerning that the JIRA history showed that the 
target version was set at some point in the past but no record of it 
being cleared.


Jason

On 01/29/2014 07:58 AM, Arun C Murthy wrote:

Mostly ready from a JIRA perspective.

Committers - Henceforth, please use extreme caution while committing to 
branch-2.3. Please commit *only* blockers to 2.3.

thanks,
Arun

On Jan 28, 2014, at 3:30 PM, Arun C Murthy a...@hortonworks.com wrote:


Fixing up stuff now, thanks to Andrew for volunteering to help with Common/HDFS.

Arun

On Jan 28, 2014, at 10:54 AM, Arun C Murthy a...@hortonworks.com wrote:


Sorry, missed this. Go ahead, I'll fix things up at the back end. Thanks.

On Jan 28, 2014, at 12:11 AM, Sandy Ryza sandy.r...@cloudera.com wrote:


Going forward with commits because it seems like others have been doing so


On Mon, Jan 27, 2014 at 1:31 PM, Sandy Ryza sandy.r...@cloudera.com wrote:


We should hold off commits until that's done, right?


On Mon, Jan 27, 2014 at 1:07 PM, Arun C Murthy a...@hortonworks.com wrote:


Yep, on it as we speak. :)


Arun

On Jan 27, 2014, at 12:36 PM, Jason Lowe jl...@yahoo-inc.com wrote:


Thanks, Arun.  Are there plans to update the Fix Versions and

CHANGES.txt accordingly?  There are a lot of JIRAs that are now going to
ship in 2.3.0 but the JIRA and CHANGES.txt says they're not fixed until
2.4.0.

Jason

On 01/27/2014 08:47 AM, Arun C Murthy wrote:

Done. I've re-created branch-2.3 from branch-2.

thanks,
Arun

On Jan 23, 2014, at 2:40 AM, Arun Murthy a...@hortonworks.com wrote:


Based on the discussion at common-dev@, we've decided to target 2.3
off the tip of branch-2 based on the 2 major HDFS features which are
Heterogeneous Storage (HDFS-2832) and HDFS Cache (HDFS-4949).

I'll create a new branch-2.3 on (1/24) at 6pm PST.

thanks,
Arun

--
Arun C. Murthy
Hortonworks Inc.
http://hortonworks.com/




--
Arun C. Murthy
Hortonworks Inc.
http://hortonworks.com/







--
Arun C. Murthy
Hortonworks Inc.
http://hortonworks.com/



--
Arun C. Murthy
Hortonworks Inc.
http://hortonworks.com/



--
Arun C. Murthy
Hortonworks Inc.
http://hortonworks.com/







Re: Re-swizzle 2.3

2014-01-29 Thread Doug Cutting
On Wed, Jan 29, 2014 at 12:30 PM, Jason Lowe jl...@yahoo-inc.com wrote:
  It is a bit concerning that the JIRA history showed that the target version
 was set at some point in the past but no record of it being cleared.

Perhaps the version itself was renamed?

Doug


[jira] [Created] (HADOOP-10309) S3 block filesystem should more aggressively delete temporary files

2014-01-29 Thread Joe Kelley (JIRA)
Joe Kelley created HADOOP-10309:
---

 Summary: S3 block filesystem should more aggressively delete 
temporary files
 Key: HADOOP-10309
 URL: https://issues.apache.org/jira/browse/HADOOP-10309
 Project: Hadoop Common
  Issue Type: Improvement
  Components: fs/s3
Reporter: Joe Kelley
Priority: Minor


The S3 FileSystem reading implementation downloads block files into a 
configurable temporary directory. deleteOnExit() is called on these files, so 
they are deleted when the JVM exits.

However, JVM reuse can lead to JVMs that stick around for a very long time. 
This can cause these temporary files to build up indefinitely and, in the worst 
case, fill up the local directory.

After a block file has been read, there is no reason to keep it around. It 
should be deleted.

Writing to the S3 FileSystem already has this behavior; after a temporary block 
file is written and uploaded to S3, it is deleted immediately; there is no need 
to wait for the JVM to exit.
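A minimal sketch of the proposed read-side behavior (not the actual S3
FileSystem code; the class name and structure are illustrative only):

import java.io.File;
import java.io.FileInputStream;
import java.io.IOException;
import java.io.InputStream;

// Wraps a locally cached block file and deletes it as soon as the stream is
// closed, instead of relying solely on File.deleteOnExit().
class EagerlyDeletedBlockStream extends InputStream {
  private final File localBlockFile;   // temporary copy downloaded from S3
  private final InputStream in;

  EagerlyDeletedBlockStream(File localBlockFile) throws IOException {
    this.localBlockFile = localBlockFile;
    this.in = new FileInputStream(localBlockFile);
  }

  @Override
  public int read() throws IOException {
    return in.read();
  }

  @Override
  public void close() throws IOException {
    try {
      in.close();
    } finally {
      // Remove the temporary file immediately rather than waiting for JVM exit.
      if (!localBlockFile.delete()) {
        localBlockFile.deleteOnExit();   // fall back if the delete fails
      }
    }
  }
}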



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


Re: Re-swizzle 2.3

2014-01-29 Thread Andrew Wang
I just finished tuning up branch-2.3 and fixing up the HDFS and Common
CHANGES.txt in trunk, branch-2, and branch-2.3. I had to merge back a few
JIRAs committed between the swizzle and now where the fix version was 2.3
but weren't in branch-2.3.

I think the only two HDFS and Common JIRAs that are marked for 2.4 are
these:

HDFS-5842 Cannot create hftp filesystem when using a proxy user ugi and a
doAs on a secure cluster
HDFS-5781 Use an array to record the mapping between FSEditLogOpCode and
the corresponding byte value

Jing, these both look safe to me if you want to merge them back, or I can
just do it.

Thanks,
Andrew

On Wed, Jan 29, 2014 at 1:21 PM, Doug Cutting cutt...@apache.org wrote:

 On Wed, Jan 29, 2014 at 12:30 PM, Jason Lowe jl...@yahoo-inc.com wrote:
   It is a bit concerning that the JIRA history showed that the target
version
  was set at some point in the past but no record of it being cleared.

 Perhaps the version itself was renamed?

 Doug


[jira] [Created] (HADOOP-10310) SaslRpcServer should be initialized even when no secret manager present

2014-01-29 Thread Aaron T. Myers (JIRA)
Aaron T. Myers created HADOOP-10310:
---

 Summary: SaslRpcServer should be initialized even when no secret 
manager present
 Key: HADOOP-10310
 URL: https://issues.apache.org/jira/browse/HADOOP-10310
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Affects Versions: 2.3.0
Reporter: Aaron T. Myers
Assignee: Aaron T. Myers
Priority: Blocker


HADOOP-8983 made a change which caused the SaslRpcServer not to be initialized 
if there is no secret manager present. This works fine for most Hadoop daemons 
because they need a secret manager to do their business, but JournalNodes do 
not. The result of this is that JournalNodes are broken and will not handle 
RPCs in a Kerberos-enabled environment, since the SaslRpcServer will not be 
initialized.
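A sketch of the initialization condition being described, assuming
SaslRpcServer.init(Configuration) and UserGroupInformation.isSecurityEnabled()
as the entry points (an illustration only, not the actual Server.java change):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.security.SaslRpcServer;
import org.apache.hadoop.security.UserGroupInformation;
import org.apache.hadoop.security.token.SecretManager;

public class SaslInitSketch {
  // Initialize SASL whenever security is enabled, not only when a
  // delegation-token secret manager is present.
  static void maybeInitSasl(Configuration conf, SecretManager<?> secretManager) {
    if (secretManager != null || UserGroupInformation.isSecurityEnabled()) {
      SaslRpcServer.init(conf);
    }
  }

  public static void main(String[] args) {
    // JournalNodes construct their RPC server without a secret manager.
    maybeInitSasl(new Configuration(), null);
  }
}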



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


Re: Re-swizzle 2.3

2014-01-29 Thread Aaron T. Myers
I just filed this JIRA as a blocker for 2.3:
https://issues.apache.org/jira/browse/HADOOP-10310

The tl;dr is that JNs will not work with security enabled without this fix.
If others don't think that supporting QJM with security enabled warrants a
blocker for 2.3, then we can certainly lower the priority, but it seems
pretty important to me.

Best,
Aaron

--
Aaron T. Myers
Software Engineer, Cloudera


 On Wed, Jan 29, 2014 at 6:24 PM, Andrew Wang andrew.w...@cloudera.com wrote:

 I just finished tuning up branch-2.3 and fixing up the HDFS and Common
 CHANGES.txt in trunk, branch-2, and branch-2.3. I had to merge back a few
 JIRAs committed between the swizzle and now where the fix version was 2.3
 but weren't in branch-2.3.

 I think the only two HDFS and Common JIRAs that are marked for 2.4 are
 these:

 HDFS-5842 Cannot create hftp filesystem when using a proxy user ugi and a
 doAs on a secure cluster
 HDFS-5781 Use an array to record the mapping between FSEditLogOpCode and
 the corresponding byte value

 Jing, these both look safe to me if you want to merge them back, or I can
 just do it.

 Thanks,
 Andrew

 On Wed, Jan 29, 2014 at 1:21 PM, Doug Cutting cutt...@apache.org wrote:
 
  On Wed, Jan 29, 2014 at 12:30 PM, Jason Lowe jl...@yahoo-inc.com
 wrote:
It is a bit concerning that the JIRA history showed that the target
 version
   was set at some point in the past but no record of it being cleared.
 
  Perhaps the version itself was renamed?
 
  Doug



Re: Re-swizzle 2.3

2014-01-29 Thread Stack
I filed https://issues.apache.org/jira/browse/HDFS-5852 as a blocker.  See
what ye all think.

Thanks,
St.Ack


On Wed, Jan 29, 2014 at 3:52 PM, Aaron T. Myers a...@cloudera.com wrote:

 I just filed this JIRA as a blocker for 2.3:
 https://issues.apache.org/jira/browse/HADOOP-10310

 The tl;dr is that JNs will not work with security enabled without this fix.
 If others don't think that supporting QJM with security enabled warrants a
 blocker for 2.3, then we can certainly lower the priority, but it seems
 pretty important to me.

 Best,
 Aaron

 --
 Aaron T. Myers
 Software Engineer, Cloudera


 On Wed, Jan 29, 2014 at 6:24 PM, Andrew Wang andrew.w...@cloudera.com
 wrote:

  I just finished tuning up branch-2.3 and fixing up the HDFS and Common
  CHANGES.txt in trunk, branch-2, and branch-2.3. I had to merge back a few
  JIRAs committed between the swizzle and now where the fix version was 2.3
  but weren't in branch-2.3.
 
  I think the only two HDFS and Common JIRAs that are marked for 2.4 are
  these:
 
  HDFS-5842 Cannot create hftp filesystem when using a proxy user ugi and a
  doAs on a secure cluster
  HDFS-5781 Use an array to record the mapping between FSEditLogOpCode and
  the corresponding byte value
 
  Jing, these both look safe to me if you want to merge them back, or I can
  just do it.
 
  Thanks,
  Andrew
 
  On Wed, Jan 29, 2014 at 1:21 PM, Doug Cutting cutt...@apache.org
 wrote:
  
   On Wed, Jan 29, 2014 at 12:30 PM, Jason Lowe jl...@yahoo-inc.com
  wrote:
 It is a bit concerning that the JIRA history showed that the target
  version
was set at some point in the past but no record of it being cleared.
  
   Perhaps the version itself was renamed?
  
   Doug
 



Re: Maintaining documentation

2014-01-29 Thread Masatake Iwasaki

Hi JIRA and Wiki admins,

I would like permission to assign JIRA issues to myself
and edit Hadoop Wiki pages.
I have already fixed some issues on HADOOP-10086 and
I think there is further work to do.

My username is iwasakims on JIRA
and MasatakeIwasaki on Hadoop Wiki.

Regards,
Masatake Iwasaki



(2014/01/28 15:47), Arpit Agarwal wrote:

Done, thanks guys for finding and fixing doc issues!


On Mon, Jan 27, 2014 at 10:27 PM, Akira AJISAKA
ajisa...@oss.nttdata.co.jp wrote:


Thank you for reviewing and committing, Arpit!



HADOOP-6350 - not sure of the status. Is the attached patch ready for
review?


The patch is not ready. I need to add the new metrics for the latest trunk.
I'll create a new patch this week.

In addition, could you please review HADOOP-9830?
This issue fixes a typo on the top page and I think it's easy to
review.

Akira


(2014/01/28 6:44), Arpit Agarwal wrote:


I committed the following patches today:
HADOOP-10086
HADOOP-9982
HADOOP-10212
HDFS-5297

Remaining from your initial list:
HADOOP-10139 - needs to be rebased
HDFS-5492 - I will review it this week.
HADOOP-6350 - not sure of the status. Is the attached patch ready for
review?

I will look at the other common/HDFS patches as I get time after this
week,
unless another committer wants to take a look.

Arpit

On Wed, Jan 22, 2014 at 2:40 AM, Akira AJISAKA
ajisa...@oss.nttdata.co.jp wrote:

  Akira, could you please resend a list of your patches that need reviewing
  if you have it handy.


Here is the list of all the documentation patches I have.


https://issues.apache.org/jira/browse/HADOOP-6350?jql=project%20in%20(HADOOP%2C%20YARN%2C%20HDFS%2C%20MAPREDUCE)%20AND%20resolution%20%3D%20Unresolved%20AND%20component%20%3D%20documentation%20AND%20status%20%3D%20%22Patch%20Available%22%20AND%20assignee%20in%20(ajisakaa)%20ORDER%20BY%20priority%20DESC

In the above list, the JIRAs I consider high priority are as follows:

   * HADOOP-6350
   * HADOOP-9982
   * HADOOP-10139
   * HADOOP-10212
   * YARN-1531
   * HDFS-5297
   * HDFS-5492
   * MAPREDUCE-4282

I would appreciate your reviews.

Thanks,
Akira


(2014/01/22 13:17), Masatake Iwasaki wrote:

 I will review HADOOP-10086 (https://issues.apache.org/jira/browse/HADOOP-10086)
 this week unless another committer gets to it first.

I appreciate it Arpit!


Thanks for your efforts to clean up the documentation. Unfortunately
some
fixes can get lost in the noise and this is true not just for
documentation
fixes, so thanks for bringing these to our attention.

I will do what I can to improve this, such as helping with the review process.


Regards,
Masatake Iwasaki


(2014/01/22 3:22), Arpit Agarwal wrote:

I will review HADOOP-10086 (https://issues.apache.org/jira/browse/HADOOP-10086)
this week unless another committer gets to it first.

Akira, could you please resend a list of your patches that need
reviewing
if you have it handy.

Thanks for your efforts to clean up the documentation. Unfortunately
some
fixes can get lost in the noise and this is true not just for
documentation
fixes, so thanks for bringing these to our attention.

Regards,
Arpit




On Mon, Jan 20, 2014 at 11:47 PM, Akira AJISAKA
ajisa...@oss.nttdata.co.jp wrote:

   Hi All,



I wrote several patches for documentation,
but most of them have not been reviewed.

I want to improve not only the new features but also
the documents. I also think the documents are not
well maintained, mainly because:

* Deprecated commands and parameters still exist.
* Undocumented command options exist.
* There are a lot of dead links.

I'll be happy if the above problems are fixed for the 2.4 release.
I'm looking forward to your advice and reviews!

Thanks Masatake for this proposal.

Regards,
Akira

(2014/01/21 9:03), Masatake Iwasaki wrote:

  Hi developers,


I wrote a patch for Hadoop's site documentation and would like
reviews.
https://issues.apache.org/jira/browse/HADOOP-10086

Though I would like to fix the documentation further,
there seem to be many open JIRA issues concerning documents.
I think the documentation is not well maintained now
because documentation work does not tend to get
enough attention from reviewers and committers.

In order to improve the situation,
should I write Wiki pages rather than site documentation,
or refactor the whole documentation structure
along the lines of the HBase book (http://hbase.apache.org/book/book.html)?

I would appreciate your advice.

Regards,
Masatake Iwasaki



[jira] [Created] (HADOOP-10311) Cleanup vendor names in the code base

2014-01-29 Thread Suresh Srinivas (JIRA)
Suresh Srinivas created HADOOP-10311:


 Summary: Cleanup vendor names in the code base
 Key: HADOOP-10311
 URL: https://issues.apache.org/jira/browse/HADOOP-10311
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.3.0
Reporter: Suresh Srinivas
Priority: Blocker






--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


Re: Re-swizzle 2.3

2014-01-29 Thread Vinod Kumar Vavilapalli

Okay, I'll look at YARN and MR CHANGES.txt problems. Seems like they aren't 
addressed yet.

+Vinod


On Jan 29, 2014, at 3:24 PM, Andrew Wang andrew.w...@cloudera.com wrote:

 I just finished tuning up branch-2.3 and fixing up the HDFS and Common
 CHANGES.txt in trunk, branch-2, and branch-2.3. I had to merge back a few
 JIRAs committed between the swizzle and now where the fix version was 2.3
 but weren't in branch-2.3.
 
 I think the only two HDFS and Common JIRAs that are marked for 2.4 are
 these:
 
 HDFS-5842 Cannot create hftp filesystem when using a proxy user ugi and a
 doAs on a secure cluster
 HDFS-5781 Use an array to record the mapping between FSEditLogOpCode and
 the corresponding byte value
 
 Jing, these both look safe to me if you want to merge them back, or I can
 just do it.
 
 Thanks,
 Andrew
 
 On Wed, Jan 29, 2014 at 1:21 PM, Doug Cutting cutt...@apache.org wrote:
 
 On Wed, Jan 29, 2014 at 12:30 PM, Jason Lowe jl...@yahoo-inc.com wrote:
 It is a bit concerning that the JIRA history showed that the target
 version
 was set at some point in the past but no record of it being cleared.
 
 Perhaps the version itself was renamed?
 
 Doug



