[jira] [Updated] (HADOOP-8201) create the configure script for native compilation as part of the build

2012-03-24 Thread Matt Foley (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8201?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt Foley updated HADOOP-8201:
---

Fix Version/s: 1.0.2
   Issue Type: Bug  (was: Improvement)

Committing to branch-1.0.2, branch-1.0, and branch-1.
Thanks, Giri!

> create the configure script for native compilation as part of the build
> ---
>
> Key: HADOOP-8201
> URL: https://issues.apache.org/jira/browse/HADOOP-8201
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 1.0.0, 1.0.1
>Reporter: Giridharan Kesavan
>Assignee: Giridharan Kesavan
>Priority: Blocker
> Fix For: 1.0.2
>
> Attachments: HADOOP-8201.patch
>
>
> The configure script is checked into svn and is not regenerated during the 
> build. Ideally the configure script should not be checked into svn and should 
> instead be generated during the build using autoreconf.
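For context, the proposed flow would regenerate the script at build time rather than committing the generated file; a minimal command sketch, assuming autoconf/automake are installed and the native source tree keeps a configure.ac (the path shown is illustrative):

```shell
# Regenerate ./configure from configure.ac at build time instead of
# keeping the generated file under version control.
cd src/native                  # illustrative path to the native source
autoreconf --install --force   # runs aclocal/autoconf/automake as needed
./configure
make
```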

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HADOOP-8201) create the configure script for native compilation as part of the build

2012-03-22 Thread Matt Foley (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8201?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt Foley updated HADOOP-8201:
---

Priority: Blocker  (was: Major)

> create the configure script for native compilation as part of the build
> ---
>
> Key: HADOOP-8201
> URL: https://issues.apache.org/jira/browse/HADOOP-8201
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 1.0.0, 1.0.1
>Reporter: Giridharan Kesavan
>Assignee: Giridharan Kesavan
>Priority: Blocker
> Attachments: HADOOP-8201.patch
>
>
> The configure script is checked into svn and is not regenerated during the 
> build. Ideally the configure script should not be checked into svn and should 
> instead be generated during the build using autoreconf.





[jira] [Updated] (HADOOP-8201) create the configure script for native compilation as part of the build

2012-03-22 Thread Matt Foley (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8201?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt Foley updated HADOOP-8201:
---

Description: 
The configure script is checked into svn and is not regenerated during the 
build. Ideally the configure script should not be checked into svn and should 
instead be generated during the build using autoreconf.


  was:
The configure script is checked into svn and is not regenerated during the 
build. Ideally the configure script should not be checked into svn and 
generated during the build using autoreconf.


Summary: create the configure script for native compilation as part of 
the build  (was: Snappy: create the configure script for native compilation as 
part of the build)

> create the configure script for native compilation as part of the build
> ---
>
> Key: HADOOP-8201
> URL: https://issues.apache.org/jira/browse/HADOOP-8201
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 1.0.0, 1.0.1
>Reporter: Giridharan Kesavan
>Assignee: Giridharan Kesavan
> Attachments: HADOOP-8201.patch
>
>
> The configure script is checked into svn and is not regenerated during the 
> build. Ideally the configure script should not be checked into svn and should 
> instead be generated during the build using autoreconf.





[jira] [Updated] (HADOOP-8201) Snappy: create the configure script for native compilation as part of the build

2012-03-22 Thread Matt Foley (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8201?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt Foley updated HADOOP-8201:
---

Summary: Snappy: create the configure script for native compilation as part 
of the build  (was: create the configure script for native compilation as part 
of the build)

> Snappy: create the configure script for native compilation as part of the 
> build
> ---
>
> Key: HADOOP-8201
> URL: https://issues.apache.org/jira/browse/HADOOP-8201
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 1.0.0, 1.0.1
>Reporter: Giridharan Kesavan
>Assignee: Giridharan Kesavan
> Attachments: HADOOP-8201.patch
>
>
> The configure script is checked into svn and is not regenerated during the 
> build. Ideally the configure script should not be checked into svn and 
> generated during the build using autoreconf.





[jira] [Updated] (HADOOP-7381) FindBugs OutOfMemoryError

2012-03-20 Thread Matt Foley (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-7381?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt Foley updated HADOOP-7381:
---

Target Version/s: 1.0.3

Yes, let's get this in the next release.
+1 for code review.

> FindBugs OutOfMemoryError
> -
>
> Key: HADOOP-7381
> URL: https://issues.apache.org/jira/browse/HADOOP-7381
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 0.20.205.0
> Environment: FindBugs 1.3.9, ant 1.8.2, RHEL6, Jenkins 1.414 in 
> Tomcat 7.0.14, Sun Java HotSpot(TM) 64-Bit Server VM (build 14.2-b01, mixed 
> mode)
>Reporter: Joep Rottinghuis
>Assignee: Joep Rottinghuis
> Attachments: hadoop-7381.patch
>
>
> When running the findbugs target from Jenkins, I get an OutOfMemoryError.
> The "effort" in FindBugs is set to Max, which ends up using a lot of memory 
> to go through all the classes. The jvmargs passed to FindBugs are hardcoded 
> to a 512 MB max.
> We can leave the default at 512M, as long as we pass it as an ant parameter 
> that can be overridden in individual cases through -D, or in the 
> build.properties file (either the basedir or the user's home directory).
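The proposed knob could look roughly like this in build.xml. This is a sketch, not the actual Hadoop build file: the property name findbugs.heap.size and the surrounding attributes are illustrative, though the findbugs Ant task does accept a jvmargs attribute.

```xml
<!-- Default stays 512m; override per-run with:
     ant findbugs -Dfindbugs.heap.size=1024m
     (or set findbugs.heap.size in build.properties) -->
<property name="findbugs.heap.size" value="512m"/>

<findbugs home="${findbugs.home}" effort="max"
          output="xml" outputFile="${findbugs.report.file}"
          jvmargs="-Xmx${findbugs.heap.size}">
  <class location="${build.classes}"/>
  <sourcePath path="${src.dir}"/>
</findbugs>
```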





[jira] [Updated] (HADOOP-8166) Remove JDK 1.5 dependency from building forrest docs

2012-03-18 Thread Matt Foley (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8166?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt Foley updated HADOOP-8166:
---

Target Version/s: 1.0.3
   Fix Version/s: (was: 1.0.2)

> Remove JDK 1.5 dependency from building forrest docs
> 
>
> Key: HADOOP-8166
> URL: https://issues.apache.org/jira/browse/HADOOP-8166
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: documentation
>Affects Versions: 0.20.203.0, 0.20.204.0, 0.20.205.0, 1.0.0, 1.0.1
>Reporter: Mark Butler
> Attachments: forrest.patch, hadoop-8166.txt
>
>
> Currently Hadoop requires both JDK 1.6 and JDK 1.5. JDK 1.5 is a requirement 
> of Forrest. It is easy to remove the latter requirement by turning off 
> forrest.validate.sitemap and forrest.validate.skins.stylesheets.





[jira] [Updated] (HADOOP-8151) Error handling in snappy decompressor throws invalid exceptions

2012-03-18 Thread Matt Foley (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8151?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt Foley updated HADOOP-8151:
---

Target Version/s: 0.24.0, 0.23.3, 1.0.3  (was: 1.0.2, 0.23.3, 0.24.0)

> Error handling in snappy decompressor throws invalid exceptions
> ---
>
> Key: HADOOP-8151
> URL: https://issues.apache.org/jira/browse/HADOOP-8151
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: io, native
>Affects Versions: 0.24.0, 1.0.2
>Reporter: Todd Lipcon
>Assignee: Matt Foley
> Attachments: HADOOP-8151-branch-1.0.patch
>
>
> SnappyDecompressor.c has the following code in a few places:
> {code}
> THROW(env, "Ljava/lang/InternalError", "Could not decompress data. Buffer length is too small.");
> {code}
> This is incorrect, though, since the THROW macro takes the class name without 
> the "L" prefix. This results in a ClassNotFoundException for 
> Ljava.lang.InternalError being thrown instead of the intended exception.
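The symptom is easy to reproduce from plain Java, since class lookup treats the descriptor-style name literally. A sketch (the class name ThrowNameDemo is illustrative; JNI's FindClass takes '/'-separated names while Class.forName takes the dotted form, but both reject the "L" prefix):

```java
// Demonstrates the failure mode described above: looking up a class by a
// descriptor-style name (leading "L") yields ClassNotFoundException, which
// is what surfaced instead of the intended InternalError.
public class ThrowNameDemo {
    static void lookup(String name) {
        try {
            Class.forName(name);
            System.out.println(name + " -> resolved");
        } catch (ClassNotFoundException e) {
            System.out.println(name + " -> ClassNotFoundException");
        }
    }

    public static void main(String[] args) {
        lookup("Ljava.lang.InternalError");  // buggy form: lookup fails
        lookup("java.lang.InternalError");   // correct form: resolves
    }
}
```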





[jira] [Updated] (HADOOP-8136) Enhance hadoop to use a newer version (0.8.1) of the jets3t library

2012-03-18 Thread Matt Foley (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8136?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt Foley updated HADOOP-8136:
---

Target Version/s: 1.0.3  (was: 1.0.2)

> Enhance hadoop to use a newer version (0.8.1) of the jets3t library
> ---
>
> Key: HADOOP-8136
> URL: https://issues.apache.org/jira/browse/HADOOP-8136
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/s3
>Affects Versions: 0.22.0, 1.0.0, 0.23.3
> Environment: Ubuntu 11.04, 64 bit, JDK 1.6.0_30
>Reporter: Jagane Sundar
> Attachments: HADOOP-8136-0-for_branch_1_0.patch
>
>   Original Estimate: 168h
>  Remaining Estimate: 168h
>
> Hadoop is built against, and includes, an older version of the Jets3t library 
> - version 0.6.1.
> The current version of the Jets3t library (as of March 2012) is 0.8.1. This 
> new version includes many improvements such as support for "Requester-Pays" 
> buckets.
> Since hadoop includes a copy of the version 0.6.1 jets3t library, and since 
> this version ends up early in the CLASSPATH, any MapReduce application that 
> wants to use the jets3t library ends up getting version 0.6.1 of the jets3t 
> library. The MR application fails, usually with an error stating that the 
> method signature of some method in the Jets3t library does not match.
> It would be useful to enhance Jets3tNativeFileSystemStore.java to use the API 
> published by the 0.8.1 version of the jets3t library.





[jira] [Updated] (HADOOP-8151) Error handling in snappy decompressor throws invalid exceptions

2012-03-18 Thread Matt Foley (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8151?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt Foley updated HADOOP-8151:
---

Attachment: HADOOP-8151-branch-1.0.patch

Examination of related files shows Todd's comment is manifestly correct.  
Attached is a patch for branch-1.0.  Please review.

> Error handling in snappy decompressor throws invalid exceptions
> ---
>
> Key: HADOOP-8151
> URL: https://issues.apache.org/jira/browse/HADOOP-8151
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: io, native
>Affects Versions: 0.24.0, 1.0.2
>Reporter: Todd Lipcon
>Assignee: Matt Foley
> Attachments: HADOOP-8151-branch-1.0.patch
>
>
> SnappyDecompressor.c has the following code in a few places:
> {code}
> THROW(env, "Ljava/lang/InternalError", "Could not decompress data. Buffer length is too small.");
> {code}
> This is incorrect, though, since the THROW macro takes the class name without 
> the "L" prefix. This results in a ClassNotFoundException for 
> Ljava.lang.InternalError being thrown instead of the intended exception.





[jira] [Updated] (HADOOP-8132) 64bit secure datanodes do not start as the jsvc path is wrong

2012-03-18 Thread Matt Foley (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8132?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt Foley updated HADOOP-8132:
---

  Resolution: Fixed
   Fix Version/s: 1.0.2
Target Version/s: 1.0.2  (was: 1.0.0)
  Status: Resolved  (was: Patch Available)

Committed to branch-1.0 and branch-1.
Thanks, Arpit!

> 64bit secure datanodes do not start as the jsvc path is wrong
> -
>
> Key: HADOOP-8132
> URL: https://issues.apache.org/jira/browse/HADOOP-8132
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 1.0.1
>Reporter: Arpit Gupta
>Assignee: Arpit Gupta
> Fix For: 1.0.2
>
> Attachments: HADOOP-8132.branch-1.0.patch
>
>
> 64bit secure datanodes were looking for /usr/libexec/../libexec/jsvc. instead 
> of /usr/libexec/../libexec/jsvc.amd64





[jira] [Updated] (HADOOP-8050) Deadlock in metrics

2012-03-17 Thread Matt Foley (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8050?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt Foley updated HADOOP-8050:
---

Fix Version/s: (was: 1.0.1)
   1.0.2

Was committed to 1.0.2, not 1.0.1.

> Deadlock in metrics
> ---
>
> Key: HADOOP-8050
> URL: https://issues.apache.org/jira/browse/HADOOP-8050
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: metrics
>Affects Versions: 0.20.204.0, 0.20.205.0, 0.23.0, 1.0.0
>Reporter: Kihwal Lee
>Assignee: Kihwal Lee
> Fix For: 0.23.2, 1.0.2
>
> Attachments: hadoop-8050-branch-1.patch.txt, 
> hadoop-8050-branch-1.patch.txt, hadoop-8050-branch-1.patch.txt, 
> hadoop-8050-branch-1.patch.txt, hadoop-8050-trunk.patch.txt, 
> hadoop-8050-trunk.patch.txt, hadoop-8050-trunk.patch.txt, 
> hadoop-8050.patch.txt
>
>
> The metrics serving thread and the periodic snapshot thread can deadlock.
> It happened a few times on one of the namenodes we have. When it happens, RPC 
> works but the web UI and hftp stop working. I haven't looked at trunk too 
> closely, but it might happen there too.





[jira] [Updated] (HADOOP-7206) Integrate Snappy compression

2012-03-02 Thread Matt Foley (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-7206?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt Foley updated HADOOP-7206:
---

Fix Version/s: (was: 1.0.1)
   1.0.2

> Integrate Snappy compression
> 
>
> Key: HADOOP-7206
> URL: https://issues.apache.org/jira/browse/HADOOP-7206
> Project: Hadoop Common
>  Issue Type: New Feature
>Affects Versions: 0.21.0
>Reporter: Eli Collins
>Assignee: Alejandro Abdelnur
> Fix For: 0.23.0, 1.0.2
>
> Attachments: HADOOP-7206-002.patch, HADOOP-7206-20120223.txt, 
> HADOOP-7206-20120302.txt, HADOOP-7206.patch, HADOOP-7206new-b.patch, 
> HADOOP-7206new-c.patch, HADOOP-7206new.patch, 
> HADOOP-7206revertplusnew-b.patch, HADOOP-7206revertplusnew.patch, 
> v2-HADOOP-7206-snappy-codec-using-snappy-java.txt, 
> v3-HADOOP-7206-snappy-codec-using-snappy-java.txt, 
> v4-HADOOP-7206-snappy-codec-using-snappy-java.txt, 
> v5-HADOOP-7206-snappy-codec-using-snappy-java.txt
>
>
> Google released Zippy as an open source (APLv2) project called Snappy 
> (http://code.google.com/p/snappy). This tracks integrating it into Hadoop.
> {quote}
> Snappy is a compression/decompression library. It does not aim for maximum 
> compression, or compatibility with any other compression library; instead, it 
> aims for very high speeds and reasonable compression. For instance, compared 
> to the fastest mode of zlib, Snappy is an order of magnitude faster for most 
> inputs, but the resulting compressed files are anywhere from 20% to 100% 
> bigger. On a single core of a Core i7 processor in 64-bit mode, Snappy 
> compresses at about 250 MB/sec or more and decompresses at about 500 MB/sec 
> or more.
> {quote}





[jira] [Updated] (HADOOP-8090) rename hadoop 64 bit rpm/deb package name

2012-02-19 Thread Matt Foley (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8090?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt Foley updated HADOOP-8090:
---

Attachment: HADOOP-8090-packagename-v2.patch

The platform-specific binary tarballs now use the platform tag in the artifact 
name, just like the rpm and deb artifacts; see HADOOP-8037.  To stay 
consistent, this should also switch ${os.arch} to ${os-arch}.  This is done in 
the attached revised patch.  

Also, what about the directory paths, i.e., should "Linux-amd64-64" become 
"Linux-x86_64-64"?  Please review.  Thanks.

> rename hadoop 64 bit rpm/deb package name
> -
>
> Key: HADOOP-8090
> URL: https://issues.apache.org/jira/browse/HADOOP-8090
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Giridharan Kesavan
>Assignee: Giridharan Kesavan
> Attachments: HADOOP-8090-packagename-v2.patch, packagename.patch
>
>
> change hadoop rpm/deb name from hadoop-.amd64.rpm/deb to 
> hadoop-.x86_64.rpm/deb





[jira] [Updated] (HADOOP-8050) Deadlock in metrics

2012-02-19 Thread Matt Foley (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8050?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt Foley updated HADOOP-8050:
---

   Resolution: Fixed
Fix Version/s: (was: 0.24.0)
   (was: 1.1.0)
   Status: Resolved  (was: Patch Available)

Committed to branch-1.0, branch-1, branch-0.23, and trunk.
Thanks, Kihwal and Luke!

> Deadlock in metrics
> ---
>
> Key: HADOOP-8050
> URL: https://issues.apache.org/jira/browse/HADOOP-8050
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: metrics
>Affects Versions: 0.20.204.0, 0.20.205.0, 0.23.0, 1.0.0
>Reporter: Kihwal Lee
>Assignee: Kihwal Lee
> Fix For: 1.0.1, 0.23.2
>
> Attachments: hadoop-8050-branch-1.patch.txt, 
> hadoop-8050-branch-1.patch.txt, hadoop-8050-branch-1.patch.txt, 
> hadoop-8050-branch-1.patch.txt, hadoop-8050-trunk.patch.txt, 
> hadoop-8050-trunk.patch.txt, hadoop-8050-trunk.patch.txt, 
> hadoop-8050.patch.txt
>
>
> The metrics serving thread and the periodic snapshot thread can deadlock.
> It happened a few times on one of the namenodes we have. When it happens, RPC 
> works but the web UI and hftp stop working. I haven't looked at trunk too 
> closely, but it might happen there too.





[jira] [Updated] (HADOOP-8009) Create hadoop-client and hadoop-minicluster artifacts for downstream projects

2012-02-14 Thread Matt Foley (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8009?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt Foley updated HADOOP-8009:
---

Attachment: HADOOP-8009-branch-1-add.patch

> Create hadoop-client and hadoop-minicluster artifacts for downstream projects 
> --
>
> Key: HADOOP-8009
> URL: https://issues.apache.org/jira/browse/HADOOP-8009
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 0.22.0, 0.23.0, 0.24.0, 0.23.1, 1.0.0
>Reporter: Alejandro Abdelnur
>Assignee: Alejandro Abdelnur
>Priority: Critical
> Fix For: 0.23.1, 1.0.1
>
> Attachments: HADOOP-8009-branch-1-add.patch, 
> HADOOP-8009-branch-1.patch, HADOOP-8009-existing-releases.patch, 
> HADOOP-8009.patch
>
>
> Using Hadoop from projects like Pig/Hive/Sqoop/Flume/Oozie or any in-house 
> system that interacts with Hadoop is quite challenging for the following 
> reasons:
> * *Different versions of Hadoop produce different artifacts:* Before Hadoop 
> 0.23 there was a single artifact hadoop-core, starting with Hadoop 0.23 there 
> are several (common, hdfs, mapred*, yarn*)
> * *There are no 'client' artifacts:* Current artifacts include all JARs 
> needed to run the services, thus bringing into clients several JARs that are 
> not used for job submission/monitoring (servlet, jsp, tomcat, jersey, etc.)
> * *Doing testing on the client side is also quite challenging as more 
> artifacts have to be included than the dependencies define:* for example, the 
> history-server artifact has to be explicitly included. If using Hadoop 1 
> artifacts, jersey-server has to be explicitly included.
> * *3rd party dependencies change in Hadoop from version to version:* This 
> makes things complicated for projects that have to deal with multiple 
> versions of Hadoop, as their exclusions list becomes a huge mix & match of 
> artifacts from different Hadoop versions, and it may break things when a 
> particular version of Hadoop requires a dependency that another version of 
> Hadoop does not require.
> Because of this it would be quite convenient to have the following 
> 'aggregator' artifacts:
> * *org.apache.hadoop:hadoop-client* : it includes all required JARs to use 
> Hadoop client APIs (excluding all JARs that are not needed for it)
> * *org.apache.hadoop:hadoop-minicluster* : it includes all required JARs to 
> run Hadoop Mini Clusters
> These aggregator artifacts would be created for current branches under 
> development (trunk, 0.22, 0.23, 1.0) and for released versions that are still 
> in use.
> For branches under development, these artifacts would be generated as part of 
> the build.
> For released versions we would have a special branch used only as a vehicle 
> for publishing the corresponding 'aggregator' artifacts.
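For downstream builds, consuming the aggregator would then reduce to a single dependency. A sketch of the Maven usage (the version shown is illustrative):

```xml
<!-- Pulls in only the client-side JARs, not the server-side ones
     (no servlet/jsp/jersey, etc.) -->
<dependency>
  <groupId>org.apache.hadoop</groupId>
  <artifactId>hadoop-client</artifactId>
  <version>1.0.1</version> <!-- illustrative version -->
</dependency>
```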





[jira] [Updated] (HADOOP-8037) Binary tarball does not preserve platform info for native builds, and RPMs fail to provide needed symlinks for libhadoop.so

2012-02-12 Thread Matt Foley (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8037?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt Foley updated HADOOP-8037:
---

Release Note: This fix is marked "incompatible" only because it changes the 
bin-tarball directory structure to be consistent with the source tarball 
directory structure.  The source tarball is unchanged.  RPMs and DEBs now use 
an intermediate bin-tarball with an "${os.arch}" tag (like the packages 
themselves). The un-tagged bin-tarball is now multi-platform and retains the 
structure of the source tarball; it is in fact generated by target "tar", not 
by target "binary". Finally, in the 64-bit RPMs and DEBs, the native libs go in 
the "lib64" directory instead of "lib".  (was: This fix is marked 
"incompatible" only because it changes the bin-tarball directory structure to 
be consistent with the source tarball directory structure.  Everything else (in 
particular, the source tarball and rpm directory structures) is unchanged, 
except that the 64-bit rpms and debs now use lib64 instead of lib for native 
libraries.)

> Binary tarball does not preserve platform info for native builds, and RPMs 
> fail to provide needed symlinks for libhadoop.so
> ---
>
> Key: HADOOP-8037
> URL: https://issues.apache.org/jira/browse/HADOOP-8037
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 1.0.1
>Reporter: Matt Foley
>Assignee: Matt Foley
>Priority: Blocker
> Attachments: hadoop-8027-1.patch, hadoop-8037-1.patch, 
> hadoop-8037-2.patch, hadoop-8037.patch
>
>
> The source tarball uses the "package" ant target, which includes both sets of 
> native builds (32 and 64 bit libraries), under subdirectories that are named 
> for the supported platform, so you can tell what they are.
> The binary tarball uses the "bin-package" ant target, which projects both 
> sets of native builds into a single directory, stripping out the platform 
> names from the directory paths.  Since the natively built libraries have 
> identical names, only one of each survives the process.  Afterward, there is 
> no way to know whether they are intended for 32 or 64 bit environments.
> It seems to be done this way as a step toward building the rpm and deb 
> artifacts.  But the rpms and debs are self-identifying as to the platform 
> they were built for, and contain only one set of libs each, while the binary 
> tarball isn't.  The binary tarball should have the same platform-specific 
> subdirectories that the full tarball does; but this means that the rpm and 
> deb builds have to be more careful about include/exclude specs for what goes 
> into those artifacts.





[jira] [Updated] (HADOOP-8037) Binary tarball does not preserve platform info for native builds, and RPMs fail to provide needed symlinks for libhadoop.so

2012-02-12 Thread Matt Foley (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8037?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt Foley updated HADOOP-8037:
---

Release Note: This fix is marked "incompatible" only because it changes the 
bin-tarball directory structure to be consistent with the source tarball 
directory structure.  Everything else (in particular, the source tarball and 
rpm directory structures) is unchanged, except that the 64-bit rpms and debs 
now use lib64 instead of lib for native libraries.  (was: This fix is marked 
"incompatible" only because it changes the bin-tarball directory structure to 
be consistent with the source tarball directory structure.  Everything else (in 
particular, the source tarball and rpm directory structures) is unchanged, 
except that the 64-bit rpms and debs now use lib64 instead of lib for native 
builds.)

> Binary tarball does not preserve platform info for native builds, and RPMs 
> fail to provide needed symlinks for libhadoop.so
> ---
>
> Key: HADOOP-8037
> URL: https://issues.apache.org/jira/browse/HADOOP-8037
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 1.0.1
>Reporter: Matt Foley
>Assignee: Giridharan Kesavan
>Priority: Blocker
> Attachments: hadoop-8027-1.patch, hadoop-8037-1.patch, 
> hadoop-8037-2.patch, hadoop-8037.patch
>
>
> The source tarball uses the "package" ant target, which includes both sets of 
> native builds (32 and 64 bit libraries), under subdirectories that are named 
> for the supported platform, so you can tell what they are.
> The binary tarball uses the "bin-package" ant target, which projects both 
> sets of native builds into a single directory, stripping out the platform 
> names from the directory paths.  Since the natively built libraries have 
> identical names, only one of each survives the process.  Afterward, there is 
> no way to know whether they are intended for 32 or 64 bit environments.
> It seems to be done this way as a step toward building the rpm and deb 
> artifacts.  But the rpms and debs are self-identifying as to the platform 
> they were built for, and contain only one set of libs each, while the binary 
> tarball isn't.  The binary tarball should have the same platform-specific 
> subdirectories that the full tarball does; but this means that the rpm and 
> deb builds have to be more careful about include/exclude specs for what goes 
> into those artifacts.





[jira] [Updated] (HADOOP-8037) Binary tarball does not preserve platform info for native builds, and RPMs fail to provide needed symlinks for libhadoop.so

2012-02-12 Thread Matt Foley (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8037?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt Foley updated HADOOP-8037:
---

Release Note: This fix is marked "incompatible" only because it changes the 
bin-tarball directory structure to be consistent with the source tarball 
directory structure.  Everything else (in particular, the source tarball and 
rpm directory structures) is unchanged, except that the 64-bit rpms and debs 
use lib64 instead of lib for native builds.  (was: This fix is marked 
"incompatible" only because it changes the bin-tarball directory structure to 
be consistent with the source tarball directory structure.  Everything else (in 
particular, the source tarball and rpm directory structures) is unchanged.)

> Binary tarball does not preserve platform info for native builds, and RPMs 
> fail to provide needed symlinks for libhadoop.so
> ---
>
> Key: HADOOP-8037
> URL: https://issues.apache.org/jira/browse/HADOOP-8037
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 1.0.1
>Reporter: Matt Foley
>Assignee: Giridharan Kesavan
>Priority: Blocker
> Attachments: hadoop-8027-1.patch, hadoop-8037-1.patch, 
> hadoop-8037-2.patch, hadoop-8037.patch
>
>
> The source tarball uses the "package" ant target, which includes both sets of 
> native builds (32 and 64 bit libraries), under subdirectories that are named 
> for the supported platform, so you can tell what they are.
> The binary tarball uses the "bin-package" ant target, which projects both 
> sets of native builds into a single directory, stripping out the platform 
> names from the directory paths.  Since the natively built libraries have 
> identical names, only one of each survives the process.  Afterward, there is 
> no way to know whether they are intended for 32 or 64 bit environments.
> It seems to be done this way as a step toward building the rpm and deb 
> artifacts.  But the rpms and debs are self-identifying as to the platform 
> they were built for, and contain only one set of libs each, while the binary 
> tarball isn't.  The binary tarball should have the same platform-specific 
> subdirectories that the full tarball does; but this means that the rpm and 
> deb builds have to be more careful about include/exclude specs for what goes 
> into those artifacts.





[jira] [Updated] (HADOOP-8037) Binary tarball does not preserve platform info for native builds, and RPMs fail to provide needed symlinks for libhadoop.so

2012-02-12 Thread Matt Foley (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8037?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt Foley updated HADOOP-8037:
---

Release Note: This fix is marked "incompatible" only because it changes the 
bin-tarball directory structure to be consistent with the source tarball 
directory structure.  Everything else (in particular, the source tarball and 
rpm directory structures) is unchanged, except that the 64-bit rpms and debs 
now use lib64 instead of lib for native builds.  (was: This fix is marked 
"incompatible" only because it changes the bin-tarball directory structure to 
be consistent with the source tarball directory structure.  Everything else (in 
particular, the source tarball and rpm directory structures) is unchanged, 
except that the 64-bit rpms and debs use lib64 instead of lib for native 
builds.)

> Binary tarball does not preserve platform info for native builds, and RPMs 
> fail to provide needed symlinks for libhadoop.so
> ---
>
> Key: HADOOP-8037
> URL: https://issues.apache.org/jira/browse/HADOOP-8037
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 1.0.1
>Reporter: Matt Foley
>Assignee: Giridharan Kesavan
>Priority: Blocker
> Attachments: hadoop-8027-1.patch, hadoop-8037-1.patch, 
> hadoop-8037-2.patch, hadoop-8037.patch
>
>
> The source tarball uses the "package" ant target, which includes both sets of 
> native builds (32 and 64 bit libraries), under subdirectories that are named 
> for the supported platform, so you can tell what they are.
> The binary tarball uses the "bin-package" ant target, which projects both 
> sets of native builds into a single directory, stripping out the platform 
> names from the directory paths.  Since the native built libraries have 
> identical names, only one of each survives the process.  Afterward, there is 
> no way to know whether they are intended for 32 or 64 bit environments.
> It seems to be done this way as a step toward building the rpm and deb 
> artifacts.  But the rpms and debs are self-identifying as to the platform 
> they were built for, and contain only one set of libs each, while the binary 
> tarball isn't.  The binary tarball should have the same platform-specific 
> subdirectories that the full tarball does; but this means that the rpm and 
> deb builds have to be more careful about include/exclude specs for what goes 
> into those artifacts.





[jira] [Updated] (HADOOP-8037) Binary tarball does not preserve platform info for native builds, and RPMs fail to provide needed symlinks for libhadoop.so

2012-02-12 Thread Matt Foley (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8037?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt Foley updated HADOOP-8037:
---

Attachment: hadoop-8037-2.patch

> Binary tarball does not preserve platform info for native builds, and RPMs 
> fail to provide needed symlinks for libhadoop.so
> ---
>
> Key: HADOOP-8037
> URL: https://issues.apache.org/jira/browse/HADOOP-8037
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 1.0.1
>Reporter: Matt Foley
>Assignee: Giridharan Kesavan
>Priority: Blocker
> Attachments: hadoop-8027-1.patch, hadoop-8037-1.patch, 
> hadoop-8037-2.patch, hadoop-8037.patch
>
>
> The source tarball uses the "package" ant target, which includes both sets of 
> native builds (32 and 64 bit libraries), under subdirectories that are named 
> for the supported platform, so you can tell what they are.
> The binary tarball uses the "bin-package" ant target, which projects both 
> sets of native builds into a single directory, stripping out the platform 
> names from the directory paths.  Since the native built libraries have 
> identical names, only one of each survives the process.  Afterward, there is 
> no way to know whether they are intended for 32 or 64 bit environments.
> It seems to be done this way as a step toward building the rpm and deb 
> artifacts.  But the rpms and debs are self-identifying as to the platform 
> they were built for, and contain only one set of libs each, while the binary 
> tarball isn't.  The binary tarball should have the same platform-specific 
> subdirectories that the full tarball does; but this means that the rpm and 
> deb builds have to be more careful about include/exclude specs for what goes 
> into those artifacts.





[jira] [Updated] (HADOOP-8052) Hadoop Metrics2 should emit Float.MAX_VALUE (instead of Double.MAX_VALUE) to avoid making Ganglia's gmetad core

2012-02-11 Thread Matt Foley (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8052?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt Foley updated HADOOP-8052:
---

      Resolution: Fixed
   Fix Version/s: 0.23.2
                  1.0.1
Target Version/s: 1.0.1, 0.23.2  (was: 1.0.1, 0.23.1)
    Hadoop Flags: Reviewed
          Status: Resolved  (was: Patch Available)

> Hadoop Metrics2 should emit Float.MAX_VALUE (instead of Double.MAX_VALUE) to 
> avoid making Ganglia's gmetad core
> ---
>
> Key: HADOOP-8052
> URL: https://issues.apache.org/jira/browse/HADOOP-8052
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: metrics
>Affects Versions: 0.23.0, 1.0.0
>Reporter: Varun Kapoor
>Assignee: Varun Kapoor
>  Labels: patch
> Fix For: 1.0.1, 0.23.2
>
> Attachments: HADOOP-8052-branch-1.patch, HADOOP-8052-branch-1.patch, 
> HADOOP-8052.patch, HADOOP-8052.patch
>
>
> Ganglia's gmetad converts the doubles emitted by Hadoop's Metrics2 system to 
> strings, and the buffer it uses is 256 bytes wide.
> When the SampleStat.MinMax class (in org.apache.hadoop.metrics2.util) emits 
> its default min value (currently initialized to Double.MAX_VALUE), it ends up 
> causing a buffer overflow in gmetad, which causes it to core, effectively 
> rendering Ganglia useless (for some, the core is continuous; for others who 
> are more fortunate, it's only a one-time Hadoop-startup-time thing).
> The fix needed in Ganglia is simple - the buffer needs to be bumped up to be 
> 512 bytes wide, and all will be well - but instead of requiring a minimum 
> version of Ganglia to work with Hadoop's Metrics2 system, it might be more 
> prudent to just use Float.MAX_VALUE.
> An additional problem caused in librrd (which Ganglia uses 
> beneath-the-covers) by the use of Double.MIN_VALUE (which functions as the 
> default max value) is an underflow when librrd runs the received strings 
> through libc's strtod(), but the librrd code is good enough to check for 
> this, and only emits a warning - moving to Float.MIN_VALUE fixes that as well.
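The buffer arithmetic behind this bug is easy to check: rendering the IEEE-754 double maximum in fixed-point notation (as gmetad's %f-style formatting does) produces a string far wider than a 256-byte buffer, while the float maximum fits comfortably. A minimal sketch, using Python's "%f" as a stand-in for gmetad's C formatting (the constants are the Java Double.MAX_VALUE and Float.MAX_VALUE values):

```python
# Stand-in demonstration of the gmetad buffer overflow: fixed-point
# formatting of Double.MAX_VALUE needs >300 bytes, Float.MAX_VALUE does not.
DOUBLE_MAX = 1.7976931348623157e308  # Java Double.MAX_VALUE
FLOAT_MAX = 3.4028234663852886e38    # Java Float.MAX_VALUE

double_str = "%f" % DOUBLE_MAX  # 309 integer digits + "." + 6 decimals
float_str = "%f" % FLOAT_MAX    # 39 integer digits + "." + 6 decimals

print(len(double_str), len(float_str))  # 316 46
print(len(double_str) > 256)            # True: overflows a 256-byte buffer
print(len(float_str) > 256)             # False: fits
```

This is why emitting Float.MAX_VALUE as the sentinel sidesteps the problem without requiring a patched Ganglia.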





[jira] [Updated] (HADOOP-8009) Create hadoop-client and hadoop-minicluster artifacts for downstream projects

2012-02-11 Thread Matt Foley (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8009?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt Foley updated HADOOP-8009:
---

 Description: 
Using Hadoop from projects like Pig/Hive/Sqoop/Flume/Oozie or any in-house 
system that interacts with Hadoop is quite challenging for the following 
reasons:

* *Different versions of Hadoop produce different artifacts:* Before Hadoop 
0.23 there was a single artifact hadoop-core, starting with Hadoop 0.23 there 
are several (common, hdfs, mapred*, yarn*)

* *There are no 'client' artifacts:* Current artifacts include all JARs needed 
to run the services, thus bringing into clients several JARs that are not used 
for job submission/monitoring (servlet, jsp, tomcat, jersey, etc.)

* *Doing testing on the client side is also quite challenging as more artifacts 
have to be included than the dependencies define:* for example, the 
history-server artifact has to be explicitly included. If using Hadoop 1 
artifacts, jersey-server has to be explicitly included.

* *3rd party dependencies change in Hadoop from version to version:* This makes 
things complicated for projects that have to deal with multiple versions of 
Hadoop, as their exclusions lists become a huge mix & match of artifacts from 
different Hadoop versions, and it may break things when a particular version 
of Hadoop requires a dependency that another version of Hadoop does not require.

Because of this it would be quite convenient to have the following 'aggregator' 
artifacts:

* *org.apache.hadoop:hadoop-client* : it includes all required JARs to use 
Hadoop client APIs (excluding all JARs that are not needed for it)
* *org.apache.hadoop:hadoop-minicluster* : it includes all required JARs to run 
Hadoop Mini Clusters

These aggregator artifacts would be created for current branches under 
development (trunk, 0.22, 0.23, 1.0) and for released versions that are still 
in use.

For branches under development, these artifacts would be generated as part of 
the build.

For released versions we would have a special branch used only as a vehicle for 
publishing the corresponding 'aggregator' artifacts.


  was:
Using Hadoop from projects like Pig/Hive/Sqoop/Flume/Oozie or any in-house 
system that interacts with Hadoop is quite challenging for the following 
reasons:

* *Different versions of Hadoop produce different artifacts:* Before Hadoop 
0.23 there was a single artifact hadoop-core, starting with Hadoop 0.23 there 
are several (common, hdfs, mapred*, yarn*)

* *There are no 'client' artifacts:* Current artifacts include all JARs needed 
to run the services, thus bringing into clients several JARs that are not used 
for job submission/monitoring (servlet, jsp, tomcat, jersey, etc.)

* *Doing testing on the client side is also quite challenging as more artifacts 
have to be included than the dependencies define:* for example, the 
history-server artifact has to be explicitly included. If using Hadoop 1 
artifacts, jersey-server has to be explicitly included.

* *3rd party dependencies change in Hadoop from version to version:* This makes 
things complicated for projects that have to deal with multiple versions of 
Hadoop, as their exclusions lists become a huge mix & match of artifacts from 
different Hadoop versions, and it may break things when a particular version 
of Hadoop requires a dependency that another version of Hadoop does not require.

Because of this it would be quite convenient to have the following 'aggregator' 
artifacts:

* *org.apache.hadoop:hadoop-client* : it includes all required JARs to use 
Hadoop client APIs (excluding all JARs that are not needed for it)
* *org.apache.hadoop:hadoop-test* : it includes all required JARs to run Hadoop 
Mini Clusters

These aggregator artifacts would be created for current branches under 
development (trunk, 0.22, 0.23, 1.0) and for released versions that are still 
in use.

For branches under development, these artifacts would be generated as part of 
the build.

For released versions we would have a special branch used only as a vehicle for 
publishing the corresponding 'aggregator' artifacts.


Release Note: Generate integration artifacts 
"org.apache.hadoop:hadoop-client" and "org.apache.hadoop:hadoop-minicluster" 
containing all the jars needed to use Hadoop client APIs, and to run Hadoop 
MiniClusters, respectively.  Push these artifacts to the maven repository when 
mvn-deploy, along with existing artifacts.   (was: Generate integration 
artifacts "org.apache.hadoop:hadoop-client" and "org.apache.hadoop:hadoop-test" 
containing all the jars needed to use Hadoop client APIs, and to run Hadoop 
Mini Clusters, respectively.  Push these artifacts to the maven repository when 
mvn-deploy, along with existing artifacts. )

> Create hadoop-client and hadoop-minicluster artifacts for downstream projects 
> 

[jira] [Updated] (HADOOP-8037) Binary tarball does not preserve platform info for native builds, and RPMs fail to provide needed symlinks for libhadoop.so

2012-02-08 Thread Matt Foley (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8037?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt Foley updated HADOOP-8037:
---

Fix Version/s: (was: 1.0.1)

> Binary tarball does not preserve platform info for native builds, and RPMs 
> fail to provide needed symlinks for libhadoop.so
> ---
>
> Key: HADOOP-8037
> URL: https://issues.apache.org/jira/browse/HADOOP-8037
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 1.0.1
>Reporter: Matt Foley
>Assignee: Giridharan Kesavan
>Priority: Blocker
> Attachments: hadoop-8027-1.patch, hadoop-8037-1.patch, 
> hadoop-8037.patch
>
>
> The source tarball uses the "package" ant target, which includes both sets of 
> native builds (32 and 64 bit libraries), under subdirectories that are named 
> for the supported platform, so you can tell what they are.
> The binary tarball uses the "bin-package" ant target, which projects both 
> sets of native builds into a single directory, stripping out the platform 
> names from the directory paths.  Since the native built libraries have 
> identical names, only one of each survives the process.  Afterward, there is 
> no way to know whether they are intended for 32 or 64 bit environments.
> It seems to be done this way as a step toward building the rpm and deb 
> artifacts.  But the rpms and debs are self-identifying as to the platform 
> they were built for, and contain only one set of libs each, while the binary 
> tarball isn't.  The binary tarball should have the same platform-specific 
> subdirectories that the full tarball does; but this means that the rpm and 
> deb builds have to be more careful about include/exclude specs for what goes 
> into those artifacts.





[jira] [Updated] (HADOOP-8037) Binary tarball does not preserve platform info for native builds, and RPMs fail to provide needed symlinks for libhadoop.so

2012-02-08 Thread Matt Foley (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8037?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt Foley updated HADOOP-8037:
---

Priority: Blocker  (was: Major)

Changing this to blocker, because it did in fact block the 1.0.1 RC for several 
days while we straightened it out.

> Binary tarball does not preserve platform info for native builds, and RPMs 
> fail to provide needed symlinks for libhadoop.so
> ---
>
> Key: HADOOP-8037
> URL: https://issues.apache.org/jira/browse/HADOOP-8037
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 1.0.1
>Reporter: Matt Foley
>Assignee: Giridharan Kesavan
>Priority: Blocker
> Attachments: hadoop-8027-1.patch, hadoop-8037-1.patch, 
> hadoop-8037.patch
>
>
> The source tarball uses the "package" ant target, which includes both sets of 
> native builds (32 and 64 bit libraries), under subdirectories that are named 
> for the supported platform, so you can tell what they are.
> The binary tarball uses the "bin-package" ant target, which projects both 
> sets of native builds into a single directory, stripping out the platform 
> names from the directory paths.  Since the native built libraries have 
> identical names, only one of each survives the process.  Afterward, there is 
> no way to know whether they are intended for 32 or 64 bit environments.
> It seems to be done this way as a step toward building the rpm and deb 
> artifacts.  But the rpms and debs are self-identifying as to the platform 
> they were built for, and contain only one set of libs each, while the binary 
> tarball isn't.  The binary tarball should have the same platform-specific 
> subdirectories that the full tarball does; but this means that the rpm and 
> deb builds have to be more careful about include/exclude specs for what goes 
> into those artifacts.





[jira] [Updated] (HADOOP-8009) Create hadoop-client and hadoop-minicluster artifacts for downstream projects

2012-02-08 Thread Matt Foley (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8009?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt Foley updated HADOOP-8009:
---

Release Note: Generate integration artifacts 
"org.apache.hadoop:hadoop-client" and "org.apache.hadoop:hadoop-test" 
containing all the jars needed to use Hadoop client APIs, and to run Hadoop 
Mini Clusters, respectively.  Push these artifacts to the maven repository when 
mvn-deploy, along with existing artifacts.   (was: Generate integration 
artifacts *org.apache.hadoop:hadoop-client* and *org.apache.hadoop:hadoop-test* 
containing all the jars needed to use Hadoop client APIs, and to run Hadoop 
Mini Clusters, respectively.  Push these artifacts to the maven repository when 
mvn-deploy, along with existing artifacts. )

> Create hadoop-client and hadoop-minicluster artifacts for downstream projects 
> --
>
> Key: HADOOP-8009
> URL: https://issues.apache.org/jira/browse/HADOOP-8009
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 0.22.0, 0.23.0, 0.24.0, 0.23.1, 1.0.0
>Reporter: Alejandro Abdelnur
>Assignee: Alejandro Abdelnur
>Priority: Critical
> Fix For: 0.23.1, 1.0.1
>
> Attachments: HADOOP-8009-branch-1.patch, 
> HADOOP-8009-existing-releases.patch, HADOOP-8009.patch
>
>
> Using Hadoop from projects like Pig/Hive/Sqoop/Flume/Oozie or any in-house 
> system that interacts with Hadoop is quite challenging for the following 
> reasons:
> * *Different versions of Hadoop produce different artifacts:* Before Hadoop 
> 0.23 there was a single artifact hadoop-core, starting with Hadoop 0.23 there 
> are several (common, hdfs, mapred*, yarn*)
> * *There are no 'client' artifacts:* Current artifacts include all JARs 
> needed to run the services, thus bringing into clients several JARs that are 
> not used for job submission/monitoring (servlet, jsp, tomcat, jersey, etc.)
> * *Doing testing on the client side is also quite challenging as more 
> artifacts have to be included than the dependencies define:* for example, the 
> history-server artifact has to be explicitly included. If using Hadoop 1 
> artifacts, jersey-server has to be explicitly included.
> * *3rd party dependencies change in Hadoop from version to version:* This 
> makes things complicated for projects that have to deal with multiple 
> versions of Hadoop, as their exclusions lists become a huge mix & match of 
> artifacts from different Hadoop versions, and it may break things when a 
> particular version of Hadoop requires a dependency that another version of 
> Hadoop does not require.
> Because of this it would be quite convenient to have the following 
> 'aggregator' artifacts:
> * *org.apache.hadoop:hadoop-client* : it includes all required JARs to use 
> Hadoop client APIs (excluding all JARs that are not needed for it)
> * *org.apache.hadoop:hadoop-test* : it includes all required JARs to run 
> Hadoop Mini Clusters
> These aggregator artifacts would be created for current branches under 
> development (trunk, 0.22, 0.23, 1.0) and for released versions that are still 
> in use.
> For branches under development, these artifacts would be generated as part of 
> the build.
> For released versions we would have a special branch used only as a vehicle 
> for publishing the corresponding 'aggregator' artifacts.
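For downstream builds, the point of the aggregator is that one coordinate replaces a hand-maintained dependency list; a hypothetical pom fragment (the version shown is illustrative, not a recommendation):

```xml
<!-- Hypothetical downstream usage of the aggregator artifact -->
<dependency>
  <groupId>org.apache.hadoop</groupId>
  <artifactId>hadoop-client</artifactId>
  <version>1.0.1</version>
</dependency>
```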





[jira] [Updated] (HADOOP-8009) Create hadoop-client and hadoop-minicluster artifacts for downstream projects

2012-02-08 Thread Matt Foley (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8009?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt Foley updated HADOOP-8009:
---

   Resolution: Fixed
Fix Version/s: 1.0.1
 Release Note: Generate integration artifacts 
*org.apache.hadoop:hadoop-client* and *org.apache.hadoop:hadoop-test* 
containing all the jars needed to use Hadoop client APIs, and to run Hadoop 
Mini Clusters, respectively.  Push these artifacts to the maven repository when 
mvn-deploy, along with existing artifacts. 
       Status: Resolved  (was: Patch Available)

+1, lgtm.  I tested this in the 1.0.1 build, and the results looked correct.  
Committing to branch-1 and branch-1.0.  Thanks Alejandro!

> Create hadoop-client and hadoop-minicluster artifacts for downstream projects 
> --
>
> Key: HADOOP-8009
> URL: https://issues.apache.org/jira/browse/HADOOP-8009
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 0.22.0, 0.23.0, 0.24.0, 0.23.1, 1.0.0
>Reporter: Alejandro Abdelnur
>Assignee: Alejandro Abdelnur
>Priority: Critical
> Fix For: 0.23.1, 1.0.1
>
> Attachments: HADOOP-8009-branch-1.patch, 
> HADOOP-8009-existing-releases.patch, HADOOP-8009.patch
>
>
> Using Hadoop from projects like Pig/Hive/Sqoop/Flume/Oozie or any in-house 
> system that interacts with Hadoop is quite challenging for the following 
> reasons:
> * *Different versions of Hadoop produce different artifacts:* Before Hadoop 
> 0.23 there was a single artifact hadoop-core, starting with Hadoop 0.23 there 
> are several (common, hdfs, mapred*, yarn*)
> * *There are no 'client' artifacts:* Current artifacts include all JARs 
> needed to run the services, thus bringing into clients several JARs that are 
> not used for job submission/monitoring (servlet, jsp, tomcat, jersey, etc.)
> * *Doing testing on the client side is also quite challenging as more 
> artifacts have to be included than the dependencies define:* for example, the 
> history-server artifact has to be explicitly included. If using Hadoop 1 
> artifacts, jersey-server has to be explicitly included.
> * *3rd party dependencies change in Hadoop from version to version:* This 
> makes things complicated for projects that have to deal with multiple 
> versions of Hadoop, as their exclusions lists become a huge mix & match of 
> artifacts from different Hadoop versions, and it may break things when a 
> particular version of Hadoop requires a dependency that another version of 
> Hadoop does not require.
> Because of this it would be quite convenient to have the following 
> 'aggregator' artifacts:
> * *org.apache.hadoop:hadoop-client* : it includes all required JARs to use 
> Hadoop client APIs (excluding all JARs that are not needed for it)
> * *org.apache.hadoop:hadoop-test* : it includes all required JARs to run 
> Hadoop Mini Clusters
> These aggregator artifacts would be created for current branches under 
> development (trunk, 0.22, 0.23, 1.0) and for released versions that are still 
> in use.
> For branches under development, these artifacts would be generated as part of 
> the build.
> For released versions we would have a special branch used only as a vehicle 
> for publishing the corresponding 'aggregator' artifacts.





[jira] [Updated] (HADOOP-8037) Binary tarball does not preserve platform info for native builds, and RPMs fail to provide needed symlinks for libhadoop.so

2012-02-08 Thread Matt Foley (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8037?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt Foley updated HADOOP-8037:
---

Summary: Binary tarball does not preserve platform info for native builds, 
and RPMs fail to provide needed symlinks for libhadoop.so  (was: Binary tarball 
does not preserve platform info for native builds, and fails to provide needed 
symlinks for libhadoop.so)

> Binary tarball does not preserve platform info for native builds, and RPMs 
> fail to provide needed symlinks for libhadoop.so
> ---
>
> Key: HADOOP-8037
> URL: https://issues.apache.org/jira/browse/HADOOP-8037
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 1.0.1
>Reporter: Matt Foley
>Assignee: Giridharan Kesavan
> Attachments: hadoop-8027-1.patch, hadoop-8037-1.patch, 
> hadoop-8037.patch
>
>
> The source tarball uses the "package" ant target, which includes both sets of 
> native builds (32 and 64 bit libraries), under subdirectories that are named 
> for the supported platform, so you can tell what they are.
> The binary tarball uses the "bin-package" ant target, which projects both 
> sets of native builds into a single directory, stripping out the platform 
> names from the directory paths.  Since the native built libraries have 
> identical names, only one of each survives the process.  Afterward, there is 
> no way to know whether they are intended for 32 or 64 bit environments.
> It seems to be done this way as a step toward building the rpm and deb 
> artifacts.  But the rpms and debs are self-identifying as to the platform 
> they were built for, and contain only one set of libs each, while the binary 
> tarball isn't.  The binary tarball should have the same platform-specific 
> subdirectories that the full tarball does; but this means that the rpm and 
> deb builds have to be more careful about include/exclude specs for what goes 
> into those artifacts.





[jira] [Updated] (HADOOP-8037) Binary tarball does not preserve platform info for native builds, and fails to provide needed symlinks for libhadoop.so

2012-02-08 Thread Matt Foley (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8037?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt Foley updated HADOOP-8037:
---

Release Note: This fix is marked "incompatible" only because it changes the 
bin-tarball directory structure to be consistent with the source tarball 
directory structure.  Everything else (in particular, the source tarball and 
rpm directory structures) are unchanged.

> Binary tarball does not preserve platform info for native builds, and fails 
> to provide needed symlinks for libhadoop.so
> ---
>
> Key: HADOOP-8037
> URL: https://issues.apache.org/jira/browse/HADOOP-8037
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 1.0.1
>Reporter: Matt Foley
>Assignee: Giridharan Kesavan
>
> The source tarball uses the "package" ant target, which includes both sets of 
> native builds (32 and 64 bit libraries), under subdirectories that are named 
> for the supported platform, so you can tell what they are.
> The binary tarball uses the "bin-package" ant target, which projects both 
> sets of native builds into a single directory, stripping out the platform 
> names from the directory paths.  Since the native built libraries have 
> identical names, only one of each survives the process.  Afterward, there is 
> no way to know whether they are intended for 32 or 64 bit environments.
> It seems to be done this way as a step toward building the rpm and deb 
> artifacts.  But the rpms and debs are self-identifying as to the platform 
> they were built for, and contain only one set of libs each, while the binary 
> tarball isn't.  The binary tarball should have the same platform-specific 
> subdirectories that the full tarball does; but this means that the rpm and 
> deb builds have to be more careful about include/exclude specs for what goes 
> into those artifacts.





[jira] [Updated] (HADOOP-8037) Binary tarball does not preserve platform info for native builds, and fails to provide needed symlinks for libhadoop.so

2012-02-08 Thread Matt Foley (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8037?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt Foley updated HADOOP-8037:
---

Summary: Binary tarball does not preserve platform info for native builds, 
and fails to provide needed symlinks for libhadoop.so  (was: Binary tarball 
does not preserve platform info for native builds)

There's a secondary issue.  The ant task used in bin-package to strip the 
platform names from the directory paths fails to grab the symlinks that are important 
for libhadoop.  Prior to this, the platform-specific subdirectories contain, 
e.g., {code}

./hadoop-1.0.1/native/Linux-i386-32/libhadoop.so
./hadoop-1.0.1/native/Linux-i386-32/libhadoop.so.1
./hadoop-1.0.1/native/Linux-i386-32/libhadoop.so.1.0.0
{code}
However, after the projection, we are left with only 
{code}
hadoop-1.0.1/lib/libhadoop.so.1.0.0
{code}
The other two files were symlinks to libhadoop.so.1.0.0, and did not get moved.

As a result, the rpm with native build fails to provide the needed libhadoop.so.

This could be opened as a separate Jira, but both need to be solved in the same 
area of code, so I'm leaving them together.  I'll add to the bug title.
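The symlink loss can be reproduced outside the build. A minimal Python sketch (the paths mirror the listing above; the skip-symlinks copy loop is a stand-in for the described Ant behavior, not the actual build code):

```python
# Reproduce the flattening step losing the libhadoop.so / libhadoop.so.1
# symlinks: only the real shared object survives the copy.
import os
import shutil
import tempfile

root = tempfile.mkdtemp()
src = os.path.join(root, "native", "Linux-i386-32")
os.makedirs(src)

# One real file plus two symlinks, as in the platform-specific directory.
real = os.path.join(src, "libhadoop.so.1.0.0")
with open(real, "w") as f:
    f.write("ELF...")  # stand-in for the real shared object
os.symlink("libhadoop.so.1.0.0", os.path.join(src, "libhadoop.so.1"))
os.symlink("libhadoop.so.1.0.0", os.path.join(src, "libhadoop.so"))

dst = os.path.join(root, "lib")
os.makedirs(dst)
# A copy that skips symlinks (standing in for the Ant task's behavior)
# picks up just the one regular file:
for name in os.listdir(src):
    path = os.path.join(src, name)
    if not os.path.islink(path):
        shutil.copy(path, dst)

print(sorted(os.listdir(dst)))  # ['libhadoop.so.1.0.0']
```

Any consumer that dlopen()s libhadoop.so against the flattened directory then fails, which is exactly the rpm symptom described above.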

> Binary tarball does not preserve platform info for native builds, and fails 
> to provide needed symlinks for libhadoop.so
> ---
>
> Key: HADOOP-8037
> URL: https://issues.apache.org/jira/browse/HADOOP-8037
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 1.0.1
>Reporter: Matt Foley
>Assignee: Giridharan Kesavan
>
> The source tarball uses "package" ant target, which includes both sets of 
> native builds (32 and 64 bit libraries), under subdirectories that are named 
> for the supported platform, so you can tell what they are.
> The binary tarball uses the "bin-package" ant target, which projects both 
> sets of native builds into a single directory, stripping out the platform 
> names from the directory paths.  Since the native built libraries have 
> identical names, only one of each survives the process.  Afterward, there is 
> no way to know whether they are intended for 32 or 64 bit environments.
> It seems to be done this way as a step toward building the rpm and deb 
> artifacts.  But the rpms and debs are self-identifying as to the platform 
> they were built for, and contain only one set of libs each, while the binary 
> tarball isn't.  The binary tarball should have the same platform-specific 
> subdirectories that the full tarball does; but this means that the rpm and 
> deb builds have to be more careful about include/exclude specs for what goes 
> into those artifacts.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HADOOP-8010) hadoop-config.sh spews error message when HADOOP_HOME_WARN_SUPPRESS is set to true and HADOOP_HOME is present

2012-02-01 Thread Matt Foley (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8010?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt Foley updated HADOOP-8010:
---

Resolution: Fixed
Status: Resolved  (was: Patch Available)

Committed to branch-1 and branch-1.0.

> hadoop-config.sh spews error message when HADOOP_HOME_WARN_SUPPRESS is set to 
> true and HADOOP_HOME is present
> -
>
> Key: HADOOP-8010
> URL: https://issues.apache.org/jira/browse/HADOOP-8010
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: scripts
>Affects Versions: 1.0.0
>Reporter: Roman Shaposhnik
>Assignee: Roman Shaposhnik
>Priority: Minor
> Fix For: 1.0.1
>
> Attachments: HADOOP-8010.patch.txt
>
>
> Running hadoop daemon commands when HADOOP_HOME_WARN_SUPPRESS is set to true 
> and HADOOP_HOME is present produces:
> {noformat}
>   [: 76: true: unexpected operator
> {noformat}
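The quoted error is dash (the default /bin/sh on Debian/Ubuntu) rejecting a non-POSIX test operator. A hedged sketch of the portable form, not the literal hadoop-config.sh line:

```shell
# dash implements only the POSIX test(1) operators, so a bashism such as
# `==` inside `[ ... ]` aborts with an "unexpected operator" error like
# the one quoted above.
HADOOP_HOME_WARN_SUPPRESS=true

# Non-portable (breaks under dash):
#   if [ "$HADOOP_HOME_WARN_SUPPRESS" == "true" ]; then ...

# Portable: single `=`, variable quoted and defaulted.
if [ "${HADOOP_HOME_WARN_SUPPRESS:-}" = "true" ]; then
  warn_suppressed=yes
else
  warn_suppressed=no
fi
echo "$warn_suppressed"
```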

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HADOOP-7960) Port HADOOP-5203 to branch-1, build version comparison is too restrictive

2012-01-30 Thread Matt Foley (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-7960?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt Foley updated HADOOP-7960:
---

Target Version/s: 1.0.1  (was: 1.1.0)
   Fix Version/s: (was: 1.1.0)
  1.0.1

Needed in 1.0.1.  Merged to 1.0.1 and changed 1.1.0 Fixed Version and Target 
Version.

> Port HADOOP-5203 to branch-1, build version comparison is too restrictive
> -
>
> Key: HADOOP-7960
> URL: https://issues.apache.org/jira/browse/HADOOP-7960
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Giridharan Kesavan
>Assignee: Matt Foley
> Fix For: 1.0.1
>
> Attachments: HADOOP-5203-md5-1.1.patch
>
>
> hadoop services should not be using the build timestamp to verify version 
> difference in the cluster installation. Instead it should use the source 
> checksum as in HADOOP-5203.
>   

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HADOOP-7988) Upper case in hostname part of the principals doesn't work with kerberos.

2012-01-21 Thread Matt Foley (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-7988?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt Foley updated HADOOP-7988:
---

Target Version/s: 1.0.1

> Upper case in hostname part of the principals doesn't work with kerberos.
> -
>
> Key: HADOOP-7988
> URL: https://issues.apache.org/jira/browse/HADOOP-7988
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 0.24.0, 0.23.1, 1.0.0
>Reporter: Jitendra Nath Pandey
>Assignee: Jitendra Nath Pandey
> Attachments: HADOOP-7988.branch-1.patch
>
>
> Kerberos doesn't like upper case in the hostname part of the principals.
> This issue has been seen in 23 as well as 1.0.
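Since KDCs conventionally store the host component of a principal in lower case, the _HOST substitution should fold the local FQDN to lower case before building the principal. A hypothetical helper sketching that step (the service name and realm are illustrative, not Hadoop's actual code):

```shell
# Fold the hostname to lower case before substituting it into the
# service principal; Kerberos principal names are case-sensitive.
principal_for_host() {
  fqdn=$(printf '%s' "$1" | tr '[:upper:]' '[:lower:]')
  printf 'nn/%s@EXAMPLE.COM\n' "$fqdn"
}
principal_for_host MyHost.Example.COM   # nn/myhost.example.com@EXAMPLE.COM
```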

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HADOOP-7988) Upper case in hostname part of the principals doesn't work with kerberos.

2012-01-21 Thread Matt Foley (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-7988?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt Foley updated HADOOP-7988:
---

Target Version/s: 0.23.1, 1.0.1  (was: 1.0.1)

> Upper case in hostname part of the principals doesn't work with kerberos.
> -
>
> Key: HADOOP-7988
> URL: https://issues.apache.org/jira/browse/HADOOP-7988
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 0.24.0, 0.23.1, 1.0.0
>Reporter: Jitendra Nath Pandey
>Assignee: Jitendra Nath Pandey
> Attachments: HADOOP-7988.branch-1.patch
>
>
> Kerberos doesn't like upper case in the hostname part of the principals.
> This issue has been seen in 23 as well as 1.0.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HADOOP-7960) Port HADOOP-5203 to branch-1, build version comparison is too restrictive

2012-01-06 Thread Matt Foley (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-7960?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt Foley updated HADOOP-7960:
---

Attachment: HADOOP-5203-md5-1.1.patch

This is almost identical to Bill Au's version for v0.20.1, in HADOOP-5203, so 
thanks Bill!

Patch is already in trunk, so no Submit Patch.
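The source-checksum idea can be sketched in shell: hash the sorted per-file digests, so two builds of identical sources agree no matter when or by whom they were compiled. This is a sketch of the approach behind HADOOP-5203, not the build's actual recipe:

```shell
# Derive a build-independent source checksum for a source tree: the
# value depends only on file contents (and their sorted order), never
# on build timestamps or the builder's user id.
src_checksum() {
  find "$1" -type f -name '*.java' | LC_ALL=C sort \
    | xargs -r md5sum | cut -d' ' -f1 | md5sum | cut -d' ' -f1
}
```

Comparing these values is what lets a TaskTracker accept a JobTracker built at a different time from the same sources.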

> Port HADOOP-5203 to branch-1, build version comparison is too restrictive
> -
>
> Key: HADOOP-7960
> URL: https://issues.apache.org/jira/browse/HADOOP-7960
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Giridharan Kesavan
>Assignee: Matt Foley
> Attachments: HADOOP-5203-md5-1.1.patch
>
>
> hadoop services should not be using the build timestamp to verify version 
> difference in the cluster installation. Instead it should use the source 
> checksum as in HADOOP-5203.
>   

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HADOOP-5203) TT's version build is too restrictive

2012-01-06 Thread Matt Foley (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-5203?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt Foley updated HADOOP-5203:
---

Attachment: HADOOP-5203-md5-1.1.patch

This is almost identical to Bill Au's version for v0.20.1, so thanks Bill!

> TT's version build is too restrictive
> -
>
> Key: HADOOP-5203
> URL: https://issues.apache.org/jira/browse/HADOOP-5203
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 0.19.0
>Reporter: Runping Qi
>Assignee: Rick Cox
> Fix For: 0.21.0
>
> Attachments: HADOOP-5203-md5-0.18.3.patch, 
> HADOOP-5203-md5-0.19.2.patch, HADOOP-5203-md5-0.20.1.patch, 
> HADOOP-5203-md5-1.1.patch, HADOOP-5203-md5.patch, HADOOP-5203-md5.patch, 
> HADOOP-5203.patch
>
>
> At start time, TT checks whether its version is compatible with JT.
> The condition is too restrictive. 
> It will shut down itself if one of the following conditions fail:
> * the version numbers must match
> * the revision numbers must match
> * the user ids who build the jar must match
> * the build times must match
> I think it should check the major part of the version numbers only (thus any 
> version like 0.19.x should be compatible).

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HADOOP-7960) Port HADOOP-5203 to branch-1, build version comparison is too restrictive

2012-01-05 Thread Matt Foley (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-7960?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt Foley updated HADOOP-7960:
---

Description: 
hadoop services should not be using the build timestamp to verify version 
difference in the cluster installation. Instead it should use the source 
checksum as in HADOOP-5203.
  

  was:
hadoop services should not be using the build timestamp to verify version 
difference in the cluster installation. Instead it should use the svn revision 
or the git hash.
  

Summary: Port HADOOP-5203 to branch-1, build version comparison is too 
restrictive  (was: svn revision should be used to verify the version difference 
between hadoop services)

> Port HADOOP-5203 to branch-1, build version comparison is too restrictive
> -
>
> Key: HADOOP-7960
> URL: https://issues.apache.org/jira/browse/HADOOP-7960
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Giridharan Kesavan
>Assignee: Matt Foley
>
> hadoop services should not be using the build timestamp to verify version 
> difference in the cluster installation. Instead it should use the source 
> checksum as in HADOOP-5203.
>   

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HADOOP-7960) Port HADOOP-5203 to branch-1, build version comparison is too restrictive

2012-01-05 Thread Matt Foley (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-7960?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt Foley updated HADOOP-7960:
---

Target Version/s: 1.1.0

> Port HADOOP-5203 to branch-1, build version comparison is too restrictive
> -
>
> Key: HADOOP-7960
> URL: https://issues.apache.org/jira/browse/HADOOP-7960
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Giridharan Kesavan
>Assignee: Matt Foley
>
> hadoop services should not be using the build timestamp to verify version 
> difference in the cluster installation. Instead it should use the source 
> checksum as in HADOOP-5203.
>   

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HADOOP-6840) Support non-recursive create() in FileSystem & SequenceFile.Writer

2011-12-28 Thread Matt Foley (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-6840?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt Foley updated HADOOP-6840:
---

Target Version/s: 1.0.0, 0.23.1  (was: 1.0.0, 0.23.1, 0.21.1, 0.20-append)

Marking fixed/closed per Dhruba's suggestion.  If anyone wants to work on this 
for other branches, please open a new Jira and link it to this one.

> Support non-recursive create() in FileSystem & SequenceFile.Writer
> --
>
> Key: HADOOP-6840
> URL: https://issues.apache.org/jira/browse/HADOOP-6840
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs, io
>Affects Versions: 0.20-append, 0.21.0
>Reporter: Nicolas Spiegelberg
>Assignee: Nicolas Spiegelberg
>Priority: Minor
> Fix For: 0.23.1, 1.0.0
>
> Attachments: HADOOP-6840-branch-0.20-security.patch, 
> HADOOP-6840_0.20-append.patch, HADOOP-6840_0.21-2.patch, 
> HADOOP-6840_0.21.patch, hadoop-6840-1.patch, hadoop-6840-2.patch, 
> hadoop-6840-3.patch
>
>
> The proposed solution for HBASE-2312 requires the sequence file to handle a 
> non-recursive create.  This is already supported by HDFS, but needs to have 
> an equivalent FileSystem & SequenceFile.Writer API.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HADOOP-6840) Support non-recursive create() in FileSystem & SequenceFile.Writer

2011-12-28 Thread Matt Foley (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-6840?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt Foley updated HADOOP-6840:
---

  Resolution: Fixed
Target Version/s: 1.0.0, 0.23.1  (was: 0.23.1, 1.0.0)
  Status: Resolved  (was: Patch Available)

> Support non-recursive create() in FileSystem & SequenceFile.Writer
> --
>
> Key: HADOOP-6840
> URL: https://issues.apache.org/jira/browse/HADOOP-6840
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs, io
>Affects Versions: 0.20-append, 0.21.0
>Reporter: Nicolas Spiegelberg
>Assignee: Nicolas Spiegelberg
>Priority: Minor
> Fix For: 0.23.1, 1.0.0
>
> Attachments: HADOOP-6840-branch-0.20-security.patch, 
> HADOOP-6840_0.20-append.patch, HADOOP-6840_0.21-2.patch, 
> HADOOP-6840_0.21.patch, hadoop-6840-1.patch, hadoop-6840-2.patch, 
> hadoop-6840-3.patch
>
>
> The proposed solution for HBASE-2312 requires the sequence file to handle a 
> non-recursive create.  This is already supported by HDFS, but needs to have 
> an equivalent FileSystem & SequenceFile.Writer API.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HADOOP-6840) Support non-recursive create() in FileSystem & SequenceFile.Writer

2011-12-28 Thread Matt Foley (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-6840?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt Foley updated HADOOP-6840:
---

Target Version/s: 0.20-append, 0.21.1, 0.23.1, 1.0.0  (was: 0.21.1, 
0.20-append)

This is fixed in 0.23.1 and 1.0.0, and all their successor branches.
However, it is held open for 0.20-append and 0.21.1.
To the best of my knowledge no one is working on those branches.
Unless someone intends to patch this bug in those branches, can we mark this 
bug fixed?  Thanks.

> Support non-recursive create() in FileSystem & SequenceFile.Writer
> --
>
> Key: HADOOP-6840
> URL: https://issues.apache.org/jira/browse/HADOOP-6840
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs, io
>Affects Versions: 0.20-append, 0.21.0
>Reporter: Nicolas Spiegelberg
>Assignee: Nicolas Spiegelberg
>Priority: Minor
> Fix For: 0.23.1, 1.0.0
>
> Attachments: HADOOP-6840-branch-0.20-security.patch, 
> HADOOP-6840_0.20-append.patch, HADOOP-6840_0.21-2.patch, 
> HADOOP-6840_0.21.patch, hadoop-6840-1.patch, hadoop-6840-2.patch, 
> hadoop-6840-3.patch
>
>
> The proposed solution for HBASE-2312 requires the sequence file to handle a 
> non-recursive create.  This is already supported by HDFS, but needs to have 
> an equivalent FileSystem & SequenceFile.Writer API.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HADOOP-7865) Test Failures in 1.0.0 hdfs/common

2011-12-28 Thread Matt Foley (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-7865?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt Foley updated HADOOP-7865:
---

Fix Version/s: 1.0.0

> Test Failures in 1.0.0 hdfs/common
> --
>
> Key: HADOOP-7865
> URL: https://issues.apache.org/jira/browse/HADOOP-7865
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Jitendra Nath Pandey
>Assignee: Jitendra Nath Pandey
> Fix For: 1.0.0
>
> Attachments: HADOOP-7865-branch-1.patch
>
>
> Following tests in hdfs and common are failing
> 1. TestFileAppend2
> 2. TestFileConcurrentReader
> 3. TestDoAsEffectiveUser 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HADOOP-7810) move hadoop archive to core from tools

2011-12-28 Thread Matt Foley (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-7810?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt Foley updated HADOOP-7810:
---

Fix Version/s: (was: 0.24.0)

Since it is fixed in 23.1, there is no need to state 24.0 in the "Fixed 
Version" field, and it confuses the release process.  Removing 24.0 from the 
"Fixed Version" field.

> move hadoop archive to core from tools
> --
>
> Key: HADOOP-7810
> URL: https://issues.apache.org/jira/browse/HADOOP-7810
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 0.20.205.0, 0.24.0, 0.23.1, 1.0.0
>Reporter: John George
>Assignee: John George
>Priority: Blocker
> Fix For: 0.23.1
>
> Attachments: HADOOP-7810v1-trunk.patch, HADOOP-7810v1-trunk.sh, 
> hadoop-7810.branch-0.20-security.patch, 
> hadoop-7810.branch-0.20-security.patch, hadoop-7810.branch-0.20-security.patch
>
>
> "The HadoopArchives classes are included in the 
> $HADOOP_HOME/hadoop_tools.jar, but this file is not found in `hadoop 
> classpath`.
> A Pig script using HCatalog's dynamic partitioning with HAR enabled will 
> therefore fail if a jar with HAR is not included in the pig call's '-cp' and 
> '-Dpig.additional.jars' arguments."
> I am not aware of any reason to not include hadoop-tools.jar in 'hadoop 
> classpath'. Will attach a patch soon.
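Until the jar ships on the default classpath, one workaround is to append it explicitly. A hedged sketch of an idempotent append (the jar path is an assumption for illustration):

```shell
# Append a jar to CLASSPATH only if it is not already present.
add_jar() {
  case ":$CLASSPATH:" in
    *":$1:"*) ;;                                # already there
    *) CLASSPATH="${CLASSPATH:+$CLASSPATH:}$1" ;;
  esac
}
CLASSPATH=""
add_jar /usr/share/hadoop/hadoop-tools.jar      # hypothetical path
add_jar /usr/share/hadoop/hadoop-tools.jar      # no duplicate added
echo "$CLASSPATH"
```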

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HADOOP-7810) move hadoop archive to core from tools

2011-12-28 Thread Matt Foley (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-7810?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt Foley updated HADOOP-7810:
---

Target Version/s: 0.23.1  (was: 1.0.0, 0.23.1)

It appears this is no longer being considered for inclusion in branch-1, so 
removing 1.x.x from the "Target Versions" field.

> move hadoop archive to core from tools
> --
>
> Key: HADOOP-7810
> URL: https://issues.apache.org/jira/browse/HADOOP-7810
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 0.20.205.0, 0.24.0, 0.23.1, 1.0.0
>Reporter: John George
>Assignee: John George
>Priority: Blocker
> Fix For: 0.23.1
>
> Attachments: HADOOP-7810v1-trunk.patch, HADOOP-7810v1-trunk.sh, 
> hadoop-7810.branch-0.20-security.patch, 
> hadoop-7810.branch-0.20-security.patch, hadoop-7810.branch-0.20-security.patch
>
>
> "The HadoopArchives classes are included in the 
> $HADOOP_HOME/hadoop_tools.jar, but this file is not found in `hadoop 
> classpath`.
> A Pig script using HCatalog's dynamic partitioning with HAR enabled will 
> therefore fail if a jar with HAR is not included in the pig call's '-cp' and 
> '-Dpig.additional.jars' arguments."
> I am not aware of any reason to not include hadoop-tools.jar in 'hadoop 
> classpath'. Will attach a patch soon.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HADOOP-7732) hadoop java docs bad pointer to hdfs package

2011-12-27 Thread Matt Foley (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-7732?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt Foley updated HADOOP-7732:
---

Priority: Minor  (was: Major)
 Summary: hadoop java docs bad pointer to hdfs package  (was: hadoop java 
docs missing hdfs package)

Note the link to the MapRed package is also bad, and fixed in this patch.

> hadoop java docs bad pointer to hdfs package
> 
>
> Key: HADOOP-7732
> URL: https://issues.apache.org/jira/browse/HADOOP-7732
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 0.20.204.0, 0.20.205.0
>Reporter: Arpit Gupta
>Assignee: Matt Foley
>Priority: Minor
> Attachments: HADOOP-7732.patch
>
>
> the following link 
> http://hadoop.apache.org/common/docs/current/api/org/apache/hadoop/hdfs/package-summary.html
> leads to a 404

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HADOOP-7732) hadoop java docs missing hdfs package

2011-12-27 Thread Matt Foley (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-7732?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt Foley updated HADOOP-7732:
---

Attachment: HADOOP-7732.patch

Please review proposed patch.

> hadoop java docs missing hdfs package
> -
>
> Key: HADOOP-7732
> URL: https://issues.apache.org/jira/browse/HADOOP-7732
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 0.20.204.0, 0.20.205.0
>Reporter: Arpit Gupta
>Assignee: Matt Foley
> Attachments: HADOOP-7732.patch
>
>
> the following link 
> http://hadoop.apache.org/common/docs/current/api/org/apache/hadoop/hdfs/package-summary.html
> leads to a 404

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HADOOP-7923) Update doc versions from 0.20 to 1.0

2011-12-15 Thread Matt Foley (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-7923?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt Foley updated HADOOP-7923:
---

Target Version/s: 0.23.1, 1.0.0  (was: 1.0.0, 0.23.1)
Release Note: Docs version number is now automatically updated by 
reference to the build number.

> Update doc versions from 0.20 to 1.0
> 
>
> Key: HADOOP-7923
> URL: https://issues.apache.org/jira/browse/HADOOP-7923
> Project: Hadoop Common
>  Issue Type: Task
>  Components: build, documentation
>Affects Versions: 1.0.0, 1.1.0
>Reporter: Tsz Wo (Nicholas), SZE
>Assignee: Tsz Wo (Nicholas), SZE
> Fix For: 1.0.0
>
> Attachments: c7923_20111213_branch-1.patch, 
> h2643_20111207_branch-1.patch, h2643_20111213_branch-1.patch
>
>
> The docs version is still 0.20.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HADOOP-7923) Update doc versions from 0.20 to 1.0

2011-12-15 Thread Matt Foley (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-7923?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt Foley updated HADOOP-7923:
---

Target Version/s: 0.23.1, 1.0.0  (was: 1.0.0)
   Fix Version/s: 1.0.0

Nicholas, should this be ported to v0.23 too?  Leaving open for now.

> Update doc versions from 0.20 to 1.0
> 
>
> Key: HADOOP-7923
> URL: https://issues.apache.org/jira/browse/HADOOP-7923
> Project: Hadoop Common
>  Issue Type: Task
>  Components: build, documentation
>Affects Versions: 1.0.0, 1.1.0
>Reporter: Tsz Wo (Nicholas), SZE
>Assignee: Tsz Wo (Nicholas), SZE
> Fix For: 1.0.0
>
> Attachments: c7923_20111213_branch-1.patch, 
> h2643_20111207_branch-1.patch, h2643_20111213_branch-1.patch
>
>
> The docs version is still 0.20.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HADOOP-7461) Jackson Dependency Not Declared in Hadoop POM

2011-12-14 Thread Matt Foley (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-7461?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt Foley updated HADOOP-7461:
---

Target Version/s: 1.0.0

Two comments:
First, to request/propose that a bug be fixed in a given release, please add 
that release to the "Target Version" field in the bug report.
Second, if the fix is trivial, please construct a patch for the fix, test it, 
and upload it to the Jira, rather than complaining that no one (else) has fixed 
it in four months.

> Jackson Dependency Not Declared in Hadoop POM
> -
>
> Key: HADOOP-7461
> URL: https://issues.apache.org/jira/browse/HADOOP-7461
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 0.20.205.0
>Reporter: Ron Bodkin
>Assignee: Giridharan Kesavan
> Attachments: HADOOP-7461.patch
>
>
> (COMMENT: This bug still affects 0.20.205.0, four months after the bug was 
> filed.  This causes total failure, and the fix is trivial for whoever manages 
> the POM -- just add the missing dependency! --ben)
> This issue was identified and the fix & workaround was documented at 
> https://issues.cloudera.org/browse/DISTRO-44
> The issue affects use of Hadoop 0.20.203.0 from the Maven central repo. I 
> built a job using that maven repo and ran it, resulting in this failure:
> Exception in thread "main" java.lang.NoClassDefFoundError: 
> org/codehaus/jackson/map/JsonMappingException
>   at 
> thinkbig.hadoop.inputformat.TestXmlInputFormat.run(TestXmlInputFormat.java:18)
>   at 
> thinkbig.hadoop.inputformat.TestXmlInputFormat.main(TestXmlInputFormat.java:23)
> Caused by: java.lang.ClassNotFoundException: 
> org.codehaus.jackson.map.JsonMappingException
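The missing declaration is a plain compile-scope dependency in the published POM. A sketch of the element to add (the version shown is an assumption; it should match the Jackson version the Hadoop release actually bundles):

```xml
<dependency>
  <groupId>org.codehaus.jackson</groupId>
  <artifactId>jackson-mapper-asl</artifactId>
  <version>1.8.8</version> <!-- assumed; use the bundled version -->
</dependency>
```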

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HADOOP-7810) move hadoop archive to core from tools

2011-12-07 Thread Matt Foley (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-7810?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt Foley updated HADOOP-7810:
---

Target Version/s: 0.23.1, 1.0.0  (was: 1.0.0, 0.23.1)
   Fix Version/s: (was: 1.0.0)

Reverted from 1.0.0 and branch-1, per contributor's request.

> move hadoop archive to core from tools
> --
>
> Key: HADOOP-7810
> URL: https://issues.apache.org/jira/browse/HADOOP-7810
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 0.20.205.0, 0.24.0, 0.23.1, 1.0.0
>Reporter: John George
>Assignee: John George
>Priority: Blocker
> Attachments: hadoop-7810.branch-0.20-security.patch, 
> hadoop-7810.branch-0.20-security.patch, hadoop-7810.branch-0.20-security.patch
>
>
> "The HadoopArchives classes are included in the 
> $HADOOP_HOME/hadoop_tools.jar, but this file is not found in `hadoop 
> classpath`.
> A Pig script using HCatalog's dynamic partitioning with HAR enabled will 
> therefore fail if a jar with HAR is not included in the pig call's '-cp' and 
> '-Dpig.additional.jars' arguments."
> I am not aware of any reason to not include hadoop-tools.jar in 'hadoop 
> classpath'. Will attach a patch soon.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HADOOP-7854) UGI getCurrentUser is not synchronized

2011-11-30 Thread Matt Foley (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-7854?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt Foley updated HADOOP-7854:
---

Target Version/s: 0.23.1, 1.0.0  (was: 1.0.0, 0.24.0)
   Fix Version/s: 0.23.1

Corrected Fix Versions and Target Versions to correspond to actual commits.

> UGI getCurrentUser is not synchronized
> --
>
> Key: HADOOP-7854
> URL: https://issues.apache.org/jira/browse/HADOOP-7854
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 1.0.0
>Reporter: Daryn Sharp
>Assignee: Daryn Sharp
>Priority: Critical
> Fix For: 0.23.1, 1.0.0
>
> Attachments: HADOOP-7854-trunk.patch, HADOOP-7854.patch
>
>
> Sporadic {{ConcurrentModificationExceptions}} are originating from 
> {{UGI.getCurrentUser}} when it needs to create a new instance.  The problem 
> was specifically observed in a JT under heavy load when a post-job cleanup is 
> accessing the UGI while a new job is being processed.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HADOOP-7827) jsp pages missing DOCTYPE

2011-11-30 Thread Matt Foley (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-7827?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt Foley updated HADOOP-7827:
---

Fix Version/s: (was: 1.0.0)

Reopening bug and reverting this patch due to HADOOP-7867.  Need a solution 
that is also compatible with Mac Firefox.  Thanks.

> jsp pages missing DOCTYPE
> -
>
> Key: HADOOP-7827
> URL: https://issues.apache.org/jira/browse/HADOOP-7827
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 0.20.203.0
>Reporter: Dave Vronay
>Assignee: Dave Vronay
>Priority: Trivial
> Attachments: HADOOP-7827-branch-0.20-security.patch, 
> HADOOP-7827.patch, HADOOP-7827.patch
>
>   Original Estimate: 168h
>  Remaining Estimate: 168h
>
> The various jsp pages in the UI are all missing a DOCTYPE declaration.  This 
> causes the pages to render incorrectly on some browsers, such as IE9.  Every 
> UI page should have a valid declaration, such as <!DOCTYPE html>, as its 
> first line.  There are 31 files that need to be changed, all in the 
> core\src\webapps tree.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HADOOP-7848) hadoop rpm/deb binaries does not set the correct ownership and permission for the task-controller

2011-11-28 Thread Matt Foley (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-7848?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt Foley updated HADOOP-7848:
---

Target Version/s: 1.1.0  (was: 1.0.0)

> hadoop rpm/deb binaries does not set the correct ownership and permission for 
> the task-controller
> -
>
> Key: HADOOP-7848
> URL: https://issues.apache.org/jira/browse/HADOOP-7848
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 0.20.205.0
>Reporter: Arpit Gupta
>Assignee: Giridharan Kesavan
> Attachments: HADOOP-7848.patch
>
>
> currently the hadoop rpm installs the task-controller binary with the 
> permission
> -rwxr-xr-x 1 root root 39434 Oct  7 06:26 /usr/bin/task-controller
> It should belong to user root and group hadoop with permission 6050
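A minimal sketch of the target state described above: the installed binary should end up owned root:hadoop with mode 6050 (setuid root, setgid, group-execute only). The ownership change needs root, so this demo only exercises the permission bits on a scratch file; the real path would be /usr/bin/task-controller.

```shell
# chown root:hadoop /usr/bin/task-controller   # requires root on a real install
# chmod 6050 /usr/bin/task-controller
# Demonstrate the mode bits on a scratch file instead:
f=$(mktemp)
chmod 6050 "$f"
stat -c '%a' "$f"    # GNU stat prints the octal mode: 6050
rm -f "$f"
```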

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HADOOP-7850) hadoop rpm does not create the appropriate link for the native files

2011-11-28 Thread Matt Foley (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-7850?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt Foley updated HADOOP-7850:
---

Target Version/s: 1.1.0  (was: 1.0.0)

> hadoop rpm does not create the appropriate link for the native files
> 
>
> Key: HADOOP-7850
> URL: https://issues.apache.org/jira/browse/HADOOP-7850
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 0.20.205.0
>Reporter: Arpit Gupta
>Assignee: Giridharan Kesavan
>
> hadoop rpm installs the following files
> /usr/lib/libhadoop.so
> /usr/lib/libhadoop.so.1.0.0
> From the size it looks like the libhadoop.so.1.0.0 points to the 64 bit 
> native files
> ls -latr /usr/lib/libhadoop.so.1.0.0 
> /usr/lib/native/Linux-amd64-64/libhadoop.so
> -rw-r--r-- 1 root root 177483 Oct  7 06:22 
> /usr/lib/native/Linux-amd64-64/libhadoop.so
> -rw-r--r-- 1 root root 177483 Oct  7 06:25 /usr/lib/libhadoop.so.1.0.0
> And the libhadoop.so file points to the 32 bit version
> ls -latr /usr/lib/libhadoop.so /usr/lib/native/Linux-i386-32/libhadoop.so
> -rw-r--r-- 1 root root 160438 Oct  7 06:20 
> /usr/lib/native/Linux-i386-32/libhadoop.so
> -rw-r--r-- 1 root root 160438 Oct  7 06:20 /usr/lib/libhadoop.so
> This causes the 64bit tasktracker to not load the native libraries unless the 
> libhadoop.so file is linked to the libhadoop.so.1.0.0
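An illustrative fix, with paths mirroring the report: make the unversioned libhadoop.so a symlink chain ending at the platform-specific native build instead of shipping a stale 32-bit copy. This runs in a scratch directory; a real package would create the links under /usr/lib.

```shell
d=$(mktemp -d)
mkdir -p "$d/native/Linux-amd64-64"
printf '64-bit\n' > "$d/native/Linux-amd64-64/libhadoop.so"   # stand-in for the real library
# Versioned name points at the platform build; unversioned name points at the versioned one.
ln -sf "native/Linux-amd64-64/libhadoop.so" "$d/libhadoop.so.1.0.0"
ln -sf "libhadoop.so.1.0.0" "$d/libhadoop.so"
cat "$d/libhadoop.so"    # follows both links: prints 64-bit
rm -rf "$d"
```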

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HADOOP-7849) hadoop rpm overwrites the existing /etc/hadoop/hadoop-env.sh

2011-11-28 Thread Matt Foley (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-7849?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt Foley updated HADOOP-7849:
---

Target Version/s: 1.1.0  (was: 1.0.0)

> hadoop rpm overwrites the existing /etc/hadoop/hadoop-env.sh
> 
>
> Key: HADOOP-7849
> URL: https://issues.apache.org/jira/browse/HADOOP-7849
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 0.20.205.0
>Reporter: Arpit Gupta
>Assignee: Giridharan Kesavan
>
> The following steps were used to deploy
> 1. Create /etc/hadoop dir
> 2. create a file hadoop-env.sh in the above dir
> 3. install the hadoop rpm (rpm -ivh hadoop-0.20.205.0-1.amd64.rpm)
> The following console output is displayed
> Preparing...### [100%]
>1:hadoop warning: /etc/hadoop/hadoop-env.sh created as 
> /etc/hadoop/hadoop-env.sh.rpmnew
> ### [100%]
> After the install open up the /etc/hadoop/hadoop-env.sh and notice that the 
> file has been replaced.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HADOOP-7809) Backport HADOOP-5839 to 0.20-security - fixes to ec2 scripts to allow remote job submission

2011-11-28 Thread Matt Foley (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-7809?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt Foley updated HADOOP-7809:
---

Target Version/s: 1.1.0  (was: 1.0.0)

Received insufficient feedback to make 1.0.0.
Setting Target Version to 1.1.0.

> Backport HADOOP-5839 to 0.20-security - fixes to ec2 scripts to allow remote 
> job submission
> ---
>
> Key: HADOOP-7809
> URL: https://issues.apache.org/jira/browse/HADOOP-7809
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: contrib/cloud
>Reporter: Joydeep Sen Sarma
>Assignee: Matt Foley
> Attachments: hadoop-5839.2.patch
>
>
> The fix for HADOOP-5839 was committed to 0.21 more than a year ago.  This bug 
> is to backport the change (which is only 14 lines) to branch-0.20-security.
> ===
> Original description:
> i would very much like the option of submitting jobs from a workstation 
> outside ec2 to a hadoop cluster in ec2. This has been explored here:
> http://www.nabble.com/public-IP-for-datanode-on-EC2-tt19336240.html
> the net result of this is that we can make this work (along with using a 
> socks proxy) with a couple of changes in the ec2 scripts:
> a) use public 'hostname' for fs.default.name setting (instead of the private 
> hostname being used currently)
> b) mark hadoop.rpc.socket.factory.class.default as final variable in the 
> generated hadoop-site.xml (that applies to server side)
> #a has no downside as far as i can tell since public hostnames resolve to 
> internal/private IP addresses within ec2 (so traffic is optimally routed).
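A hedged sketch of change (b) above: the generated hadoop-site.xml would carry the socket-factory property marked final so server-side code cannot override it. The property name comes from the description; the value and output path are illustrative.

```shell
site=$(mktemp)    # stands in for the generated hadoop-site.xml
cat > "$site" <<'EOF'
<property>
  <name>hadoop.rpc.socket.factory.class.default</name>
  <value>org.apache.hadoop.net.StandardSocketFactory</value>
  <final>true</final>
</property>
EOF
grep -c '<final>true</final>' "$site"    # prints 1
rm -f "$site"
```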

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HADOOP-7732) hadoop java docs missing hdfs package

2011-11-28 Thread Matt Foley (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-7732?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt Foley updated HADOOP-7732:
---

Target Version/s: 1.1.0  (was: 1.0.0)
Assignee: Arpit Gupta

> hadoop java docs missing hdfs package
> -
>
> Key: HADOOP-7732
> URL: https://issues.apache.org/jira/browse/HADOOP-7732
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 0.20.204.0, 0.20.205.0
>Reporter: Arpit Gupta
>Assignee: Arpit Gupta
>
> the following link 
> http://hadoop.apache.org/common/docs/current/api/org/apache/hadoop/hdfs/package-summary.html
> leads to a 404

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HADOOP-7751) rpm version is not being picked from the -Dversion option in 205

2011-11-28 Thread Matt Foley (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-7751?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt Foley updated HADOOP-7751:
---

Target Version/s: 1.1.0  (was: 1.0.0)

> rpm version is not being picked from the -Dversion option in 205
> 
>
> Key: HADOOP-7751
> URL: https://issues.apache.org/jira/browse/HADOOP-7751
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 0.20.205.0
>Reporter: Arpit Gupta
>Assignee: Giridharan Kesavan
>
> ran ant build -Dversion=0.20.205.1 and the rpm generated is the following
> hadoop-0.20.205.0-1.amd64.rpm
> Whereas the tar.gz has the correct value
> hadoop-0.20.205.1.tar.gz
> the same version string should be applied to tarball and rpms

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HADOOP-7825) Hadoop wrapper script not picking up native libs correctly

2011-11-28 Thread Matt Foley (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-7825?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt Foley updated HADOOP-7825:
---

Target Version/s: 1.1.0

Got insufficient feedback on this and HADOOP-6453 to make it into 1.0.0.
Setting target version to 1.1.0.

> Hadoop wrapper script not picking up native libs correctly
> --
>
> Key: HADOOP-7825
> URL: https://issues.apache.org/jira/browse/HADOOP-7825
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: scripts
>Affects Versions: 0.20.205.0
> Environment: Debian 6.0 x64_64
> java version "1.6.0_26"
> Java(TM) SE Runtime Environment (build 1.6.0_26-b03)
> Java HotSpot(TM) 64-Bit Server VM (build 20.1-b02, mixed mode)
>Reporter: stephen mulcahy
>
> Originally discussed in 
> https://mail-archives.apache.org/mod_mbox/hadoop-common-user/20.mbox/%3C4EC3A3AE.7060402%40deri.org%3E
> I'm testing out native lib support on our amd64 test cluster 
> running 0.20.205. Running the following
> ./bin/hadoop jar hadoop-test-0.20.205.0.jar testsequencefile -seed 0 
> -count 1000 -compressType RECORD xxx -codec 
> org.apache.hadoop.io.compress.GzipCodec -check 2
> it fails with
> WARN util.NativeCodeLoader: Unable to load native-hadoop library for 
> your platform... using builtin-java classes where applicable
> Looking at
> bin/hadoop
> it seems to successfully detect that the native libs are available (they 
> seem to come pre-compiled with 0.20.205 which is nice)
>if [ -d "${HADOOP_HOME}/lib/native" ]; then
>  if [ "x$JAVA_LIBRARY_PATH" != "x" ]; then
>  
> JAVA_LIBRARY_PATH=${JAVA_LIBRARY_PATH}:${HADOOP_HOME}/lib/native/${JAVA_PLATFORM}
>  else
>JAVA_LIBRARY_PATH=${HADOOP_HOME}/lib/native/${JAVA_PLATFORM}
>  fi
>fi
> and sets JAVA_LIBRARY_PATH to contain them.
> Then in the following line, if ${HADOOP_HOME}/lib contains libhadoop.a 
> (which is seems to in the stock tar) then it proceeds to ignore the 
> native libs
>if [ -e "${HADOOP_PREFIX}/lib/libhadoop.a" ]; then
>  JAVA_LIBRARY_PATH=${HADOOP_PREFIX}/lib
>fi
> The libhadoop.a in ${HADOOP_HOME}/lib seems to be a copy of the one in 
> lib/native/Linux-i386-32, going by the file sizes (as also noted by 
> https://mail-archives.apache.org/mod_mbox/hadoop-common-user/20.mbox/%3ccaocnvr2azudnn0lfhmtqumayujytvhfkmmm_j0r-bmxw2wu...@mail.gmail.com%3E)
> hadoop@testhbase01:~$ ls -la hadoop/lib/libhadoop.*
> -rw-r--r-- 1 hadoop hadoop 237244 Oct  7 08:20 hadoop/lib/libhadoop.a
> -rw-r--r-- 1 hadoop hadoop    877 Oct  7 08:20 hadoop/lib/libhadoop.la
> -rw-r--r-- 1 hadoop hadoop 160438 Oct  7 08:20 hadoop/lib/libhadoop.so
> -rw-r--r-- 1 hadoop hadoop 160438 Oct  7 08:19 hadoop/lib/libhadoop.so.1.0.0
> hadoop@testhbase01:~$ ls -la hadoop/lib/native/Linux-i386-32/
> total 728
> drwxr-xr-x 3 hadoop hadoop   4096 Nov 15 14:05 .
> drwxr-xr-x 5 hadoop hadoop   4096 Oct  7 08:24 ..
> -rw-r--r-- 1 hadoop hadoop 237244 Oct  7 08:20 libhadoop.a
> -rw-r--r-- 1 hadoop hadoop    877 Oct  7 08:20 libhadoop.la
> -rw-r--r-- 1 hadoop hadoop 160438 Oct  7 08:20 libhadoop.so
> -rw-r--r-- 1 hadoop hadoop 160438 Oct  7 08:20 libhadoop.so.1
> -rw-r--r-- 1 hadoop hadoop 160438 Oct  7 08:20 libhadoop.so.1.0.0
> A possible solution includes removing libhadoop.a and friends from 
> ${HADOOP_HOME}/lib and possibly also modifying the hadoop wrapper to remove 
>if [ -e "${HADOOP_PREFIX}/lib/libhadoop.a" ]; then
>  JAVA_LIBRARY_PATH=${HADOOP_PREFIX}/lib
>fi
> unless there is some other reason for this to exist.
> This was also noted in a comment to HADOOP-6453
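The suggested change can be sketched as a small helper (the name is illustrative): prefer the detected platform-specific native directory and drop the unconditional override that re-points JAVA_LIBRARY_PATH at ${HADOOP_PREFIX}/lib whenever a stray libhadoop.a sits there.

```shell
pick_library_path() {
  prefix="$1"; platform="$2"
  if [ -d "$prefix/lib/native/$platform" ]; then
    echo "$prefix/lib/native/$platform"   # platform-specific dir wins
  else
    echo "$prefix/lib"                    # fall back only when no native dir exists
  fi
}
d=$(mktemp -d)
mkdir -p "$d/lib/native/Linux-amd64-64"
touch "$d/lib/libhadoop.a"                # the stray archive no longer wins
pick_library_path "$d" "Linux-amd64-64"   # prints .../lib/native/Linux-amd64-64
rm -rf "$d"
```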

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HADOOP-6453) Hadoop wrapper script shouldn't ignore an existing JAVA_LIBRARY_PATH

2011-11-28 Thread Matt Foley (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-6453?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt Foley updated HADOOP-6453:
---

Target Version/s: 0.22.0, 1.1.0  (was: 0.22.0, 1.0.0)

Got insufficient feedback on this and HADOOP-7825 to make it into 1.0.0.
Setting target version to 1.1.0.

> Hadoop wrapper script shouldn't ignore an existing JAVA_LIBRARY_PATH
> 
>
> Key: HADOOP-6453
> URL: https://issues.apache.org/jira/browse/HADOOP-6453
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: scripts
>Affects Versions: 0.20.2, 0.21.0, 0.22.0
>Reporter: Chad Metcalf
>Assignee: Matt Foley
>Priority: Minor
> Fix For: 0.22.0
>
> Attachments: HADOOP-6453-0.20.patch, HADOOP-6453-0.20v2.patch, 
> HADOOP-6453-0.20v3.patch, HADOOP-6453-trunkv2.patch, 
> HADOOP-6453-trunkv3.patch, HADOOP-6453.trunk.patch
>
>
> Currently the hadoop wrapper script assumes it's the only place that uses 
> JAVA_LIBRARY_PATH and initializes it to an empty string.
> JAVA_LIBRARY_PATH=''
> This prevents anyone from setting this outside of the hadoop wrapper (say 
> hadoop-config.sh) for their own native libraries.
> The fix is pretty simple. Don't initialize it to '' and append the native 
> libs like normal. 
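The fix described above can be sketched as follows (helper name is illustrative; the branch mirrors the existing bin/hadoop logic): append the bundled native directory to whatever JAVA_LIBRARY_PATH the caller already exported, rather than resetting it to ''.

```shell
append_native_path() {
  existing="$1"; native_dir="$2"
  if [ "x$existing" != "x" ]; then
    echo "$existing:$native_dir"   # preserve the user's entries
  else
    echo "$native_dir"
  fi
}
# A user-set path survives and the native dir is appended:
append_native_path "/opt/mylibs" "/usr/lib/hadoop/lib/native/Linux-amd64-64"
```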

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HADOOP-7827) jsp pages missing DOCTYPE

2011-11-27 Thread Matt Foley (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-7827?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt Foley updated HADOOP-7827:
---

Target Version/s: 1.0.0  (was: 0.23.1, 1.0.0)
   Fix Version/s: (was: 0.23.1)

This bug records the resolution for pre-split branches.
The sub-task bugs record the resolution for post-split branches/trunk.

> jsp pages missing DOCTYPE
> -
>
> Key: HADOOP-7827
> URL: https://issues.apache.org/jira/browse/HADOOP-7827
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 0.20.203.0
>Reporter: Dave Vronay
>Assignee: Dave Vronay
>Priority: Trivial
> Fix For: 1.0.0
>
> Attachments: HADOOP-7827-branch-0.20-security.patch, 
> HADOOP-7827.patch, HADOOP-7827.patch
>
>   Original Estimate: 168h
>  Remaining Estimate: 168h
>
> The various jsp pages in the UI are all missing a DOCTYPE declaration.  This 
> causes the pages to render incorrectly on some browsers, such as IE9.  Every 
> UI page should have a valid tag, such as <!DOCTYPE html>, as its first 
> line.  There are 31 files that need to be changed, all in the 
> core\src\webapps tree.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HADOOP-7715) see log4j Error when running mr jobs and certain dfs calls

2011-11-27 Thread Matt Foley (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-7715?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt Foley updated HADOOP-7715:
---

Fix Version/s: (was: 1.0.0)
   0.20.205.0
   0.23.0

Corrected Fix Versions field, based on commit dates.

> see log4j Error when running mr jobs and certain dfs calls
> --
>
> Key: HADOOP-7715
> URL: https://issues.apache.org/jira/browse/HADOOP-7715
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: conf
>Reporter: Arpit Gupta
>Assignee: Eric Yang
> Fix For: 0.20.205.0, 0.23.0
>
> Attachments: HADOOP-7715-trunk.patch, HADOOP-7715.patch
>
>


--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HADOOP-7711) hadoop-env.sh generated from templates has duplicate info

2011-11-27 Thread Matt Foley (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-7711?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt Foley updated HADOOP-7711:
---

Fix Version/s: (was: 1.0.0)
   0.20.205.0
   0.23.0

Corrected Fix Versions field, based on commit dates.

> hadoop-env.sh generated from templates has duplicate info
> -
>
> Key: HADOOP-7711
> URL: https://issues.apache.org/jira/browse/HADOOP-7711
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: conf
>Reporter: Arpit Gupta
>Assignee: Arpit Gupta
> Fix For: 0.20.205.0, 0.23.0
>
> Attachments: HADOOP-7711.20s.patch, HADOOP-7711.patch
>
>


--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HADOOP-7853) multiple javax security configurations cause conflicts

2011-11-27 Thread Matt Foley (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-7853?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt Foley updated HADOOP-7853:
---

Target Version/s: 0.23.1, 1.0.0  (was: 1.0.0)
   Fix Version/s: 1.0.0

> multiple javax security configurations cause conflicts
> --
>
> Key: HADOOP-7853
> URL: https://issues.apache.org/jira/browse/HADOOP-7853
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 0.23.0, 0.24.0, 0.23.1, 1.0.0
>Reporter: Daryn Sharp
>Assignee: Daryn Sharp
>Priority: Blocker
> Fix For: 1.0.0
>
> Attachments: HADOOP-7853-1.patch, HADOOP-7853-1.patch, 
> HADOOP-7853.patch
>
>
> Both UGI and the SPNEGO KerberosAuthenticator set the global javax security 
> configuration.  SPNEGO stomps on UGI's security config which leads to 
> kerberos/SASL authentication errors.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HADOOP-7804) enable hadoop config generator to set dfs.block.local-path-access.user to enable short circuit read

2011-11-27 Thread Matt Foley (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-7804?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt Foley updated HADOOP-7804:
---

   Resolution: Fixed
Fix Version/s: 1.0.0
   Status: Resolved  (was: Patch Available)

> enable hadoop config generator to set dfs.block.local-path-access.user to 
> enable short circuit read
> ---
>
> Key: HADOOP-7804
> URL: https://issues.apache.org/jira/browse/HADOOP-7804
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: conf
>Affects Versions: 0.20.205.0
>Reporter: Arpit Gupta
>Assignee: Arpit Gupta
> Fix For: 1.0.0
>
> Attachments: HADOOP-7804.branch-0.20-security.patch, 
> HADOOP-7804.branch-0.20-security.patch, 
> HADOOP-7804.branch-0.20-security.patch, HADOOP-7804.patch, HADOOP-7804.patch, 
> HADOOP-7804.patch
>
>
> We have a new config that allows selecting which user has access to 
> short-circuit read. We should make that configurable through the config 
> generator scripts.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HADOOP-7810) move hadoop archive to core from tools

2011-11-27 Thread Matt Foley (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-7810?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt Foley updated HADOOP-7810:
---

Target Version/s: 0.23.1, 1.0.0  (was: 1.0.0)

bq. @John George: Adding trunk and 0.23 in the 'Affects version' since we need 
to make sure the behavior is similar in these as well.

I think this was intended to be added to the "Target Versions" list.
Only adding 0.23.1 since also adding it to trunk is implicit.

> move hadoop archive to core from tools
> --
>
> Key: HADOOP-7810
> URL: https://issues.apache.org/jira/browse/HADOOP-7810
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 0.20.205.0, 0.24.0, 0.23.1, 1.0.0
>Reporter: John George
>Assignee: John George
>Priority: Blocker
> Fix For: 1.0.0
>
> Attachments: hadoop-7810.branch-0.20-security.patch, 
> hadoop-7810.branch-0.20-security.patch, hadoop-7810.branch-0.20-security.patch
>
>
> "The HadoopArchives classes are included in the 
> $HADOOP_HOME/hadoop_tools.jar, but this file is not found in `hadoop 
> classpath`.
> A Pig script using HCatalog's dynamic partitioning with HAR enabled will 
> therefore fail if a jar with HAR is not included in the pig call's '-cp' and 
> '-Dpig.additional.jars' arguments."
> I am not aware of any reason to not include hadoop-tools.jar in 'hadoop 
> classpath'. Will attach a patch soon.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HADOOP-6840) Support non-recursive create() in FileSystem & SequenceFile.Writer

2011-11-27 Thread Matt Foley (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-6840?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt Foley updated HADOOP-6840:
---

Target Version/s: 0.20-append, 0.21.1, 1.0.0  (was: 1.0.0)
   Fix Version/s: (was: 0.21.1)
  (was: 0.20-append)

Adding 0.20-append and 0.21.1 to the Target Versions list, per Harsh's request 
of 18/Jun/11.
Removing same from the Fix Versions list, as no commits have been recorded for 
those branches.

> Support non-recursive create() in FileSystem & SequenceFile.Writer
> --
>
> Key: HADOOP-6840
> URL: https://issues.apache.org/jira/browse/HADOOP-6840
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs, io
>Affects Versions: 0.20-append, 0.21.0
>Reporter: Nicolas Spiegelberg
>Assignee: Jitendra Nath Pandey
>Priority: Minor
> Fix For: 1.0.0
>
> Attachments: HADOOP-6840-branch-0.20-security.patch, 
> HADOOP-6840_0.20-append.patch, HADOOP-6840_0.21-2.patch, 
> HADOOP-6840_0.21.patch
>
>
> The proposed solution for HBASE-2312 requires the sequence file to handle a 
> non-recursive create.  This is already supported by HDFS, but needs to have 
> an equivalent FileSystem & SequenceFile.Writer API.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HADOOP-7854) UGI getCurrentUser is not synchronized

2011-11-27 Thread Matt Foley (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-7854?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt Foley updated HADOOP-7854:
---

   Resolution: Fixed
Fix Version/s: 1.0.0
   Status: Resolved  (was: Patch Available)

Marking this fixed in 1.0.0 (0.20.205.1), because this patch will prevent the 
problem from recurring.

If further disposition is required (such as opening bug against JDK), please do 
so.

Should this be merged to trunk?

> UGI getCurrentUser is not synchronized
> --
>
> Key: HADOOP-7854
> URL: https://issues.apache.org/jira/browse/HADOOP-7854
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 1.0.0
>Reporter: Daryn Sharp
>Assignee: Daryn Sharp
>Priority: Critical
> Fix For: 1.0.0
>
> Attachments: HADOOP-7854.patch
>
>
> Sporadic {{ConcurrentModificationExceptions}} are originating from 
> {{UGI.getCurrentUser}} when it needs to create a new instance.  The problem 
> was specifically observed in a JT under heavy load when a post-job cleanup is 
> accessing the UGI while a new job is being processed.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HADOOP-7827) jsp pages missing DOCTYPE

2011-11-23 Thread Matt Foley (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-7827?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt Foley updated HADOOP-7827:
---

  Resolution: Fixed
   Fix Version/s: 0.23.1
  0.20.205.1
Target Version/s: 0.20.205.1, 0.23.1  (was: 0.23.1, 0.20.205.1)
Hadoop Flags: Reviewed
  Status: Resolved  (was: Patch Available)

Committed to 0.20.205.1, branch-20-security, 0.23.1, and trunk.

Thanks, Dave!

> jsp pages missing DOCTYPE
> -
>
> Key: HADOOP-7827
> URL: https://issues.apache.org/jira/browse/HADOOP-7827
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 0.20.203.0
>Reporter: Dave Vronay
>Assignee: Dave Vronay
>Priority: Trivial
> Fix For: 0.20.205.1, 0.23.1
>
> Attachments: HADOOP-7827-branch-0.20-security.patch, 
> HADOOP-7827.patch, HADOOP-7827.patch
>
>   Original Estimate: 168h
>  Remaining Estimate: 168h
>
> The various jsp pages in the UI are all missing a DOCTYPE declaration.  This 
> causes the pages to render incorrectly on some browsers, such as IE9.  Every 
> UI page should have a valid tag, such as <!DOCTYPE html>, as its first 
> line.  There are 31 files that need to be changed, all in the 
> core\src\webapps tree.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HADOOP-7827) jsp pages missing DOCTYPE

2011-11-22 Thread Matt Foley (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-7827?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt Foley updated HADOOP-7827:
---

Target Version/s: 0.20.205.1, 0.23.1  (was: 0.23.1, 0.20.205.1)
  Status: Patch Available  (was: Open)

> jsp pages missing DOCTYPE
> -
>
> Key: HADOOP-7827
> URL: https://issues.apache.org/jira/browse/HADOOP-7827
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 0.20.203.0
>Reporter: Dave Vronay
>Priority: Trivial
> Attachments: HADOOP-7827-branch-0.20-security.patch, 
> HADOOP-7827.patch, HADOOP-7827.patch
>
>   Original Estimate: 168h
>  Remaining Estimate: 168h
>
> The various jsp pages in the UI are all missing a DOCTYPE declaration.  This 
> causes the pages to render incorrectly on some browsers, such as IE9.  Every 
> UI page should have a valid tag, such as <!DOCTYPE html>, as its first 
> line.  There are 31 files that need to be changed, all in the 
> core\src\webapps tree.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HADOOP-7827) jsp pages missing DOCTYPE

2011-11-22 Thread Matt Foley (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-7827?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt Foley updated HADOOP-7827:
---

Attachment: (was: HADOOP-7827.patch)

> jsp pages missing DOCTYPE
> -
>
> Key: HADOOP-7827
> URL: https://issues.apache.org/jira/browse/HADOOP-7827
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 0.20.203.0
>Reporter: Dave Vronay
>Priority: Trivial
> Attachments: HADOOP-7827-branch-0.20-security.patch, 
> HADOOP-7827.patch, HADOOP-7827.patch
>
>   Original Estimate: 168h
>  Remaining Estimate: 168h
>
> The various jsp pages in the UI are all missing a DOCTYPE declaration.  This 
> causes the pages to render incorrectly on some browsers, such as IE9.  Every 
> UI page should have a valid tag, such as <!DOCTYPE html>, as its first 
> line.  There are 31 files that need to be changed, all in the 
> core\src\webapps tree.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HADOOP-7827) jsp pages missing DOCTYPE

2011-11-22 Thread Matt Foley (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-7827?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt Foley updated HADOOP-7827:
---

Attachment: HADOOP-7827.patch
HADOOP-7827.patch

Hey Dave, there's a secret to submitting both patches: if you submit the one 
for trunk LAST, then the test-patch robot will run correctly.  (It currently 
doesn't support running non-trunk patches.) Re-submitting the trunk patch to 
re-run test-patch.

> jsp pages missing DOCTYPE
> -
>
> Key: HADOOP-7827
> URL: https://issues.apache.org/jira/browse/HADOOP-7827
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 0.20.203.0
>Reporter: Dave Vronay
>Priority: Trivial
> Attachments: HADOOP-7827-branch-0.20-security.patch, 
> HADOOP-7827.patch, HADOOP-7827.patch, HADOOP-7827.patch
>
>   Original Estimate: 168h
>  Remaining Estimate: 168h
>
> The various jsp pages in the UI are all missing a DOCTYPE declaration.  This 
> causes the pages to render incorrectly on some browsers, such as IE9.  Every 
> UI page should have a valid tag, such as <!DOCTYPE html>, as its first 
> line.  There are 31 files that need to be changed, all in the 
> core\src\webapps tree.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HADOOP-7816) Allow HADOOP_HOME deprecated warning suppression based on config specified in hadoop-env.sh

2011-11-21 Thread Matt Foley (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-7816?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt Foley updated HADOOP-7816:
---

Target Version/s: 0.20.205.1  (was: 0.20.205.1, 0.20.205.0)
   Fix Version/s: (was: 0.20.206.0)

Corrected the Target and Fixed fields.

> Allow HADOOP_HOME deprecated warning suppression based on config specified in 
> hadoop-env.sh
> ---
>
> Key: HADOOP-7816
> URL: https://issues.apache.org/jira/browse/HADOOP-7816
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 0.20.205.0
>Reporter: Dave Thompson
>Assignee: Dave Thompson
> Fix For: 0.20.205.1
>
> Attachments: rel-0.20.205.0-rc1_hadoop7816.patch
>
>
> Move suppression check for "Warning: $HADOOP_HOME is deprecated"  to after 
> sourcing of hadoop-env.sh so that people can set HADOOP_HOME_WARN_SUPPRESS 
> inside the config.
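A minimal sketch of the reordering, assuming variable names from the report; the `warn_check` function is hypothetical shorthand for the check in bin/hadoop:

```shell
# Sketch: the deprecation check must run AFTER hadoop-env.sh is sourced,
# so a HADOOP_HOME_WARN_SUPPRESS set in the config actually takes effect.
warn_check() {
  if [ -z "$HADOOP_HOME_WARN_SUPPRESS" ] && [ -n "$HADOOP_HOME" ]; then
    echo "Warning: \$HADOOP_HOME is deprecated."
  fi
}

HADOOP_HOME=/opt/hadoop
HADOOP_HOME_WARN_SUPPRESS=true   # imagine this line sourced from hadoop-env.sh
warn_check                       # prints nothing: the warning is suppressed
```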





[jira] [Updated] (HADOOP-7827) jsp pages missing DOCTYPE

2011-11-21 Thread Matt Foley (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-7827?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt Foley updated HADOOP-7827:
---

Target Version/s: 0.20.205.1, 0.23.1

> jsp pages missing DOCTYPE
> -
>
> Key: HADOOP-7827
> URL: https://issues.apache.org/jira/browse/HADOOP-7827
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 0.20.203.0
>Reporter: Dave Vronay
>Priority: Trivial
>   Original Estimate: 168h
>  Remaining Estimate: 168h
>
> The various jsp pages in the UI are all missing a DOCTYPE declaration.  This 
> causes the pages to render incorrectly on some browsers, such as IE9.  Every 
> UI page should have a valid DOCTYPE tag, such as <!DOCTYPE html>, as its 
> first line.  There are 31 files that need to be changed, all in the 
> core\src\webapps tree.





[jira] [Updated] (HADOOP-4012) Providing splitting support for bzip2 compressed files

2011-11-14 Thread Matt Foley (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-4012?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt Foley updated HADOOP-4012:
---

Target Version/s:   (was: 0.20.206.0)

Since this bug has been closed and cannot be reopened, I've created a new bug 
for the port to branch-0.20-security, HADOOP-7823.  Please submit a patch to 
that Jira and we'll get it reviewed.  Thanks.

> Providing splitting support for bzip2 compressed files
> --
>
> Key: HADOOP-4012
> URL: https://issues.apache.org/jira/browse/HADOOP-4012
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: io
>Affects Versions: 0.21.0
>Reporter: Abdul Qadeer
>Assignee: Abdul Qadeer
> Fix For: 0.21.0
>
> Attachments: C4012-12.patch, C4012-13.patch, C4012-14.patch, 
> Hadoop-4012-version1.patch, Hadoop-4012-version10.patch, 
> Hadoop-4012-version11.patch, Hadoop-4012-version2.patch, 
> Hadoop-4012-version3.patch, Hadoop-4012-version4.patch, 
> Hadoop-4012-version5.patch, Hadoop-4012-version6.patch, 
> Hadoop-4012-version7.patch, Hadoop-4012-version8.patch, 
> Hadoop-4012-version9.patch
>
>
> Hadoop assumes that if the input data is compressed, it cannot be split 
> (mainly due to the limitation of many codecs that need the whole input 
> stream to decompress successfully).  So in such a case, Hadoop prepares only 
> one split per compressed file, where the lower split limit is at 0 while the 
> upper limit is the end of the file.  The consequence of this decision is 
> that one compressed file goes to a single mapper.  Although this circumvents 
> the codec limitation mentioned above, it substantially reduces the 
> parallelism that splitting would otherwise make possible.
> BZip2 is a compression/decompression algorithm that performs compression on 
> blocks of data, and these compressed blocks can later be decompressed 
> independently of each other.  This is an opportunity: instead of one 
> BZip2-compressed file going to one mapper, we can process chunks of the file 
> in parallel.  The correctness criterion for such processing is that, for a 
> bzip2-compressed file, each compressed block should be processed by only one 
> mapper, and ultimately all the blocks of the file should be processed.  (By 
> processing we mean the actual utilization of the uncompressed data, coming 
> out of the codecs, in a mapper.)
> We are writing the code to implement this suggested functionality.  Although 
> we have used bzip2 as an example, we have tried to extend Hadoop's 
> compression interfaces so that any other codec with the same capability as 
> bzip2 could easily use the splitting support.  The details of these 
> changes will be posted when we submit the code.
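The independence property can be seen with the stock bzip2 tool: independently compressed streams concatenate into a file that still decompresses as a whole. (A sketch assuming the `bzip2`/`bzcat` binaries are installed; strictly this shows stream-level independence, while the proposed feature works at the finer level of bzip2's internal blocks, but the decompression-independence principle is the same.)

```shell
# Two chunks compressed independently of each other...
printf 'alpha\n' | bzip2 > /tmp/part1.bz2
printf 'beta\n'  | bzip2 > /tmp/part2.bz2
# ...concatenate into one .bz2 whose pieces still decompress correctly,
# which is what lets each mapper start at a compressed-block boundary.
cat /tmp/part1.bz2 /tmp/part2.bz2 > /tmp/whole.bz2
bzcat /tmp/whole.bz2    # prints "alpha" then "beta"
```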





[jira] [Updated] (HADOOP-7823) port HADOOP-4012 to branch-0.20-security

2011-11-14 Thread Matt Foley (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-7823?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt Foley updated HADOOP-7823:
---

Fix Version/s: (was: 0.20.206.0)

> port HADOOP-4012 to branch-0.20-security
> 
>
> Key: HADOOP-7823
> URL: https://issues.apache.org/jira/browse/HADOOP-7823
> Project: Hadoop Common
>  Issue Type: New Feature
>Affects Versions: 0.20.205.0
>Reporter: Tim Broberg
>
> Please see HADOOP-4012 - Providing splitting support for bzip2 compressed 
> files.





[jira] [Updated] (HADOOP-7810) add hadoop-tools.jar to 'hadoop classpath'

2011-11-10 Thread Matt Foley (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-7810?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt Foley updated HADOOP-7810:
---

Target Version/s: 0.20.205.1

When targeting a patch for a given release, please list that release in the 
"Target Version" field.  Thank you.

> add hadoop-tools.jar to 'hadoop classpath'
> --
>
> Key: HADOOP-7810
> URL: https://issues.apache.org/jira/browse/HADOOP-7810
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 0.20.205.0, 0.20.205.1
>Reporter: John George
>Assignee: John George
>Priority: Blocker
> Attachments: hadoop-7810.branch-0.20-security.patch
>
>
> "The HadoopArchives classes are included in the 
> $HADOOP_HOME/hadoop_tools.jar, but this file is not found in `hadoop 
> classpath`.
> A Pig script using HCatalog's dynamic partitioning with HAR enabled will 
> therefore fail if a jar with HAR is not included in the pig call's '-cp' and 
> '-Dpig.additional.jars' arguments."
> I am not aware of any reason to not include hadoop-tools.jar in 'hadoop 
> classpath'. Will attach a patch soon.
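A sketch of the kind of change implied, in the style of bin/hadoop's classpath assembly. The jar name pattern and paths are assumptions, and a scratch directory stands in for a real install:

```shell
# Stand-in install tree for the demo
HADOOP_HOME=$(mktemp -d)
touch "$HADOOP_HOME/hadoop-tools-0.20.205.1.jar"

# Assemble CLASSPATH the way bin/hadoop does for other jars,
# additionally sweeping in hadoop-tools-*.jar
CLASSPATH="$HADOOP_HOME/conf"
for f in "$HADOOP_HOME"/hadoop-tools-*.jar; do
  CLASSPATH="$CLASSPATH:$f"
done
echo "$CLASSPATH"
```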





[jira] [Updated] (HADOOP-7724) hadoop-setup-conf.sh should put proxy user info into the core-site.xml

2011-11-09 Thread Matt Foley (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-7724?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt Foley updated HADOOP-7724:
---

Target Version/s: 0.20.205.0  (was: 0.20.205.1)

> hadoop-setup-conf.sh should put proxy user info into the core-site.xml 
> ---
>
> Key: HADOOP-7724
> URL: https://issues.apache.org/jira/browse/HADOOP-7724
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 0.20.205.0
>Reporter: Giridharan Kesavan
>Assignee: Arpit Gupta
> Fix For: 0.20.205.0, 0.23.0
>
> Attachments: HADOOP-7724.branch-0.20-security.patch, 
> HADOOP-7724.patch, HADOOP-7724.patch
>
>
> proxy user info should go to the core-site.xml instead of the hdfs-site.xml
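For reference, the proxy-user settings in question take this shape in core-site.xml; the user name and host/group values below are examples, not taken from the patch:

```xml
<property>
  <name>hadoop.proxyuser.oozie.hosts</name>
  <value>gateway.example.com</value>
</property>
<property>
  <name>hadoop.proxyuser.oozie.groups</name>
  <value>users</value>
</property>
```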





[jira] [Updated] (HADOOP-7723) Automatically generate good Release Notes

2011-11-09 Thread Matt Foley (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-7723?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt Foley updated HADOOP-7723:
---

Target Version/s: 0.20.206.0, 0.23.0  (was: 0.23.0, 0.20.205.0)

> Automatically generate good Release Notes
> -
>
> Key: HADOOP-7723
> URL: https://issues.apache.org/jira/browse/HADOOP-7723
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 0.20.204.0, 0.23.0
>Reporter: Matt Foley
>Assignee: Matt Foley
>
> In branch-0.20-security, there is a tool src/docs/relnotes.py, that 
> automatically generates Release Notes.  Fix deficiencies and port it up to 
> trunk.





[jira] [Updated] (HADOOP-6453) Hadoop wrapper script shouldn't ignore an existing JAVA_LIBRARY_PATH

2011-11-09 Thread Matt Foley (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-6453?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt Foley updated HADOOP-6453:
---

Attachment: HADOOP-6453-0.20v3.patch

Re-based to branch-0.20-security-205.  Won't mark "Patch Available" since this 
patch isn't for trunk.

Please review.

> Hadoop wrapper script shouldn't ignore an existing JAVA_LIBRARY_PATH
> 
>
> Key: HADOOP-6453
> URL: https://issues.apache.org/jira/browse/HADOOP-6453
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: scripts
>Affects Versions: 0.20.2, 0.21.0, 0.22.0
>Reporter: Chad Metcalf
>Assignee: Matt Foley
>Priority: Minor
> Fix For: 0.22.0
>
> Attachments: HADOOP-6453-0.20.patch, HADOOP-6453-0.20v2.patch, 
> HADOOP-6453-0.20v3.patch, HADOOP-6453-trunkv2.patch, 
> HADOOP-6453-trunkv3.patch, HADOOP-6453.trunk.patch
>
>
> Currently the hadoop wrapper script assumes it's the only place that uses 
> JAVA_LIBRARY_PATH and initializes it to an empty value:
> JAVA_LIBRARY_PATH=''
> This prevents anyone from setting this variable outside of the hadoop wrapper 
> (say, in hadoop-config.sh) for their own native libraries.
> The fix is pretty simple: don't initialize it to '', and append the native 
> libs as usual.
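A sketch of the append-instead-of-clobber behavior the fix calls for; the platform string and paths are made up for the demo:

```shell
JAVA_LIBRARY_PATH="/my/custom/native"   # pre-set by the caller, e.g. in hadoop-config.sh
HADOOP_HOME=/opt/hadoop
JAVA_PLATFORM=Linux-amd64-64            # assumed platform string

# Append Hadoop's native dir rather than starting from ''
NATIVE_DIR="$HADOOP_HOME/lib/native/$JAVA_PLATFORM"
if [ -n "$JAVA_LIBRARY_PATH" ]; then
  JAVA_LIBRARY_PATH="$JAVA_LIBRARY_PATH:$NATIVE_DIR"
else
  JAVA_LIBRARY_PATH="$NATIVE_DIR"
fi
echo "$JAVA_LIBRARY_PATH"
```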





[jira] [Updated] (HADOOP-6886) LocalFileSystem Needs createNonRecursive API

2011-11-09 Thread Matt Foley (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-6886?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt Foley updated HADOOP-6886:
---

Target Version/s: 0.20-append, 0.20.205.1  (was: 0.20.205.1)
   Fix Version/s: (was: 0.20-append)

Corrected Target Version / Fix Version field usage.

> LocalFileSystem Needs createNonRecursive API
> 
>
> Key: HADOOP-6886
> URL: https://issues.apache.org/jira/browse/HADOOP-6886
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs
>Affects Versions: 0.20-append
>Reporter: Nicolas Spiegelberg
>Priority: Minor
> Fix For: 0.20.205.1
>
> Attachments: HADOOP-6886-branch-0.20-security.patch, 
> HADOOP-6886-branch-0.20-security.patch, HADOOP-6886_20-append.patch
>
>
> While running sanity check tests for HBASE-2312, I noticed that HDFS-617 did 
> not include createNonRecursive() support for the LocalFileSystem.  This is a 
> problem for HBase, which allows the user to run over the LocalFS instead of 
> HDFS for local cluster testing.  I think this only affects 0.20-append, but 
> may affect the trunk based upon how exactly FileContext handles non-recursive 
> creates.





[jira] [Updated] (HADOOP-7809) Backport HADOOP-5839 to 0.20-security - fixes to ec2 scripts to allow remote job submission

2011-11-08 Thread Matt Foley (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-7809?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt Foley updated HADOOP-7809:
---

Attachment: (was: 5839.1.patch)

> Backport HADOOP-5839 to 0.20-security - fixes to ec2 scripts to allow remote 
> job submission
> ---
>
> Key: HADOOP-7809
> URL: https://issues.apache.org/jira/browse/HADOOP-7809
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: contrib/cloud
>Reporter: Joydeep Sen Sarma
>Assignee: Matt Foley
> Attachments: hadoop-5839.2.patch
>
>
> The fix for HADOOP-5839 was committed to 0.21 more than a year ago.  This bug 
> is to backport the change (which is only 14 lines) to branch-0.20-security.
> ===
> Original description:
> i would very much like the option of submitting jobs from a workstation 
> outside ec2 to a hadoop cluster in ec2. This has been explored here:
> http://www.nabble.com/public-IP-for-datanode-on-EC2-tt19336240.html
> the net result of this is that we can make this work (along with using a 
> socks proxy) with a couple of changes in the ec2 scripts:
> a) use public 'hostname' for fs.default.name setting (instead of the private 
> hostname being used currently)
> b) mark hadoop.rpc.socket.factory.class.default as final variable in the 
> generated hadoop-site.xml (that applies to server side)
> #a has no downside as far as i can tell since public hostnames resolve to 
> internal/private IP addresses within ec2 (so traffic is optimally routed).
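Point (b) would look roughly like this in the generated hadoop-site.xml. The value shown is Hadoop's standard default socket factory; treat the snippet as a sketch rather than the committed change:

```xml
<property>
  <name>hadoop.rpc.socket.factory.class.default</name>
  <value>org.apache.hadoop.net.StandardSocketFactory</value>
  <!-- final: job configs cannot override the factory on the server side,
       so the client's SOCKS-proxy factory stays client-only -->
  <final>true</final>
</property>
```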





[jira] [Updated] (HADOOP-7809) Backport HADOOP-5839 to 0.20-security - fixes to ec2 scripts to allow remote job submission

2011-11-08 Thread Matt Foley (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-7809?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt Foley updated HADOOP-7809:
---

 Description: 
The fix for HADOOP-5839 was committed to 0.21 more than a year ago.  This bug 
is to backport the change (which is only 14 lines) to branch-0.20-security.
===
Original description:
i would very much like the option of submitting jobs from a workstation outside 
ec2 to a hadoop cluster in ec2. This has been explored here:

http://www.nabble.com/public-IP-for-datanode-on-EC2-tt19336240.html

the net result of this is that we can make this work (along with using a socks 
proxy) with a couple of changes in the ec2 scripts:
a) use public 'hostname' for fs.default.name setting (instead of the private 
hostname being used currently)
b) mark hadoop.rpc.socket.factory.class.default as final variable in the 
generated hadoop-site.xml (that applies to server side)

#a has no downside as far as i can tell since public hostnames resolve to 
internal/private IP addresses within ec2 (so traffic is optimally routed).

  was:
i would very much like the option of submitting jobs from a workstation outside 
ec2 to a hadoop cluster in ec2. This has been explored here:

http://www.nabble.com/public-IP-for-datanode-on-EC2-tt19336240.html

the net result of this is that we can make this work (along with using a socks 
proxy) with a couple of changes in the ec2 scripts:
a) use public 'hostname' for fs.default.name setting (instead of the private 
hostname being used currently)
b) mark hadoop.rpc.socket.factory.class.default as final variable in the 
generated hadoop-site.xml (that applies to server side)

#a has no downside as far as i can tell since public hostnames resolve to 
internal/private IP addresses within ec2 (so traffic is optimally routed).

Target Version/s: 0.20.205.1
   Fix Version/s: (was: 0.21.0)
Assignee: Matt Foley  (was: Joydeep Sen Sarma)
Hadoop Flags:   (was: Reviewed)

> Backport HADOOP-5839 to 0.20-security - fixes to ec2 scripts to allow remote 
> job submission
> ---
>
> Key: HADOOP-7809
> URL: https://issues.apache.org/jira/browse/HADOOP-7809
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: contrib/cloud
>Reporter: Joydeep Sen Sarma
>Assignee: Matt Foley
> Attachments: 5839.1.patch, hadoop-5839.2.patch
>
>
> The fix for HADOOP-5839 was committed to 0.21 more than a year ago.  This bug 
> is to backport the change (which is only 14 lines) to branch-0.20-security.
> ===
> Original description:
> i would very much like the option of submitting jobs from a workstation 
> outside ec2 to a hadoop cluster in ec2. This has been explored here:
> http://www.nabble.com/public-IP-for-datanode-on-EC2-tt19336240.html
> the net result of this is that we can make this work (along with using a 
> socks proxy) with a couple of changes in the ec2 scripts:
> a) use public 'hostname' for fs.default.name setting (instead of the private 
> hostname being used currently)
> b) mark hadoop.rpc.socket.factory.class.default as final variable in the 
> generated hadoop-site.xml (that applies to server side)
> #a has no downside as far as i can tell since public hostnames resolve to 
> internal/private IP addresses within ec2 (so traffic is optimally routed).





[jira] [Updated] (HADOOP-7730) Allow TestCLI to be run against a cluster

2011-11-01 Thread Matt Foley (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-7730?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt Foley updated HADOOP-7730:
---

Target Version/s: 0.20.206.0, 0.22.0

> Allow TestCLI to be run against a cluster
> -
>
> Key: HADOOP-7730
> URL: https://issues.apache.org/jira/browse/HADOOP-7730
> Project: Hadoop Common
>  Issue Type: Test
>  Components: test
>Affects Versions: 0.20.205.0, 0.22.0
>Reporter: Konstantin Boudnik
>Assignee: Konstantin Boudnik
> Fix For: 0.22.0
>
> Attachments: HADOOP-7730.patch, HADOOP-7730.trunk.patch, 
> HADOOP-7730.trunk.patch
>
>
> Use the same CLI test to test cluster bits (see HDFS-1762 for more info)





[jira] [Updated] (HADOOP-7723) Automatically generate good Release Notes

2011-10-31 Thread Matt Foley (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-7723?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt Foley updated HADOOP-7723:
---

 Description: In branch-0.20-security, there is a tool 
src/docs/relnotes.py, that automatically generates Release Notes.  Fix 
deficiencies and port it up to trunk.  (was: The current tool for generating 
release notes, relnotes-gen.py, describes all the Jiras fixed in the release.  
Jiras with non-empty "Release Note" field show the release note, others show 
the Description field.  They are sorted in reverse-numerical order.  I propose 
the following changes:

# List the jiras with Release Notes first.  These are usually the larger or 
incompatible changes that most readers will care about most.  Then list the 
other jiras with their descriptions.
# Sort in forward numerical order.
# Limit description lengths to 500 characters, but print the full Release Notes 
for any jira that has them.
# Generate lists of jiras by combining info from the Jira database (Fixed jiras 
with Fixed Version of X.Y.Z release) and the CHANGES.txt file (additions since 
the last release).  This will accomodate jiras that have been fixed in the 
current new release, but can't be marked Resolved due to being pending on other 
branch(es).)
Target Version/s: 0.20.205.0, 0.23.0  (was: 0.23.0, 0.20.205.0)
 Summary: Automatically generate good Release Notes  (was: Modify 
relnotes-gen.py to list "Release Notes" separately from "Other Issues Fixed", 
and other improvements)

Many of our peer projects generate trivial "release notes" as a list of bugs 
fixed, giving the bug number and one-line description, for instance as 
auto-generated by Jira under:
bq.   Project > Road Map (or Change Log) > Release Notes
e.g., for 0.20.205.0:
bq.   
https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12310942&version=12316392
 

However, the Hadoop project has tried to do better than this, by actually 
collecting "Release Note" field values from fixed bugs (or Description fields 
from bugs with empty Release Note fields), and presenting them in the form of 
the releasenotes.html document template, e.g., for 0.20.205.0:
bq.   http://hadoop.apache.org/common/docs/r0.20.205.0/releasenotes.html

When doing the notes for 0.20.205.0, I found that the tool for doing these 
collected Release Notes (src/docs/relnotes.py) was broken in several 
respects:
* It was inconsistent with the documented process in HowToRelease, because it 
wanted bug lists piped in somewhat differently.
* It assumed that Jira's report on "Resolved" bugs was sufficient, while that 
list often differs somewhat from CHANGES.txt.  In particular, bugs held open 
for ports to other branches would not be reported as Resolved in the current 
branch.
* Most critically, the feature to extract the "Release Note" field from jira 
issues doesn't work unless the person running it has top-level Jira admin privs 
(not just admin privs for the Hadoop projects).  This restriction is built into 
the Jira CLI tool ('jira.sh').

I fixed these issues, and will submit the improved tool for review.  It now 
does the following:
* Query Jira for bugs resolved in the current release.
* Query CHANGES.txt for bugs resolved in the current release.
* Merge and diff the two lists, reporting the result and giving the Release 
Manager an opportunity to resolve the variances.
* Look up the Release Note field for each resolved bug, scraping it from a 
'curl' call rather than the admin-restricted Jira CLI tool.
* If there is no Release Note, use the Description field but limit it to the 
first 500 characters, in case the Description is long.
* Format as before.

I also suggest these enhancements:
* List the jiras with Release Notes first.  These are usually the larger or 
incompatible changes that most readers will care about most.  Then list the 
other jiras with their descriptions.
* Sort in forward numerical order, instead of reverse.
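The "merge and diff the two lists" step above can be sketched with standard tools; the file names and issue keys here are illustrative, not the tool's actual inputs:

```shell
# Compare JIRA's resolved list against CHANGES.txt so the Release
# Manager can resolve the variances by hand.
printf 'HADOOP-7602\nHADOOP-7615\nHADOOP-7661\n' | sort > /tmp/from_jira.txt
printf 'HADOOP-7602\nHADOOP-7661\nHADOOP-7723\n' | sort > /tmp/from_changes.txt

# -3 suppresses lines common to both files: column 1 is "in JIRA only",
# column 2 (tab-indented) is "in CHANGES.txt only".
comm -3 /tmp/from_jira.txt /tmp/from_changes.txt
```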


> Automatically generate good Release Notes
> -
>
> Key: HADOOP-7723
> URL: https://issues.apache.org/jira/browse/HADOOP-7723
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 0.20.204.0, 0.23.0
>Reporter: Matt Foley
>Assignee: Matt Foley
>
> In branch-0.20-security, there is a tool src/docs/relnotes.py, that 
> automatically generates Release Notes.  Fix deficiencies and port it up to 
> trunk.





[jira] [Updated] (HADOOP-6882) Update the patch level of Jetty

2011-10-19 Thread Matt Foley (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-6882?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt Foley updated HADOOP-6882:
---

Target Version/s: 0.20.206.0, 0.23.0  (was: 0.20.206.0)

Was the intended fix version 0.23.0 rather than 0.20.3?

> Update the patch level of Jetty
> ---
>
> Key: HADOOP-6882
> URL: https://issues.apache.org/jira/browse/HADOOP-6882
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Owen O'Malley
>Assignee: Owen O'Malley
> Fix For: 0.20.3
>
> Attachments: h-6882-20.patch, h-6882-c.patch, h-6882-h.patch, 
> h-6882-mr.patch
>
>
> I'd like to move to a newer patch level of Jetty. 6.1.23 (instead of our 
> current 6.1.14) has been suggested. As seen in 
> http://svn.codehaus.org/jetty/jetty/branches/jetty-6.1/VERSION.txt, that 
> represents 18 months of bug fixes.





[jira] [Updated] (HADOOP-6882) Update the patch level of Jetty

2011-10-19 Thread Matt Foley (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-6882?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt Foley updated HADOOP-6882:
---

Target Version/s: 0.20.206.0  (was: 0.20.205.0)

> Update the patch level of Jetty
> ---
>
> Key: HADOOP-6882
> URL: https://issues.apache.org/jira/browse/HADOOP-6882
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Owen O'Malley
>Assignee: Owen O'Malley
> Fix For: 0.20.3
>
> Attachments: h-6882-20.patch, h-6882-c.patch, h-6882-h.patch, 
> h-6882-mr.patch
>
>
> I'd like to move to a newer patch level of Jetty. 6.1.23 (instead of our 
> current 6.1.14) has been suggested. As seen in 
> http://svn.codehaus.org/jetty/jetty/branches/jetty-6.1/VERSION.txt, that 
> represents 18 months of bug fixes.





[jira] [Updated] (HADOOP-7748) Print exception message when failed to move to trash.

2011-10-18 Thread Matt Foley (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-7748?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt Foley updated HADOOP-7748:
---

Target Version/s: 0.20.206.0
   Fix Version/s: (was: 0.20.205.0)

> Print exception message when  failed to move to trash.
> --
>
> Key: HADOOP-7748
> URL: https://issues.apache.org/jira/browse/HADOOP-7748
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 0.20.204.0
>Reporter: Liyin Liang
>Priority: Trivial
> Attachments: 7748.diff
>
>
> When a move to trash fails, the client should print the exception message.





[jira] [Updated] (HADOOP-7661) FileSystem.getCanonicalServiceName throws NPE for any file system uri that doesn't have an authority.

2011-10-18 Thread Matt Foley (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-7661?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt Foley updated HADOOP-7661:
---

Target Version/s: 0.20.205.0, 0.23.0

> FileSystem.getCanonicalServiceName throws NPE for any file system uri that 
> doesn't have an authority.
> -
>
> Key: HADOOP-7661
> URL: https://issues.apache.org/jira/browse/HADOOP-7661
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 0.20.205.0
>Reporter: Jitendra Nath Pandey
>Assignee: Jitendra Nath Pandey
> Fix For: 0.20.205.0
>
> Attachments: HADOOP-7661.20s.1.patch, HADOOP-7661.20s.2.patch, 
> HADOOP-7661.20s.3.patch
>
>
> FileSystem.getCanonicalServiceName throws NPE for any file system uri that 
> doesn't have an authority. 
> 
> java.lang.NullPointerException
> at org.apache.hadoop.net.NetUtils.createSocketAddr(NetUtils.java:138)
> at 
> org.apache.hadoop.security.SecurityUtil.buildDTServiceName(SecurityUtil.java:261)
> at 
> org.apache.hadoop.fs.FileSystem.getCanonicalServiceName(FileSystem.java:174)
> 





[jira] [Updated] (HADOOP-7615) Binary layout does not put share/hadoop/contrib/*.jar into the class path

2011-10-18 Thread Matt Foley (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-7615?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt Foley updated HADOOP-7615:
---

Target Version/s: 0.20.205.0, 0.23.0

> Binary layout does not put share/hadoop/contrib/*.jar into the class path
> -
>
> Key: HADOOP-7615
> URL: https://issues.apache.org/jira/browse/HADOOP-7615
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: scripts
>Affects Versions: 0.20.204.0, 0.23.0
> Environment: Java, Linux
>Reporter: Eric Yang
>Assignee: Eric Yang
> Fix For: 0.20.205.0
>
> Attachments: HADOOP-7615.patch
>
>
> For contrib projects, contrib jar files are not included in HADOOP_CLASSPATH 
> in the binary layout.  Several projects' jar files should be copied to 
> $HADOOP_PREFIX/share/hadoop/lib for binary deployment.  The interesting jar 
> files to include in $HADOOP_PREFIX/share/hadoop/lib are: capacity-scheduler, 
> thriftfs, fairscheduler.





[jira] [Updated] (HADOOP-7602) wordcount, sort etc on har files fails with NPE

2011-10-18 Thread Matt Foley (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-7602?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt Foley updated HADOOP-7602:
---

Resolution: Fixed
Status: Resolved  (was: Patch Available)

If trunk versions are no longer targeted for this jira, it can be resolved.

> wordcount, sort etc on har files fails with NPE
> ---
>
> Key: HADOOP-7602
> URL: https://issues.apache.org/jira/browse/HADOOP-7602
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 0.20.205.0
>Reporter: John George
>Assignee: John George
> Fix For: 0.20.205.0
>
> Attachments: HADOOP-7602.patch, hadoop-7602.1.patch, 
> hadoop-7602.2.patch, hadoop-7602.3.patch, hadoop-7602.4.patch, 
> hadoop-7602.6.patch, hadoop-7602.7.patch, hadoop-7602.8.patch, 
> hadoop-7602.9.patch, hadoop-7602.daryns_comment.2.patch, 
> hadoop-7602.daryns_comment.patch, hadoop-7602.patch, 
> hadoop-7602.trunk.1.patch, hadoop-7602.trunk.2.patch, 
> hadoop-7602.trunk.patch, hadoop-7602.trunk.patch
>
>
> wordcount, sort etc on har files fails with 
> NPE@createSocketAddr(NetUtils.java:137). 





[jira] [Updated] (HADOOP-7548) ant binary target fails if native has not been built

2011-10-18 Thread Matt Foley (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-7548?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt Foley updated HADOOP-7548:
---

Target Version/s: 0.20.206.0
   Fix Version/s: (was: 0.20.205.0)

> ant binary target fails if native has not been built
> 
>
> Key: HADOOP-7548
> URL: https://issues.apache.org/jira/browse/HADOOP-7548
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 0.20.205.0
>Reporter: Eli Collins
>
> The "binary" target on branch-0.20-security fails with the following; it 
> assumes the native dir exists.
> BUILD FAILED
> /home/eli/src/hadoop-branch-0.20-security/build.xml:1572: 
> /home/eli/src/hadoop-branch-0.20-security/build/hadoop-0.20.206.0-SNAPSHOT/native
>  not found.
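A guard of the kind the fix would need can be sketched in shell: check for the native output directory before packaging it, instead of assuming it exists. The directory layout below mirrors the error message; the check itself is an assumption about the fix, not the committed change.

```shell
# Sketch: tolerate a missing native build during the binary/package step.
# A temp dir stands in for build/hadoop-0.20.206.0-SNAPSHOT.
BUILD_DIR="$(mktemp -d)"
NATIVE_DIR="$BUILD_DIR/native"

if [ -d "$NATIVE_DIR" ]; then
  native_status="present"     # package the native libs
else
  native_status="missing"     # skip instead of failing the build
fi
echo "native dir is $native_status"
```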





[jira] [Updated] (HADOOP-7297) Error in the documentation regarding Checkpoint/Backup Node

2011-10-18 Thread Matt Foley (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-7297?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt Foley updated HADOOP-7297:
---

Target Version/s: 0.20.206.0
   Fix Version/s: (was: 0.20.205.0)

> Error in the documentation regarding Checkpoint/Backup Node
> ---
>
> Key: HADOOP-7297
> URL: https://issues.apache.org/jira/browse/HADOOP-7297
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 0.20.203.0
>Reporter: arnaud p
>Assignee: Harsh J
>Priority: Trivial
> Fix For: 0.20.203.1
>
> Attachments: hadoop-7297.patch, hadoop-7297.patch
>
>
> On 
> http://hadoop.apache.org/common/docs/r0.20.203.0/hdfs_user_guide.html#Checkpoint+Node:
>  the command bin/hdfs namenode -checkpoint required to launch the 
> backup/checkpoint node does not exist.
> I have removed this from the docs.





[jira] [Updated] (HADOOP-7603) Set default hdfs, mapred uid, and hadoop group gid for RPM packages

2011-10-11 Thread Matt Foley (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-7603?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt Foley updated HADOOP-7603:
---

Release Note: Set hdfs uid, mapred uid, and hadoop gid to fixed numbers 
(201, 202, and 123, respectively).  (was: Set hdfs, mapred uid, and hadoop uid 
to fixed numbers. (Eric Yang))

> Set default hdfs, mapred uid, and hadoop group gid for RPM packages
> ---
>
> Key: HADOOP-7603
> URL: https://issues.apache.org/jira/browse/HADOOP-7603
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 0.20.205.0, 0.23.0
> Environment: Java, Redhat EL, Ubuntu
>Reporter: Eric Yang
>Assignee: Eric Yang
> Fix For: 0.20.205.0, 0.23.0
>
> Attachments: HADOOP-7603-trunk.patch, HADOOP-7603.patch
>
>
> The Hadoop rpm package creates the hdfs and mapred users and the hadoop group 
> to automatically set up the pid directory and log directory with proper 
> permissions.  The default headless users should have fixed uid and gid 
> numbers defined.
> Searching through the standard uids and gids on both the Redhat and Debian 
> distros, it looks like:
> {noformat}
> uid: 201 for hdfs
> uid: 202 for mapred
> gid: 49 for hadoop
> {noformat}
> would be free for use.
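The corresponding account creation can be sketched as the commands an RPM scriptlet would run, using the ids the release note above settles on (uid 201 for hdfs, uid 202 for mapred, gid 123 for hadoop). The commands are printed rather than executed here; running them requires root.

```shell
# Sketch of fixed-id account creation for the Hadoop RPM (ids per the
# release note: 201/202/123).  Echoed, not executed.
HDFS_UID=201
MAPRED_UID=202
HADOOP_GID=123

echo "groupadd -g $HADOOP_GID hadoop"
echo "useradd -u $HDFS_UID -g hadoop hdfs"
echo "useradd -u $MAPRED_UID -g hadoop mapred"
```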





[jira] [Updated] (HADOOP-7724) hadoop-setup-conf.sh should put proxy user info into the core-site.xml

2011-10-06 Thread Matt Foley (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-7724?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt Foley updated HADOOP-7724:
---

Fix Version/s: 0.23.0
   0.20.205.0

> hadoop-setup-conf.sh should put proxy user info into the core-site.xml 
> ---
>
> Key: HADOOP-7724
> URL: https://issues.apache.org/jira/browse/HADOOP-7724
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 0.20.205.0
>Reporter: Giridharan Kesavan
>Assignee: Arpit Gupta
> Fix For: 0.20.205.0, 0.23.0
>
> Attachments: HADOOP-7724.branch-0.20-security.patch, 
> HADOOP-7724.patch, HADOOP-7724.patch
>
>
> proxy user info should go to the core-site.xml instead of the hdfs-site.xml
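What "proxy user info in core-site.xml" looks like can be sketched as follows. The property names follow Hadoop's standard `hadoop.proxyuser.<user>.hosts` / `.groups` convention; the proxy user "oozie" and the host/group values are illustrative, not taken from the patch.

```shell
# Sketch: write proxyuser settings into core-site.xml (not hdfs-site.xml).
CONF_DIR="$(mktemp -d)"
cat > "$CONF_DIR/core-site.xml" <<'EOF'
<?xml version="1.0"?>
<configuration>
  <property>
    <name>hadoop.proxyuser.oozie.hosts</name>
    <value>host1.example.com</value>
  </property>
  <property>
    <name>hadoop.proxyuser.oozie.groups</name>
    <value>users</value>
  </property>
</configuration>
EOF
echo "wrote $CONF_DIR/core-site.xml"
```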





[jira] [Updated] (HADOOP-7721) dfs.web.authentication.kerberos.principal expects the full hostname and does not replace _HOST with the hostname

2011-10-06 Thread Matt Foley (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-7721?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt Foley updated HADOOP-7721:
---

Target Version/s: 0.20.205.0, 0.24.0  (was: 0.24.0, 0.20.205.1)

> dfs.web.authentication.kerberos.principal expects the full hostname and does 
> not replace _HOST with the hostname
> 
>
> Key: HADOOP-7721
> URL: https://issues.apache.org/jira/browse/HADOOP-7721
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 0.20.205.0, 0.24.0
>Reporter: Arpit Gupta
>Assignee: Jitendra Nath Pandey
> Fix For: 0.20.205.0
>
> Attachments: HADOOP-7721-20s.1.patch, 
> HADOOP-7721-branch-0.20-security.patch, HADOOP-7721-branch-0.20-security.patch
>
>
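The `_HOST` expansion the title refers to can be sketched in shell: Hadoop replaces the `_HOST` token in a Kerberos principal with the local fully-qualified hostname. The principal pattern follows Hadoop convention; the hostname and realm here are illustrative.

```shell
# Sketch of _HOST substitution in a Kerberos principal.
principal_pattern='HTTP/_HOST@EXAMPLE.COM'
fqdn='nn1.example.com'   # Hadoop derives this from the local hostname

expanded=$(printf '%s\n' "$principal_pattern" | sed "s/_HOST/$fqdn/")
echo "$expanded"
```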






[jira] [Updated] (HADOOP-7707) improve config generator to allow users to specify proxy user, turn append on or off, turn webhdfs on or off

2011-10-06 Thread Matt Foley (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-7707?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt Foley updated HADOOP-7707:
---

 Target Version/s: 0.20.205.0, 0.23.0  (was: 0.23.0, 0.20.205.0)
Affects Version/s: 0.23.0
Fix Version/s: 0.23.0
   0.20.205.0

> improve config generator to allow users to specify proxy user, turn append on 
> or off, turn webhdfs on or off
> 
>
> Key: HADOOP-7707
> URL: https://issues.apache.org/jira/browse/HADOOP-7707
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: conf
>Affects Versions: 0.20.205.0, 0.23.0
>Reporter: Arpit Gupta
>Assignee: Arpit Gupta
> Fix For: 0.20.205.0, 0.23.0
>
> Attachments: HADOOP-7707-1.patch, HADOOP-7707.20s-1.patch, 
> HADOOP-7707.20s-2.patch, HADOOP-7707.20s-3.patch, HADOOP-7707.20s.patch, 
> HADOOP-7707.patch
>
>






[jira] [Updated] (HADOOP-7721) dfs.web.authentication.kerberos.principal expects the full hostname and does not replace _HOST with the hostname

2011-10-06 Thread Matt Foley (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-7721?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt Foley updated HADOOP-7721:
---

Target Version/s: 0.20.205.1, 0.24.0  (was: 0.20.205.1)
   Fix Version/s: (was: 0.24.0)
  (was: 0.20.206.0)

Bug is still open for trunk fix.

> dfs.web.authentication.kerberos.principal expects the full hostname and does 
> not replace _HOST with the hostname
> 
>
> Key: HADOOP-7721
> URL: https://issues.apache.org/jira/browse/HADOOP-7721
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 0.20.205.0, 0.24.0
>Reporter: Arpit Gupta
>Assignee: Jitendra Nath Pandey
> Fix For: 0.20.205.0
>
> Attachments: HADOOP-7721-20s.1.patch, 
> HADOOP-7721-branch-0.20-security.patch, HADOOP-7721-branch-0.20-security.patch
>
>






[jira] [Updated] (HADOOP-7602) wordcount, sort etc on har files fails with NPE

2011-10-01 Thread Matt Foley (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-7602?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt Foley updated HADOOP-7602:
---

 Target Version/s: 0.20.205.0, 0.24.0
Affects Version/s: (was: 0.20.206.0)
   0.23.0
Fix Version/s: (was: 0.24.0)

Hi John, please bring the trunk patch up to snuff so it will pass test-patch.  
Thanks.

> wordcount, sort etc on har files fails with NPE
> ---
>
> Key: HADOOP-7602
> URL: https://issues.apache.org/jira/browse/HADOOP-7602
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 0.20.205.0, 0.23.0
>Reporter: John George
>Assignee: John George
> Fix For: 0.20.205.0
>
> Attachments: hadoop-7602.1.patch, hadoop-7602.2.patch, 
> hadoop-7602.3.patch, hadoop-7602.4.patch, hadoop-7602.6.patch, 
> hadoop-7602.7.patch, hadoop-7602.8.patch, hadoop-7602.9.patch, 
> hadoop-7602.daryns_comment.2.patch, hadoop-7602.daryns_comment.patch, 
> hadoop-7602.patch, hadoop-7602.trunk.1.patch, hadoop-7602.trunk.patch, 
> hadoop-7602.trunk.patch
>
>
> wordcount, sort etc on har files fails with 
> NPE@createSocketAddr(NetUtils.java:137). 





[jira] [Updated] (HADOOP-7400) HdfsProxyTests fails when the -Dtest.build.dir and -Dbuild.test is set

2011-10-01 Thread Matt Foley (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-7400?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt Foley updated HADOOP-7400:
---

Affects Version/s: (was: 0.20.206.0)
   0.20.205.0
Fix Version/s: (was: 0.20.206.0)
   0.20.205.0

Corrected version - this jira was actually fixed in 0.20.205.0.

> HdfsProxyTests fails when the -Dtest.build.dir and -Dbuild.test is set 
> ---
>
> Key: HADOOP-7400
> URL: https://issues.apache.org/jira/browse/HADOOP-7400
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 0.20.205.0
>Reporter: Giridharan Kesavan
>Assignee: Giridharan Kesavan
> Fix For: 0.20.205.0
>
> Attachments: HADOOP-7400.patch, HADOOP-7400.patch
>
>
> HdfsProxyTests fail when -Dtest.build.dir and -Dbuild.test are set to a dir 
> other than the build dir
> test-junit:
>  [copy] Copying 1 file to 
> /home/y/var/builds/thread2/workspace/Cloud-Hadoop-0.20.1xx-Secondary/src/contrib/hdfsproxy/src/test/resources/proxy-config
> [junit] Running org.apache.hadoop.hdfsproxy.TestHdfsProxy
> [junit] Tests run: 1, Failures: 0, Errors: 1, Time elapsed: 0 sec
> [junit] Test org.apache.hadoop.hdfsproxy.TestHdfsProxy FAILED





[jira] [Updated] (HADOOP-6889) Make RPC to have an option to timeout

2011-10-01 Thread Matt Foley (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-6889?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt Foley updated HADOOP-6889:
---

Target Version/s: 0.20-append, 0.20.205.0, 0.22.0, 0.23.0
   Fix Version/s: (was: 0.20-append)

If no one is going to port this to 0.20-append, we should remove that version 
from the "Target Versions" list and close this jira.

> Make RPC to have an option to timeout
> -
>
> Key: HADOOP-6889
> URL: https://issues.apache.org/jira/browse/HADOOP-6889
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: ipc
>Affects Versions: 0.22.0
>Reporter: Hairong Kuang
>Assignee: John George
> Fix For: 0.20.205.0, 0.22.0, 0.23.0
>
> Attachments: HADOOP-6889-for-20security.patch, 
> HADOOP-6889-for20.2.patch, HADOOP-6889-for20.3.patch, 
> HADOOP-6889-for20.patch, HADOOP-6889-fortrunk-2.patch, 
> HADOOP-6889-fortrunk.patch, HADOOP-6889.patch, ipcTimeout.patch, 
> ipcTimeout1.patch, ipcTimeout2.patch
>
>
> Currently Hadoop RPC does not time out when the RPC server is alive. What it 
> currently does is that an RPC client sends a ping to the server whenever a 
> socket timeout happens. If the server is still alive, it continues to wait 
> instead of throwing a SocketTimeoutException. This is to avoid a client 
> retrying when a server is busy and thus making the server even busier. This 
> works great if the RPC server is the NameNode.
> But Hadoop RPC is also used for some client to DataNode communications, 
> for example, for getting a replica's length. When a client comes across a 
> problematic DataNode, it gets stuck and cannot switch to a different 
> DataNode. In this case, it would be better for the client to receive a timeout 
> exception.
> I plan to add a new configuration ipc.client.max.pings that specifies the max 
> number of pings that a client could try. If a response cannot be received 
> after the specified max number of pings, a SocketTimeoutException is thrown. 
> If this configuration property is not set, a client maintains the current 
> semantics, waiting forever.
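The proposed semantics can be simulated with a small loop: each socket timeout costs one ping, and after the configured maximum the client gives up instead of waiting forever. Nothing here touches a real socket; `max_pings` stands in for the proposed `ipc.client.max.pings` setting, and the server is simulated as never answering.

```shell
# Simulation of the proposed bounded-ping behaviour.
max_pings=3            # stand-in for the proposed ipc.client.max.pings
pings=0
outcome="waiting"

while [ "$outcome" = "waiting" ]; do
  pings=$((pings + 1))          # a real client would ping the server here
  if [ "$pings" -ge "$max_pings" ]; then
    outcome="timeout"           # client would throw SocketTimeoutException
  fi
done
echo "$outcome after $pings pings"
```

With the property unset, the loop above would simply never terminate, which matches the current wait-forever behaviour the description keeps as the default.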




