[jira] [Commented] (HADOOP-8777) Retrieve job id on execution of a job

2012-09-11 Thread Nelson Paul (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8777?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13453783#comment-13453783
 ] 

Nelson Paul commented on HADOOP-8777:
-

I'm working on a different aspect: trying to run a job (jar file) on a
remote server using a Java program.

Currently I do this over SSH and parse the output stream (console) to read the
job id. Hadoop could provide an API for retrieving the job id when a job (jar)
is executed.
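Until such an API exists, the console-parsing approach described above can be sketched as follows. The `Running job:` log line and the job-id format are assumptions based on typical Hadoop 1.x JobClient output, not a guaranteed interface:

```python
import re

# Hadoop 1.x JobClient console output typically contains a line like
# "INFO mapred.JobClient: Running job: job_201209101559_0001" (assumed format).
JOB_ID_RE = re.compile(r"Running job:\s+(job_\d+_\d+)")

def extract_job_id(console_output):
    """Return the first job id found in the captured console stream, or None."""
    match = JOB_ID_RE.search(console_output)
    return match.group(1) if match else None
```

The captured SSH output would be passed straight to `extract_job_id`; the fragility of this approach (a log-format change breaks it) is exactly why an API would be preferable.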

> Retrieve job id on execution of a job
> -
>
> Key: HADOOP-8777
> URL: https://issues.apache.org/jira/browse/HADOOP-8777
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Nelson Paul
>Priority: Minor
>
> A method to retrieve the job id on submitting a job (using a Java client 
> program) would help a lot. It would be easier to track a job if the job id 
> and the job could be linked in some way. Currently there is no direct way to 
> identify the job id of a particular job.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8787) KerberosAuthenticationHandler should include missing property names in configuration

2012-09-11 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8787?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13453660#comment-13453660
 ] 

Hadoop QA commented on HADOOP-8787:
---

+1 overall.  Here are the results of testing the latest attachment 
  http://issues.apache.org/jira/secure/attachment/12544748/HADOOP-8787-2.patch
  against trunk revision .

+1 @author.  The patch does not contain any @author tags.

+1 tests included.  The patch appears to include 2 new or modified test 
files.

+1 javac.  The applied patch does not increase the total number of javac 
compiler warnings.

+1 javadoc.  The javadoc tool did not generate any warning messages.

+1 eclipse:eclipse.  The patch built with eclipse:eclipse.

+1 findbugs.  The patch does not introduce any new Findbugs (version 1.3.9) 
warnings.

+1 release audit.  The applied patch does not increase the total number of 
release audit warnings.

+1 core tests.  The patch passed unit tests in 
hadoop-common-project/hadoop-auth.

+1 contrib tests.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/1439//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/1439//console

This message is automatically generated.

> KerberosAuthenticationHandler should include missing property names in 
> configuration
> 
>
> Key: HADOOP-8787
> URL: https://issues.apache.org/jira/browse/HADOOP-8787
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 1.0.3, 3.0.0, 2.0.1-alpha
>Reporter: Todd Lipcon
>Assignee: Ted Malaska
>Priority: Minor
>  Labels: newbie
> Attachments: HADOOP-8787-0.patch, HADOOP-8787-1.patch, 
> HADOOP-8787-2.patch
>
>
> Currently, if the spnego keytab is missing from the configuration, the user 
> gets an error like: "javax.servlet.ServletException: Principal not defined in 
> configuration". This should be augmented to actually show the configuration 
> variable which is missing. Otherwise it is hard for a user to know what to 
> fix.
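The improvement requested above amounts to naming the missing key in the exception. A minimal sketch (the property names and function are illustrative, not Hadoop's actual API):

```python
def require_property(config, name):
    """Look up a required property, raising an error that names the missing key.

    Instead of a bare "Principal not defined in configuration", the exception
    points at the exact configuration variable the user has to fix.
    """
    value = config.get(name)
    if value is None:
        raise ValueError("Property '%s' not defined in configuration" % name)
    return value
```

With this shape, a missing spnego keytab produces a message containing the offending key rather than a generic complaint.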



[jira] [Updated] (HADOOP-2951) contrib package provides a utility to build or update an index

2012-09-11 Thread primo.w.liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-2951?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

primo.w.liu updated HADOOP-2951:


Description: 
This contrib package provides a utility to build or update an index
using Map/Reduce.

A distributed "index" is partitioned into "shards". Each shard corresponds
to a Lucene instance. org.apache.hadoop.contrib.index.main.UpdateIndex
contains the main() method which uses a Map/Reduce job to analyze documents
and update Lucene instances in parallel.

The Map phase of the Map/Reduce job formats, analyzes and parses the input
(in parallel), while the Reduce phase collects and applies the updates to
each Lucene instance (again in parallel). The updates are applied using the
local file system where a Reduce task runs and then copied back to HDFS.
For example, if the updates caused a new Lucene segment to be created, the
new segment would be created on the local file system first, and then
copied back to HDFS.

When the Map/Reduce job completes, a "new version" of the index is ready
to be queried. It is important to note that the new version of the index
is not derived from scratch. By leveraging Lucene's update algorithm, the
new version of each Lucene instance will share as many files as possible
with the previous version.

The main() method in UpdateIndex requires the following information for
updating the shards:
  - Input formatter. This specifies how to format the input documents.
  - Analysis. This defines the analyzer to use on the input. The analyzer
determines whether a document is being inserted, updated, or deleted.
For inserts or updates, the analyzer also converts each input document
into a Lucene document.
  - Input paths. This provides the location(s) of updated documents,
e.g., HDFS files or directories, or HBase tables.
  - Shard paths, or index path with the number of shards. Either specify
the path for each shard, or specify an index path and the shards are
the sub-directories of the index directory.
  - Output path. When the update to a shard is done, a message is put here.
  - Number of map tasks.

All of the information can be specified in a configuration file. All but
the first two can also be specified as command line options. Check out
conf/index-config.xml.template for other configurable parameters.

Note: Because of the parallel nature of Map/Reduce, the behaviour of
multiple inserts, deletes or updates to the same document is undefined.

  was:
This contrib package provides a utility to build or update an index
using Map/Reduce.

A distributed "index" is partitioned into "shards". Each shard corresponds
to a Lucene instance. org.apache.hadoop.contrib.index.main.UpdateIndex
contains the main() method which uses a Map/Reduce job to analyze documents
and update Lucene instances in parallel.

The Map phase of the Map/Reduce job formats, analyzes and parses the input
(in parallel), while the Reduce phase collects and applies the updates to
each Lucene instance (again in parallel). The updates are applied using the
local file system where a Reduce task runs and then copied back to HDFS.
For example, if the updates caused a new Lucene segment to be created, the
new segment would be created on the local file system first, and then
copied back to HDFS.

When the Map/Reduce job completes, a "new version" of the index is ready
to be queried. It is important to note that the new version of the index
is not derived from scratch. By leveraging Lucene's update algorithm, the
new version of each Lucene instance will share as many files as possible
with the previous version.

The main() method in UpdateIndex requires the following information for
updating the shards:
  - Input formatter. This specifies how to format the input documents.
  - Analysis. This defines the analyzer to use on the input. The analyzer
determines whether a document is being inserted, updated, or deleted.
For inserts or updates, the analyzer also converts each input document
into a Lucene document.
  - Input paths. This provides the location(s) of updated documents,
e.g., HDFS files or directories, or HBase tables.
  - Shard paths, or index path with the number of shards. Either specify
the path for each shard, or specify an index path and the shards are
the sub-directories of the index directory.
  - Output path. When the update to a shard is done, a message is put here.
  - Number of map tasks.

All of the information can be specified in a configuration file. All but
the first two can also be specified as command line options. Check out
conf/index-config.xml.template for other configurable parameters.

Note: Because of the parallel nature of Map/Reduce, the behaviour of
multiple inserts, deletes or updates to the same document is undefined.


> A contrib package to update an index using Map/Reduce
> -----------------------------------------------------
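The sharded Map/Reduce indexing flow described in the update above can be sketched as follows. The hash-based shard assignment and the operation names are assumptions for illustration; in the real utility the configured analyzer decides insert/update/delete per document, and the reduce side applies each group to a Lucene instance:

```python
from collections import defaultdict

def map_phase(documents, num_shards):
    """Map: analyze each document and emit (shard, operation) pairs.

    Each document is a (doc_id, op) pair, op being "insert", "update",
    or "delete"; shard assignment here is simple hash partitioning.
    """
    for doc_id, op in documents:
        shard = hash(doc_id) % num_shards
        yield shard, (doc_id, op)

def reduce_phase(pairs):
    """Reduce: group updates per shard; each group would then be applied
    to that shard's Lucene instance, in parallel across reducers."""
    shards = defaultdict(list)
    for shard, update in pairs:
        shards[shard].append(update)
    return dict(shards)
```

As the description notes, applying updates on the reducer's local file system and copying segments back to HDFS keeps each Lucene instance's new version sharing as many files as possible with the previous one.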

[jira] [Updated] (HADOOP-2951) contrib package provides a utility to build or update an index

2012-09-11 Thread primo.w.liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-2951?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

primo.w.liu updated HADOOP-2951:


Description: 
This contrib package provides a utility to build or update an index
using Map/Reduce.

A distributed "index" is partitioned into "shards". Each shard corresponds
to a Lucene instance. org.apache.hadoop.contrib.index.main.UpdateIndex
contains the main() method which uses a Map/Reduce job to analyze documents
and update Lucene instances in parallel.

The Map phase of the Map/Reduce job formats, analyzes and parses the input
(in parallel), while the Reduce phase collects and applies the updates to
each Lucene instance (again in parallel). The updates are applied using the
local file system where a Reduce task runs and then copied back to HDFS.
For example, if the updates caused a new Lucene segment to be created, the
new segment would be created on the local file system first, and then
copied back to HDFS.

When the Map/Reduce job completes, a "new version" of the index is ready
to be queried. It is important to note that the new version of the index
is not derived from scratch. By leveraging Lucene's update algorithm, the
new version of each Lucene instance will share as many files as possible
with the previous version.

The main() method in UpdateIndex requires the following information for
updating the shards:
  - Input formatter. This specifies how to format the input documents.
  - Analysis. This defines the analyzer to use on the input. The analyzer
determines whether a document is being inserted, updated, or deleted.
For inserts or updates, the analyzer also converts each input document
into a Lucene document.
  - Input paths. This provides the location(s) of updated documents,
e.g., HDFS files or directories, or HBase tables.
  - Shard paths, or index path with the number of shards. Either specify
the path for each shard, or specify an index path and the shards are
the sub-directories of the index directory.
  - Output path. When the update to a shard is done, a message is put here.
  - Number of map tasks.

All of the information can be specified in a configuration file. All but
the first two can also be specified as command line options. Check out
conf/index-config.xml.template for other configurable parameters.

Note: Because of the parallel nature of Map/Reduce, the behaviour of
multiple inserts, deletes or updates to the same document is undefined.

  was:
This contrib package provides a utility to build or update an index
using Map/Reduce.

A distributed "index" is partitioned into "shards". Each shard corresponds
to a Lucene instance. org.apache.hadoop.contrib.index.main.UpdateIndex
contains the main() method which uses a Map/Reduce job to analyze documents
and update Lucene instances in parallel.

The Map phase of the Map/Reduce job formats, analyzes and parses the input
(in parallel), while the Reduce phase collects and applies the updates to
each Lucene instance (again in parallel). The updates are applied using the
local file system where a Reduce task runs and then copied back to HDFS.
For example, if the updates caused a new Lucene segment to be created, the
new segment would be created on the local file system first, and then
copied back to HDFS.

When the Map/Reduce job completes, a "new version" of the index is ready
to be queried. It is important to note that the new version of the index
is not derived from scratch. By leveraging Lucene's update algorithm, the
new version of each Lucene instance will share as many files as possible
with the previous version.

The main() method in UpdateIndex requires the following information for
updating the shards:
  - Input formatter. This specifies how to format the input documents.
  - Analysis. This defines the analyzer to use on the input. The analyzer
determines whether a document is being inserted, updated, or deleted.
For inserts or updates, the analyzer also converts each input document
into a Lucene document.
  - Input paths. This provides the location(s) of updated documents,
e.g., HDFS files or directories, or HBase tables.
  - Shard paths, or index path with the number of shards. Either specify
the path for each shard, or specify an index path and the shards are
the sub-directories of the index directory.
  - Output path. When the update to a shard is done, a message is put here.
  - Number of map tasks.

All of the information can be specified in a configuration file. All but
the first two can also be specified as command line options. Check out
conf/index-config.xml.template for other configurable parameters.

Note: Because of the parallel nature of Map/Reduce, the behaviour of
multiple inserts, deletes or updates to the same document is undefined.


> A contrib package to update an index using Map/Reduce
> -----------------------------------------------------

[jira] [Commented] (HADOOP-7139) Allow appending to existing SequenceFiles

2012-09-11 Thread Keith Wyss (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-7139?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13453657#comment-13453657
 ] 

Keith Wyss commented on HADOOP-7139:


Looking at this patch, it looks like a bunch of bookkeeping about compression 
metadata, plus support for not initializing the file with the typical 
SequenceFile header. Am I reading it correctly? Will this apply cleanly to 
CDH3U[45]? Has anyone tested it on those systems? Thank you.

> Allow appending to existing SequenceFiles
> -
>
> Key: HADOOP-7139
> URL: https://issues.apache.org/jira/browse/HADOOP-7139
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: io
>Affects Versions: 1.0.0
>Reporter: Stephen Rose
>Assignee: Stephen Rose
>Priority: Minor
> Attachments: HADOOP-7139-kt.patch, HADOOP-7139.patch, 
> HADOOP-7139.patch, HADOOP-7139.patch, HADOOP-7139.patch
>
>   Original Estimate: 2h
>  Remaining Estimate: 2h
>




[jira] [Assigned] (HADOOP-8518) SPNEGO client side should use KerberosName rules

2012-09-11 Thread Suresh Srinivas (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8518?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suresh Srinivas reassigned HADOOP-8518:
---

Assignee: Suresh Srinivas  (was: Alejandro Abdelnur)

> SPNEGO client side should use KerberosName rules
> 
>
> Key: HADOOP-8518
> URL: https://issues.apache.org/jira/browse/HADOOP-8518
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 1.0.3, 2.0.0-alpha
>Reporter: Alejandro Abdelnur
>Assignee: Suresh Srinivas
>
> Currently KerberosName is used only on the server side to resolve the client 
> name; we should use it on the client side as well to resolve the server name 
> before getting the Kerberos ticket.
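The kind of principal-name resolution KerberosName performs can be sketched minimally as below. This covers only the trivial default-realm case; the real implementation consults a configured list of `RULE:[n:pattern]` entries (the auth_to_local rule language), and the principal/realm values here are illustrative:

```python
def short_name(principal, default_realm):
    """Map a Kerberos principal to a short name using only the default rule:
    strip the realm when it is the default realm. Principals from other
    realms raise, as KerberosName does when no rule applies."""
    if "@" not in principal:
        return principal
    name, realm = principal.rsplit("@", 1)
    if realm == default_realm:
        return name.split("/", 1)[0]
    raise ValueError("no rules applied to %s" % principal)
```

The issue above proposes running this same resolution on the client side for the server's name before requesting a ticket, instead of only resolving client names on the server.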



[jira] [Updated] (HADOOP-8518) SPNEGO client side should use KerberosName rules

2012-09-11 Thread Suresh Srinivas (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8518?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suresh Srinivas updated HADOOP-8518:


Fix Version/s: (was: 2.0.2-alpha)
   (was: 1.1.0)

> SPNEGO client side should use KerberosName rules
> 
>
> Key: HADOOP-8518
> URL: https://issues.apache.org/jira/browse/HADOOP-8518
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 1.0.3, 2.0.0-alpha
>Reporter: Alejandro Abdelnur
>Assignee: Alejandro Abdelnur
>
> Currently KerberosName is used only on the server side to resolve the client 
> name; we should use it on the client side as well to resolve the server name 
> before getting the Kerberos ticket.



[jira] [Updated] (HADOOP-8516) fsck command does not work when executed on Windows Hadoop installation

2012-09-11 Thread Suresh Srinivas (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8516?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suresh Srinivas updated HADOOP-8516:


Fix Version/s: (was: 1.1.0)

> fsck command does not work when executed on Windows Hadoop installation
> ---
>
> Key: HADOOP-8516
> URL: https://issues.apache.org/jira/browse/HADOOP-8516
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Trupti Dhavle
>
> I tried to run the following command on a Windows Hadoop installation:
> hadoop fsck /tmp
> This command was run as Administrator. 
> The command fails with the following error:
> 12/06/20 00:24:55 ERROR security.UserGroupInformation: PriviledgedActionException as:Administrator cause:java.net.ConnectException: Connection refused: connect
> Exception in thread "main" java.net.ConnectException: Connection refused: connect
> at java.net.PlainSocketImpl.socketConnect(Native Method)
> at java.net.PlainSocketImpl.doConnect(PlainSocketImpl.java:351)
> at java.net.PlainSocketImpl.connectToAddress(PlainSocketImpl.java:211)
> at java.net.PlainSocketImpl.connect(PlainSocketImpl.java:200)
> at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:366)
> at java.net.Socket.connect(Socket.java:529)
> at java.net.Socket.connect(Socket.java:478)
> at sun.net.NetworkClient.doConnect(NetworkClient.java:163)
> at sun.net.www.http.HttpClient.openServer(HttpClient.java:394)
> at sun.net.www.http.HttpClient.openServer(HttpClient.java:529)
> at sun.net.www.http.HttpClient.<init>(HttpClient.java:233)
> at sun.net.www.http.HttpClient.New(HttpClient.java:306)
> at sun.net.www.http.HttpClient.New(HttpClient.java:323)
> at sun.net.www.protocol.http.HttpURLConnection.getNewHttpClient(HttpURLConnection.java:970)
> at sun.net.www.protocol.http.HttpURLConnection.plainConnect(HttpURLConnection.java:911)
> at sun.net.www.protocol.http.HttpURLConnection.connect(HttpURLConnection.java:836)
> at sun.net.www.protocol.http.HttpURLConnection.getInputStream(HttpURLConnection.java:1172)
> at org.apache.hadoop.hdfs.tools.DFSck$1.run(DFSck.java:141)
> at org.apache.hadoop.hdfs.tools.DFSck$1.run(DFSck.java:110)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:396)
> at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1103)
> at org.apache.hadoop.hdfs.tools.DFSck.run(DFSck.java:110)
> at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:65)
> at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:79)
> at org.apache.hadoop.hdfs.tools.DFSck.main(DFSck.java:182)
> /tmp is owned by Administrator
> hadoop fs -ls /
> Found 3 items
> drwxr-xr-x   - Administrator supergroup  0 2012-06-08 15:08 
> /benchmarks
> drwxrwxrwx   - Administrator supergroup  0 2012-06-11 23:00 /tmp
> drwxr-xr-x   - Administrator supergroup  0 2012-06-19 17:01 /user



[jira] [Updated] (HADOOP-8421) Verify and fix build of c++ targets in Hadoop on Windows

2012-09-11 Thread Suresh Srinivas (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8421?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suresh Srinivas updated HADOOP-8421:


Fix Version/s: (was: 1.1.0)

> Verify and fix build of c++ targets in Hadoop on Windows
> 
>
> Key: HADOOP-8421
> URL: https://issues.apache.org/jira/browse/HADOOP-8421
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 1.0.0
>Reporter: Bikas Saha
>
> There are a bunch of C++ files that are not compiled by default for legacy 
> reasons. They represent important functionality. We need to make sure they 
> build on Windows.
> There is some dependency on autoconf/autoreconf.
> Ideas from HADOOP-8368 could be used here.



[jira] [Updated] (HADOOP-8420) saveVersions.sh not working on Windows

2012-09-11 Thread Suresh Srinivas (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8420?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suresh Srinivas updated HADOOP-8420:


Fix Version/s: (was: 1.1.0)

> saveVersions.sh not working on Windows
> --
>
> Key: HADOOP-8420
> URL: https://issues.apache.org/jira/browse/HADOOP-8420
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 1.0.0
>Reporter: Bikas Saha
>
> This script is executed at build time to generate version number 
> information for Hadoop core. This version number is consumed via APIs by 
> Hive etc. to determine compatibility with Hadoop versions. Currently, because 
> of dependencies on awk, cut, and other Unix utilities, this script does not 
> run successfully on Windows, and version information is not available.



[jira] [Updated] (HADOOP-8250) Investigate uses of FileUtil and functional correctness based on current use cases

2012-09-11 Thread Suresh Srinivas (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8250?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suresh Srinivas updated HADOOP-8250:


Fix Version/s: (was: 1.1.0)

> Investigate uses of FileUtil and functional correctness based on current use 
> cases
> --
>
> Key: HADOOP-8250
> URL: https://issues.apache.org/jira/browse/HADOOP-8250
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: native
>Affects Versions: 1.1.0
>Reporter: Bikas Saha
>Assignee: Bikas Saha
>
> The current Windows patch replaces symlink with copy. This jira tracks 
> understanding the implications of this change and others like it on expected 
> functionality.



[jira] [Updated] (HADOOP-7675) Ant option to run disabled kerberos authentication tests.

2012-09-11 Thread Suresh Srinivas (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-7675?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suresh Srinivas updated HADOOP-7675:


Target Version/s: 1.1.0
   Fix Version/s: (was: 1.1.0)

> Ant option to run disabled kerberos authentication tests.
> -
>
> Key: HADOOP-7675
> URL: https://issues.apache.org/jira/browse/HADOOP-7675
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 1.1.0
>Reporter: Jitendra Nath Pandey
>
> The kerberos tests, TestKerberosAuthenticator and 
> TestKerberosAuthenticationHandler, are disabled using @Ignore. A better 
> approach would be to have an ant option to run them.



[jira] [Updated] (HADOOP-7672) TestKerberosAuthenticator should be disabled in 20 branch.

2012-09-11 Thread Suresh Srinivas (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-7672?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suresh Srinivas updated HADOOP-7672:


Fix Version/s: (was: 1.1.0)
   (was: 0.20.205.0)

> TestKerberosAuthenticator should be disabled in 20 branch.
> --
>
> Key: HADOOP-7672
> URL: https://issues.apache.org/jira/browse/HADOOP-7672
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 0.20.205.0
>Reporter: Jitendra Nath Pandey
>Assignee: Jitendra Nath Pandey
>
> TestKerberosAuthenticator is disabled in trunk. It should be disabled in 20 
> also. 
> It is not expected to pass as a unit test because it attempts a real Kerberos 
> login and expects a valid keytab.



[jira] [Updated] (HADOOP-7573) hadoop should log configuration reads

2012-09-11 Thread Suresh Srinivas (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-7573?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suresh Srinivas updated HADOOP-7573:


Target Version/s: 1.1.0
   Fix Version/s: (was: 1.1.0)

> hadoop should log configuration reads
> -
>
> Key: HADOOP-7573
> URL: https://issues.apache.org/jira/browse/HADOOP-7573
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: conf
>Affects Versions: 0.20.203.0
>Reporter: Ari Rabkin
>Assignee: Ari Rabkin
>Priority: Minor
> Attachments: HADOOP-7573.patch, HADOOP-7573.patch, HADOOP-7573.patch, 
> HADOOP-7573.patch
>
>
> For debugging, it would often be valuable to know which configuration options 
> were ever read out of the Configuration by the rest of the program -- an 
> unread option cannot have caused the problem. This patch logs the first time 
> each option is read.
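The idea above, logging each option the first time it is read, can be sketched as a thin wrapper; the class and method names here are illustrative, not Hadoop's actual Configuration API:

```python
import logging

class LoggingConfig:
    """Wrap a plain dict of options and log the first read of each key,
    so options that were never read can be ruled out when debugging."""

    def __init__(self, options):
        self._options = dict(options)
        self._read = set()

    def get(self, key, default=None):
        # Log only on the first read to keep the log noise bounded.
        if key not in self._read:
            self._read.add(key)
            logging.info("config read: %s", key)
        return self._options.get(key, default)
```

Tracking the read-set per key also keeps the overhead to one set lookup per access after the first read.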



[jira] [Updated] (HADOOP-8787) KerberosAuthenticationHandler should include missing property names in configuration

2012-09-11 Thread Ted Malaska (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8787?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Malaska updated HADOOP-8787:


Attachment: HADOOP-8787-2.patch

This is an implementation of option 1.

I need to review it with the Hadoop team tomorrow to ask for direction.  I'm not 
sure which option is best for this JIRA.

Things included in this patch:
1. A nested exception message that gives the user information about which 
properties are causing the problem.
2. A fix for the bug where a security property comes back null when a config 
prefix is used.
3. A new test covering that prefix bug. 

> KerberosAuthenticationHandler should include missing property names in 
> configuration
> 
>
> Key: HADOOP-8787
> URL: https://issues.apache.org/jira/browse/HADOOP-8787
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 1.0.3, 3.0.0, 2.0.1-alpha
>Reporter: Todd Lipcon
>Assignee: Ted Malaska
>Priority: Minor
>  Labels: newbie
> Attachments: HADOOP-8787-0.patch, HADOOP-8787-1.patch, 
> HADOOP-8787-2.patch
>
>
> Currently, if the spnego keytab is missing from the configuration, the user 
> gets an error like: "javax.servlet.ServletException: Principal not defined in 
> configuration". This should be augmented to actually show the configuration 
> variable which is missing. Otherwise it is hard for a user to know what to 
> fix.



[jira] [Commented] (HADOOP-8787) KerberosAuthenticationHandler should include missing property names in configuration

2012-09-11 Thread Ted Malaska (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8787?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13453583#comment-13453583
 ] 

Ted Malaska commented on HADOOP-8787:
-

No, option 2 is no good; it would require more widespread changes.  This being 
my first JIRA, I want to keep the changes to a minimum. 

> KerberosAuthenticationHandler should include missing property names in 
> configuration
> 
>
> Key: HADOOP-8787
> URL: https://issues.apache.org/jira/browse/HADOOP-8787
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 1.0.3, 3.0.0, 2.0.1-alpha
>Reporter: Todd Lipcon
>Assignee: Ted Malaska
>Priority: Minor
>  Labels: newbie
> Attachments: HADOOP-8787-0.patch, HADOOP-8787-1.patch
>
>
> Currently, if the spnego keytab is missing from the configuration, the user 
> gets an error like: "javax.servlet.ServletException: Principal not defined in 
> configuration". This should be augmented to actually show the configuration 
> variable which is missing. Otherwise it is hard for a user to know what to 
> fix.



[jira] [Commented] (HADOOP-8787) KerberosAuthenticationHandler should include missing property names in configuration

2012-09-11 Thread Ted Malaska (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8787?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13453575#comment-13453575
 ] 

Ted Malaska commented on HADOOP-8787:
-

Hmm, this is an interesting JIRA.  The problem is really simple, but since this 
is my first Hadoop JIRA I'm not sure of the right course of action.

I have the following options:
1. Throw nested exceptions:  The outer exception will have the prefixed name 
"dfs.web.authentication.kerberos.principal" and the inner exception will have 
the root config name "kerberos.principal".  If I were a user I wouldn't like 
this option.

2. Instead of stripping off the prefixes, I could pass the prefix into the init 
method of the AuthenticationHandler.  That way I would have the full string to 
build the original exception message.  However, I assume someone found value in 
stripping off those prefixes. 

3. I could pass the prefix to the handler for the sole reason of constructing 
the exception message.  This option doesn't smell right.

4. I could read the exception message in the AuthenticationFilter and add the 
prefix, but that seems like a hack.

So I think I'm going to go with option 2.  If anyone can think of a better 
option, please let me know.
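Option 2, passing the prefix into the handler's init alongside the stripped properties, might look like the sketch below. The class and property names are illustrative; the real filter/handler pair lives in hadoop-auth and has a different API:

```python
def strip_prefix(props, prefix):
    """Keep properties that start with prefix and strip it off, roughly as
    the AuthenticationFilter does before handing props to the handler."""
    cut = len(prefix)
    return {k[cut:]: v for k, v in props.items() if k.startswith(prefix)}

class KerberosHandlerSketch:
    """Receives the prefix together with the stripped props (option 2), so a
    missing key can be reported under its original, fully prefixed name."""

    def init(self, props, prefix):
        principal = props.get("kerberos.principal")
        if principal is None:
            # The error names the full configuration variable the user must set.
            raise ValueError("Principal not defined in configuration: "
                             + prefix + "kerberos.principal")
        self.principal = principal
```

This keeps the prefix-stripping behavior intact while still letting the error message point at the exact configuration variable, which is the whole point of the JIRA.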




> KerberosAuthenticationHandler should include missing property names in 
> configuration
> 
>
> Key: HADOOP-8787
> URL: https://issues.apache.org/jira/browse/HADOOP-8787
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 1.0.3, 3.0.0, 2.0.1-alpha
>Reporter: Todd Lipcon
>Assignee: Ted Malaska
>Priority: Minor
>  Labels: newbie
> Attachments: HADOOP-8787-0.patch, HADOOP-8787-1.patch
>
>
> Currently, if the spnego keytab is missing from the configuration, the user 
> gets an error like: "javax.servlet.ServletException: Principal not defined in 
> configuration". This should be augmented to actually show the configuration 
> variable which is missing. Otherwise it is hard for a user to know what to 
> fix.



[jira] [Commented] (HADOOP-8787) KerberosAuthenticationHandler should include missing property names in configuration

2012-09-11 Thread Ted Malaska (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8787?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13453557#comment-13453557
 ] 

Ted Malaska commented on HADOOP-8787:
-

Confirmed through JUnit tests that the trunk code will generate a random secret 
whenever a config prefix is used.

I will include that fix in my patch.

> KerberosAuthenticationHandler should include missing property names in 
> configuration
> 
>
> Key: HADOOP-8787
> URL: https://issues.apache.org/jira/browse/HADOOP-8787
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 1.0.3, 3.0.0, 2.0.1-alpha
>Reporter: Todd Lipcon
>Assignee: Ted Malaska
>Priority: Minor
>  Labels: newbie
> Attachments: HADOOP-8787-0.patch, HADOOP-8787-1.patch
>
>
> Currently, if the spnego keytab is missing from the configuration, the user 
> gets an error like: "javax.servlet.ServletException: Principal not defined in 
> configuration". This should be augmented to actually show the configuration 
> variable which is missing. Otherwise it is hard for a user to know what to 
> fix.



[jira] [Commented] (HADOOP-8787) KerberosAuthenticationHandler should include missing property names in configuration

2012-09-11 Thread Ted Malaska (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8787?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13453531#comment-13453531
 ] 

Ted Malaska commented on HADOOP-8787:
-

Thank you Todd, I didn't see that.  I will get an updated patch soon.

Also, after reviewing the init method in AuthenticationFilter, I have a question 
about line 154.

Line 154 looks like it will never return the config value for SIGNATURE_SECRET, 
because it follows line 129:

   129 Properties config = getConfiguration(configPrefix, filterConfig);

   154 String signatureSecret = config.getProperty(configPrefix + 
SIGNATURE_SECRET);

I'm going to write a test to check whether the signature secret is getting 
populated when a prefix is used.
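The mismatch can be reproduced with a small self-contained sketch. The 
getConfiguration below is a simplified stand-in for the filter's (assuming it 
copies properties while stripping the prefix, as the discussion describes), so 
a subsequent lookup that re-applies the prefix misses:

```java
import java.util.Properties;

public class PrefixBugDemo {
    static final String SIGNATURE_SECRET = "signature.secret";

    // Simplified stand-in for AuthenticationFilter#getConfiguration: copy
    // every property that starts with the prefix, stripping the prefix off.
    public static Properties getConfiguration(String prefix, Properties raw) {
        Properties config = new Properties();
        for (String name : raw.stringPropertyNames()) {
            if (name.startsWith(prefix)) {
                config.setProperty(name.substring(prefix.length()),
                                   raw.getProperty(name));
            }
        }
        return config;
    }

    public static void main(String[] args) {
        Properties raw = new Properties();
        raw.setProperty("hadoop.http.authentication." + SIGNATURE_SECRET, "s3cret");
        Properties config = getConfiguration("hadoop.http.authentication.", raw);

        // Looking the value up with the prefix again (like line 154) misses,
        // because the keys in `config` were already stripped (line 129).
        String withPrefix = config.getProperty("hadoop.http.authentication." + SIGNATURE_SECRET);
        String stripped = config.getProperty(SIGNATURE_SECRET);
        if (withPrefix != null || !"s3cret".equals(stripped)) {
            throw new AssertionError("unexpected lookup results");
        }
        System.out.println("withPrefix=" + withPrefix + " stripped=" + stripped);
    }
}
```

With the prefixed lookup always returning null, the filter would fall back to a 
random secret, matching the behavior confirmed above.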
   

> KerberosAuthenticationHandler should include missing property names in 
> configuration
> 
>
> Key: HADOOP-8787
> URL: https://issues.apache.org/jira/browse/HADOOP-8787
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 1.0.3, 3.0.0, 2.0.1-alpha
>Reporter: Todd Lipcon
>Assignee: Ted Malaska
>Priority: Minor
>  Labels: newbie
> Attachments: HADOOP-8787-0.patch, HADOOP-8787-1.patch
>
>
> Currently, if the spnego keytab is missing from the configuration, the user 
> gets an error like: "javax.servlet.ServletException: Principal not defined in 
> configuration". This should be augmented to actually show the configuration 
> variable which is missing. Otherwise it is hard for a user to know what to 
> fix.



[jira] [Commented] (HADOOP-8787) KerberosAuthenticationHandler should include missing property names in configuration

2012-09-11 Thread Todd Lipcon (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8787?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13453442#comment-13453442
 ] 

Todd Lipcon commented on HADOOP-8787:
-

Thanks for looking at this, Ted. I don't think the patch is quite sufficient, 
because the variables you've interpolated are missing the 'prefix' that is 
actually in the Hadoop configuration, i.e. it will just print that 
"kerberos.principal" is missing, rather than the full name like 
"dfs.web.authentication.kerberos.principal". You'll have to plumb the prefix 
through from AuthenticationFilter somehow to get the proper error message.

> KerberosAuthenticationHandler should include missing property names in 
> configuration
> 
>
> Key: HADOOP-8787
> URL: https://issues.apache.org/jira/browse/HADOOP-8787
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 1.0.3, 3.0.0, 2.0.1-alpha
>Reporter: Todd Lipcon
>Assignee: Ted Malaska
>Priority: Minor
>  Labels: newbie
> Attachments: HADOOP-8787-0.patch, HADOOP-8787-1.patch
>
>
> Currently, if the spnego keytab is missing from the configuration, the user 
> gets an error like: "javax.servlet.ServletException: Principal not defined in 
> configuration". This should be augmented to actually show the configuration 
> variable which is missing. Otherwise it is hard for a user to know what to 
> fix.



[jira] [Commented] (HADOOP-8597) FsShell's Text command should be able to read avro data files

2012-09-11 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8597?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13453434#comment-13453434
 ] 

Hudson commented on HADOOP-8597:


Integrated in Hadoop-Mapreduce-trunk-Commit #2743 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Commit/2743/])
HADOOP-8597. Permit FsShell's text command to read Avro files.  Contributed 
by Ivan Vladimirov. (Revision 1383607)

 Result = FAILURE
cutting : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1383607
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/shell/Display.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/shell/TestTextCommand.java


> FsShell's Text command should be able to read avro data files
> -
>
> Key: HADOOP-8597
> URL: https://issues.apache.org/jira/browse/HADOOP-8597
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs
>Affects Versions: 2.0.0-alpha
>Reporter: Harsh J
>Assignee: Ivan Vladimirov Ivanov
>  Labels: newbie
> Fix For: 2.0.3-alpha
>
> Attachments: HADOOP-8597-2.patch, HADOOP-8597.patch, 
> HADOOP-8597.patch, HADOOP-8597.patch
>
>
> Similar to SequenceFiles are Apache Avro's DataFiles. Since these are getting 
> popular as a data format, perhaps it would be useful if {{fs -text}} were to 
> add some support for reading it, like it reads SequenceFiles. Should be easy 
> since Avro is already a dependency and provides the required classes.
> Of discussion is the output we ought to emit. Avro DataFiles aren't simple as 
> text, nor have they the singular Key-Value pair structure of SequenceFiles. 
> They usually contain a set of fields defined as a record, and the usual text 
> emit, as available from avro-tools via 
> http://avro.apache.org/docs/current/api/java/org/apache/avro/tool/DataFileReadTool.html,
>  is in proper JSON format.
> I think we should use the JSON format as the output, rather than a delimited 
> form, for there are many complex structures in Avro and JSON is the easiest 
> and least-work-to-do way to display it (Avro supports json dumping by itself).
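For illustration, a JSON dump of an Avro data file along these lines might look 
like the sketch below. It requires the Avro library on the classpath and an 
existing data file; it mirrors what avro-tools' DataFileReadTool does and is 
not the committed Display.java change.

```java
import java.io.File;

import org.apache.avro.file.DataFileReader;
import org.apache.avro.generic.GenericDatumReader;
import org.apache.avro.generic.GenericRecord;

public class AvroToJson {
    public static void main(String[] args) throws Exception {
        // args[0]: path to an Avro data file
        File avroFile = new File(args[0]);
        DataFileReader<GenericRecord> reader = new DataFileReader<GenericRecord>(
            avroFile, new GenericDatumReader<GenericRecord>());
        try {
            for (GenericRecord record : reader) {
                // GenericRecord#toString() renders the record in JSON form,
                // so each record becomes one JSON line on stdout.
                System.out.println(record);
            }
        } finally {
            reader.close();
        }
    }
}
```

Because Avro records already know how to render themselves as JSON, the 
delimited-output alternative would be strictly more work for less fidelity.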



[jira] [Commented] (HADOOP-8787) KerberosAuthenticationHandler should include missing property names in configuration

2012-09-11 Thread Ted Malaska (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8787?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13453423#comment-13453423
 ] 

Ted Malaska commented on HADOOP-8787:
-

(8 to 1) That's a B+.

I didn't write a JUnit test because I only changed an exception message.  Let me 
know if you want me to add a test for this.

> KerberosAuthenticationHandler should include missing property names in 
> configuration
> 
>
> Key: HADOOP-8787
> URL: https://issues.apache.org/jira/browse/HADOOP-8787
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 1.0.3, 3.0.0, 2.0.1-alpha
>Reporter: Todd Lipcon
>Assignee: Ted Malaska
>Priority: Minor
>  Labels: newbie
> Attachments: HADOOP-8787-0.patch, HADOOP-8787-1.patch
>
>
> Currently, if the spnego keytab is missing from the configuration, the user 
> gets an error like: "javax.servlet.ServletException: Principal not defined in 
> configuration". This should be augmented to actually show the configuration 
> variable which is missing. Otherwise it is hard for a user to know what to 
> fix.



[jira] [Commented] (HADOOP-8787) KerberosAuthenticationHandler should include missing property names in configuration

2012-09-11 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8787?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13453422#comment-13453422
 ] 

Hadoop QA commented on HADOOP-8787:
---

-1 overall.  Here are the results of testing the latest attachment 
  http://issues.apache.org/jira/secure/attachment/12544705/HADOOP-8787-1.patch
  against trunk revision .

+1 @author.  The patch does not contain any @author tags.

-1 tests included.  The patch doesn't appear to include any new or modified 
tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

+1 javac.  The applied patch does not increase the total number of javac 
compiler warnings.

+1 javadoc.  The javadoc tool did not generate any warning messages.

+1 eclipse:eclipse.  The patch built with eclipse:eclipse.

+1 findbugs.  The patch does not introduce any new Findbugs (version 1.3.9) 
warnings.

+1 release audit.  The applied patch does not increase the total number of 
release audit warnings.

+1 core tests.  The patch passed unit tests in 
hadoop-common-project/hadoop-auth.

+1 contrib tests.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/1438//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/1438//console

This message is automatically generated.

> KerberosAuthenticationHandler should include missing property names in 
> configuration
> 
>
> Key: HADOOP-8787
> URL: https://issues.apache.org/jira/browse/HADOOP-8787
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 1.0.3, 3.0.0, 2.0.1-alpha
>Reporter: Todd Lipcon
>Assignee: Ted Malaska
>Priority: Minor
>  Labels: newbie
> Attachments: HADOOP-8787-0.patch, HADOOP-8787-1.patch
>
>
> Currently, if the spnego keytab is missing from the configuration, the user 
> gets an error like: "javax.servlet.ServletException: Principal not defined in 
> configuration". This should be augmented to actually show the configuration 
> variable which is missing. Otherwise it is hard for a user to know what to 
> fix.



[jira] [Updated] (HADOOP-8787) KerberosAuthenticationHandler should include missing property names in configuration

2012-09-11 Thread Aaron T. Myers (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8787?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aaron T. Myers updated HADOOP-8787:
---

Target Version/s: 2.0.3-alpha
  Status: Patch Available  (was: Open)

Marking patch available for Ted so that test-patch runs.

> KerberosAuthenticationHandler should include missing property names in 
> configuration
> 
>
> Key: HADOOP-8787
> URL: https://issues.apache.org/jira/browse/HADOOP-8787
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 2.0.1-alpha, 1.0.3, 3.0.0
>Reporter: Todd Lipcon
>Assignee: Ted Malaska
>Priority: Minor
>  Labels: newbie
> Attachments: HADOOP-8787-0.patch, HADOOP-8787-1.patch
>
>
> Currently, if the spnego keytab is missing from the configuration, the user 
> gets an error like: "javax.servlet.ServletException: Principal not defined in 
> configuration". This should be augmented to actually show the configuration 
> variable which is missing. Otherwise it is hard for a user to know what to 
> fix.



[jira] [Updated] (HADOOP-8787) KerberosAuthenticationHandler should include missing property names in configuration

2012-09-11 Thread Aaron T. Myers (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8787?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aaron T. Myers updated HADOOP-8787:
---

Assignee: Ted Malaska

> KerberosAuthenticationHandler should include missing property names in 
> configuration
> 
>
> Key: HADOOP-8787
> URL: https://issues.apache.org/jira/browse/HADOOP-8787
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 1.0.3, 3.0.0, 2.0.1-alpha
>Reporter: Todd Lipcon
>Assignee: Ted Malaska
>Priority: Minor
>  Labels: newbie
> Attachments: HADOOP-8787-0.patch, HADOOP-8787-1.patch
>
>
> Currently, if the spnego keytab is missing from the configuration, the user 
> gets an error like: "javax.servlet.ServletException: Principal not defined in 
> configuration". This should be augmented to actually show the configuration 
> variable which is missing. Otherwise it is hard for a user to know what to 
> fix.



[jira] [Updated] (HADOOP-8787) KerberosAuthenticationHandler should include missing property names in configuration

2012-09-11 Thread Ted Malaska (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8787?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Malaska updated HADOOP-8787:


Attachment: HADOOP-8787-1.patch

Added a single quote that I missed. 

> KerberosAuthenticationHandler should include missing property names in 
> configuration
> 
>
> Key: HADOOP-8787
> URL: https://issues.apache.org/jira/browse/HADOOP-8787
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 1.0.3, 3.0.0, 2.0.1-alpha
>Reporter: Todd Lipcon
>Priority: Minor
>  Labels: newbie
> Attachments: HADOOP-8787-0.patch, HADOOP-8787-1.patch
>
>
> Currently, if the spnego keytab is missing from the configuration, the user 
> gets an error like: "javax.servlet.ServletException: Principal not defined in 
> configuration". This should be augmented to actually show the configuration 
> variable which is missing. Otherwise it is hard for a user to know what to 
> fix.



[jira] [Commented] (HADOOP-8476) Remove duplicate VM arguments for hadoop deamon

2012-09-11 Thread Arpit Gupta (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8476?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13453400#comment-13453400
 ] 

Arpit Gupta commented on HADOOP-8476:
-

Vinay, could you regenerate the patch? It does not apply on trunk.

Also, some comments/questions based on your patch file:

You have added an option to the hadoop-config.sh script to skip Hadoop opts 
("--skip_hadoop_opts"), and you are passing that in all the various places 
hadoop-config.sh is called, thus skipping the setting of HADOOP_OPTS.

I don't think we should make the assumption that people will have the 
appropriate values set in the env by the hadoop-env.sh config file. People 
change this config based on what their needs are, and we cannot force them to 
have all of these defined. hadoop-config.sh made sure certain defaults are set.

> Remove duplicate VM arguments for hadoop deamon
> ---
>
> Key: HADOOP-8476
> URL: https://issues.apache.org/jira/browse/HADOOP-8476
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: conf
>Affects Versions: 2.0.0-alpha, 3.0.0
>Reporter: Vinay
>Assignee: Vinay
>Priority: Minor
> Attachments: HADOOP-8476.patch, HADOOP-8476.patch
>
>
> remove duplicate VM arguments passed to hadoop daemon
> Following are the VM arguments currently duplicated.
> {noformat}-Dproc_namenode
> -Xmx1000m
> -Djava.net.preferIPv4Stack=true
> -Xmx128m
> -Xmx128m
> -Dhadoop.log.dir=/home/nn2/logs
> -Dhadoop.log.file=hadoop-root-namenode-HOST-xx-xx-xx-105.log
> -Dhadoop.home.dir=/home/nn2/
> -Dhadoop.id.str=root
> -Dhadoop.root.logger=INFO,RFA
> -Dhadoop.policy.file=hadoop-policy.xml
> -Djava.net.preferIPv4Stack=true
> -Dhadoop.security.logger=INFO,RFAS
> -Dhdfs.audit.logger=INFO,NullAppender
> -Dhadoop.security.logger=INFO,RFAS
> -Dhdfs.audit.logger=INFO,NullAppender
> -Dhadoop.security.logger=INFO,RFAS
> -Dhdfs.audit.logger=INFO,NullAppender
> -Dhadoop.security.logger=INFO,RFAS{noformat}
>  
> In the above VM arguments, -Xmx1000m will be overridden by -Xmx128m.
> BTW, the other duplicate arguments won't harm.



[jira] [Updated] (HADOOP-8787) KerberosAuthenticationHandler should include missing property names in configuration

2012-09-11 Thread Ted Malaska (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8787?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Malaska updated HADOOP-8787:


Attachment: HADOOP-8787-0.patch

Changed the exception message to include the name of the property that is 
missing.

> KerberosAuthenticationHandler should include missing property names in 
> configuration
> 
>
> Key: HADOOP-8787
> URL: https://issues.apache.org/jira/browse/HADOOP-8787
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 1.0.3, 3.0.0, 2.0.1-alpha
>Reporter: Todd Lipcon
>Priority: Minor
>  Labels: newbie
> Attachments: HADOOP-8787-0.patch
>
>
> Currently, if the spnego keytab is missing from the configuration, the user 
> gets an error like: "javax.servlet.ServletException: Principal not defined in 
> configuration". This should be augmented to actually show the configuration 
> variable which is missing. Otherwise it is hard for a user to know what to 
> fix.



[jira] [Commented] (HADOOP-8597) FsShell's Text command should be able to read avro data files

2012-09-11 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8597?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13453399#comment-13453399
 ] 

Hudson commented on HADOOP-8597:


Integrated in Hadoop-Common-trunk-Commit #2719 (See 
[https://builds.apache.org/job/Hadoop-Common-trunk-Commit/2719/])
HADOOP-8597. Permit FsShell's text command to read Avro files.  Contributed 
by Ivan Vladimirov. (Revision 1383607)

 Result = SUCCESS
cutting : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1383607
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/shell/Display.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/shell/TestTextCommand.java


> FsShell's Text command should be able to read avro data files
> -
>
> Key: HADOOP-8597
> URL: https://issues.apache.org/jira/browse/HADOOP-8597
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs
>Affects Versions: 2.0.0-alpha
>Reporter: Harsh J
>Assignee: Ivan Vladimirov Ivanov
>  Labels: newbie
> Fix For: 2.0.3-alpha
>
> Attachments: HADOOP-8597-2.patch, HADOOP-8597.patch, 
> HADOOP-8597.patch, HADOOP-8597.patch
>
>
> Similar to SequenceFiles are Apache Avro's DataFiles. Since these are getting 
> popular as a data format, perhaps it would be useful if {{fs -text}} were to 
> add some support for reading it, like it reads SequenceFiles. Should be easy 
> since Avro is already a dependency and provides the required classes.
> Of discussion is the output we ought to emit. Avro DataFiles aren't simple as 
> text, nor have they the singular Key-Value pair structure of SequenceFiles. 
> They usually contain a set of fields defined as a record, and the usual text 
> emit, as available from avro-tools via 
> http://avro.apache.org/docs/current/api/java/org/apache/avro/tool/DataFileReadTool.html,
>  is in proper JSON format.
> I think we should use the JSON format as the output, rather than a delimited 
> form, for there are many complex structures in Avro and JSON is the easiest 
> and least-work-to-do way to display it (Avro supports json dumping by itself).



[jira] [Updated] (HADOOP-8597) FsShell's Text command should be able to read avro data files

2012-09-11 Thread Doug Cutting (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8597?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Doug Cutting updated HADOOP-8597:
-

   Resolution: Fixed
Fix Version/s: 2.0.3-alpha
   Status: Resolved  (was: Patch Available)

I just committed this.  Thanks, Ivan!

> FsShell's Text command should be able to read avro data files
> -
>
> Key: HADOOP-8597
> URL: https://issues.apache.org/jira/browse/HADOOP-8597
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs
>Affects Versions: 2.0.0-alpha
>Reporter: Harsh J
>Assignee: Ivan Vladimirov Ivanov
>  Labels: newbie
> Fix For: 2.0.3-alpha
>
> Attachments: HADOOP-8597-2.patch, HADOOP-8597.patch, 
> HADOOP-8597.patch, HADOOP-8597.patch
>
>
> Similar to SequenceFiles are Apache Avro's DataFiles. Since these are getting 
> popular as a data format, perhaps it would be useful if {{fs -text}} were to 
> add some support for reading it, like it reads SequenceFiles. Should be easy 
> since Avro is already a dependency and provides the required classes.
> Of discussion is the output we ought to emit. Avro DataFiles aren't simple as 
> text, nor have they the singular Key-Value pair structure of SequenceFiles. 
> They usually contain a set of fields defined as a record, and the usual text 
> emit, as available from avro-tools via 
> http://avro.apache.org/docs/current/api/java/org/apache/avro/tool/DataFileReadTool.html,
>  is in proper JSON format.
> I think we should use the JSON format as the output, rather than a delimited 
> form, for there are many complex structures in Avro and JSON is the easiest 
> and least-work-to-do way to display it (Avro supports json dumping by itself).



[jira] [Commented] (HADOOP-8597) FsShell's Text command should be able to read avro data files

2012-09-11 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8597?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13453395#comment-13453395
 ] 

Hudson commented on HADOOP-8597:


Integrated in Hadoop-Hdfs-trunk-Commit #2782 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk-Commit/2782/])
HADOOP-8597. Permit FsShell's text command to read Avro files.  Contributed 
by Ivan Vladimirov. (Revision 1383607)

 Result = SUCCESS
cutting : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1383607
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/shell/Display.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/shell/TestTextCommand.java


> FsShell's Text command should be able to read avro data files
> -
>
> Key: HADOOP-8597
> URL: https://issues.apache.org/jira/browse/HADOOP-8597
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs
>Affects Versions: 2.0.0-alpha
>Reporter: Harsh J
>Assignee: Ivan Vladimirov Ivanov
>  Labels: newbie
> Attachments: HADOOP-8597-2.patch, HADOOP-8597.patch, 
> HADOOP-8597.patch, HADOOP-8597.patch
>
>
> Similar to SequenceFiles are Apache Avro's DataFiles. Since these are getting 
> popular as a data format, perhaps it would be useful if {{fs -text}} were to 
> add some support for reading it, like it reads SequenceFiles. Should be easy 
> since Avro is already a dependency and provides the required classes.
> Of discussion is the output we ought to emit. Avro DataFiles aren't simple as 
> text, nor have they the singular Key-Value pair structure of SequenceFiles. 
> They usually contain a set of fields defined as a record, and the usual text 
> emit, as available from avro-tools via 
> http://avro.apache.org/docs/current/api/java/org/apache/avro/tool/DataFileReadTool.html,
>  is in proper JSON format.
> I think we should use the JSON format as the output, rather than a delimited 
> form, for there are many complex structures in Avro and JSON is the easiest 
> and least-work-to-do way to display it (Avro supports json dumping by itself).



[jira] [Commented] (HADOOP-8781) hadoop-config.sh should add JAVA_LIBRARY_PATH to LD_LIBRARY_PATH

2012-09-11 Thread Todd Lipcon (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8781?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13453391#comment-13453391
 ] 

Todd Lipcon commented on HADOOP-8781:
-

bq. Be aware that this change will likely have side-effects for non-Java code.

Maybe you can elaborate what the negative side effects would be?

> hadoop-config.sh should add JAVA_LIBRARY_PATH to LD_LIBRARY_PATH
> 
>
> Key: HADOOP-8781
> URL: https://issues.apache.org/jira/browse/HADOOP-8781
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: scripts
>Affects Versions: 2.0.2-alpha
>Reporter: Alejandro Abdelnur
>Assignee: Alejandro Abdelnur
> Fix For: 1.2.0, 2.0.2-alpha
>
> Attachments: HADOOP-8781-branch1.patch, HADOOP-8781-branch1.patch, 
> HADOOP-8781.patch, HADOOP-8781.patch
>
>
> Snappy SO fails to load properly if LD_LIBRARY_PATH does not include the path 
> where snappy SO is. This is observed in setups that don't have an independent 
> snappy installation (not installed by Hadoop)



[jira] [Commented] (HADOOP-8781) hadoop-config.sh should add JAVA_LIBRARY_PATH to LD_LIBRARY_PATH

2012-09-11 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8781?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13453386#comment-13453386
 ] 

Allen Wittenauer commented on HADOOP-8781:
--

(and yes, I'd -1 this if anyone outside your hallway was given a chance to 
review stuff before it got committed.)

> hadoop-config.sh should add JAVA_LIBRARY_PATH to LD_LIBRARY_PATH
> 
>
> Key: HADOOP-8781
> URL: https://issues.apache.org/jira/browse/HADOOP-8781
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: scripts
>Affects Versions: 2.0.2-alpha
>Reporter: Alejandro Abdelnur
>Assignee: Alejandro Abdelnur
> Fix For: 1.2.0, 2.0.2-alpha
>
> Attachments: HADOOP-8781-branch1.patch, HADOOP-8781-branch1.patch, 
> HADOOP-8781.patch, HADOOP-8781.patch
>
>
> Snappy SO fails to load properly if LD_LIBRARY_PATH does not include the path 
> where snappy SO is. This is observed in setups that don't have an independent 
> snappy installation (not installed by Hadoop)



[jira] [Commented] (HADOOP-8781) hadoop-config.sh should add JAVA_LIBRARY_PATH to LD_LIBRARY_PATH

2012-09-11 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8781?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13453384#comment-13453384
 ] 

Allen Wittenauer commented on HADOOP-8781:
--

Be aware that this change will likely have side-effects for non-Java code.

> hadoop-config.sh should add JAVA_LIBRARY_PATH to LD_LIBRARY_PATH
> 
>
> Key: HADOOP-8781
> URL: https://issues.apache.org/jira/browse/HADOOP-8781
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: scripts
>Affects Versions: 2.0.2-alpha
>Reporter: Alejandro Abdelnur
>Assignee: Alejandro Abdelnur
> Fix For: 1.2.0, 2.0.2-alpha
>
> Attachments: HADOOP-8781-branch1.patch, HADOOP-8781-branch1.patch, 
> HADOOP-8781.patch, HADOOP-8781.patch
>
>
> Snappy SO fails to load properly if LD_LIBRARY_PATH does not include the path 
> where snappy SO is. This is observed in setups that don't have an independent 
> snappy installation (not installed by Hadoop)



[jira] [Commented] (HADOOP-8767) secondary namenode on slave machines

2012-09-11 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8767?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13453345#comment-13453345
 ] 

Hudson commented on HADOOP-8767:


Integrated in Hadoop-Mapreduce-trunk-Commit #2742 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Commit/2742/])
HADOOP-8767. Secondary namenode is started on slave nodes instead of master 
nodes. Contributed by Giovanni Delussu. (Revision 1383560)

 Result = FAILURE
suresh : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1383560
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/bin/hadoop-config.sh
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/bin/slaves.sh


> secondary namenode on slave machines
> 
>
> Key: HADOOP-8767
> URL: https://issues.apache.org/jira/browse/HADOOP-8767
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: bin
>Affects Versions: 1.0.3
>Reporter: giovanni delussu
>Assignee: giovanni delussu
>Priority: Minor
> Fix For: 1.2.0, 3.0.0
>
> Attachments: patch_hadoop-config.sh_hadoop-1.0.3_fromtar.patch, 
> patch_hadoop-config.sh_slaves.sh_branch1_no_prefix.patch, 
> patch_hadoop-config.sh_slaves.sh_branch1.patch, 
> patch_hadoop-config.sh_slaves.sh_trunk_no_prefix.patch, 
> patch_hadoop-config.sh_slaves.sh_trunk.patch, 
> patch_slaves.sh_hadoop-1.0.3_fromtar.patch
>
>
> When the default value of HADOOP_SLAVES is changed in hadoop-env.sh, starting 
> HDFS (with start-dfs.sh) creates secondary namenodes on all the machines 
> listed in conf/slaves instead of conf/masters.
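A hedged sketch of the intended host-list selection (hypothetical file names and hosts, not the committed patch): the secondary namenode hosts should come from conf/masters, even when the user overrides HADOOP_SLAVES in hadoop-env.sh:

```shell
# Illustrate the expected behavior: secondary namenodes start on the
# hosts listed in conf/masters, regardless of a HADOOP_SLAVES override.
# Paths and host names are illustrative only.
conf=$(mktemp -d)
printf 'master1\n' > "$conf/masters"
printf 'slave1\nslave2\n' > "$conf/slaves"
HADOOP_SLAVES="$conf/slaves"      # user override from hadoop-env.sh (must not leak into SNN startup)
SNN_HOSTS=$(cat "$conf/masters")  # secondary namenode list: masters, not $HADOOP_SLAVES
echo "$SNN_HOSTS"
```

The bug reported here is, in effect, the secondary-namenode startup path falling back to $HADOOP_SLAVES instead of reading the masters file.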

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Resolved] (HADOOP-8767) secondary namenode on slave machines

2012-09-11 Thread Suresh Srinivas (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8767?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suresh Srinivas resolved HADOOP-8767.
-

   Resolution: Fixed
Fix Version/s: (was: 1.0.3)
   (was: site)
   3.0.0
   1.2.0
 Hadoop Flags: Reviewed

+1 for the patch. Thank you Giovanni for reporting and fixing the issue. Thank 
you Arpit for the review.

> secondary namenode on slave machines
> 
>
> Key: HADOOP-8767
> URL: https://issues.apache.org/jira/browse/HADOOP-8767
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: bin
>Affects Versions: 1.0.3
>Reporter: giovanni delussu
>Assignee: giovanni delussu
>Priority: Minor
> Fix For: 1.2.0, 3.0.0
>
> Attachments: patch_hadoop-config.sh_hadoop-1.0.3_fromtar.patch, 
> patch_hadoop-config.sh_slaves.sh_branch1_no_prefix.patch, 
> patch_hadoop-config.sh_slaves.sh_branch1.patch, 
> patch_hadoop-config.sh_slaves.sh_trunk_no_prefix.patch, 
> patch_hadoop-config.sh_slaves.sh_trunk.patch, 
> patch_slaves.sh_hadoop-1.0.3_fromtar.patch
>
>
> When the default value of HADOOP_SLAVES is changed in hadoop-env.sh, starting 
> HDFS (with start-dfs.sh) creates secondary namenodes on all the machines 
> listed in conf/slaves instead of conf/masters.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8767) secondary namenode on slave machines

2012-09-11 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8767?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13453316#comment-13453316
 ] 

Hudson commented on HADOOP-8767:


Integrated in Hadoop-Hdfs-trunk-Commit #2781 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk-Commit/2781/])
HADOOP-8767. Secondary namenode is started on slave nodes instead of master 
nodes. Contributed by Giovanni Delussu. (Revision 1383560)

 Result = SUCCESS
suresh : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1383560
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/bin/hadoop-config.sh
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/bin/slaves.sh


> secondary namenode on slave machines
> 
>
> Key: HADOOP-8767
> URL: https://issues.apache.org/jira/browse/HADOOP-8767
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: bin
>Affects Versions: 1.0.3
>Reporter: giovanni delussu
>Assignee: giovanni delussu
>Priority: Minor
> Fix For: site, 1.0.3
>
> Attachments: patch_hadoop-config.sh_hadoop-1.0.3_fromtar.patch, 
> patch_hadoop-config.sh_slaves.sh_branch1_no_prefix.patch, 
> patch_hadoop-config.sh_slaves.sh_branch1.patch, 
> patch_hadoop-config.sh_slaves.sh_trunk_no_prefix.patch, 
> patch_hadoop-config.sh_slaves.sh_trunk.patch, 
> patch_slaves.sh_hadoop-1.0.3_fromtar.patch
>
>
> When the default value of HADOOP_SLAVES is changed in hadoop-env.sh, starting 
> HDFS (with start-dfs.sh) creates secondary namenodes on all the machines 
> listed in conf/slaves instead of conf/masters.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8767) secondary namenode on slave machines

2012-09-11 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8767?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13453315#comment-13453315
 ] 

Hudson commented on HADOOP-8767:


Integrated in Hadoop-Common-trunk-Commit #2718 (See 
[https://builds.apache.org/job/Hadoop-Common-trunk-Commit/2718/])
HADOOP-8767. Secondary namenode is started on slave nodes instead of master 
nodes. Contributed by Giovanni Delussu. (Revision 1383560)

 Result = SUCCESS
suresh : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1383560
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/bin/hadoop-config.sh
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/bin/slaves.sh


> secondary namenode on slave machines
> 
>
> Key: HADOOP-8767
> URL: https://issues.apache.org/jira/browse/HADOOP-8767
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: bin
>Affects Versions: 1.0.3
>Reporter: giovanni delussu
>Assignee: giovanni delussu
>Priority: Minor
> Fix For: site, 1.0.3
>
> Attachments: patch_hadoop-config.sh_hadoop-1.0.3_fromtar.patch, 
> patch_hadoop-config.sh_slaves.sh_branch1_no_prefix.patch, 
> patch_hadoop-config.sh_slaves.sh_branch1.patch, 
> patch_hadoop-config.sh_slaves.sh_trunk_no_prefix.patch, 
> patch_hadoop-config.sh_slaves.sh_trunk.patch, 
> patch_slaves.sh_hadoop-1.0.3_fromtar.patch
>
>
> When the default value of HADOOP_SLAVES is changed in hadoop-env.sh, starting 
> HDFS (with start-dfs.sh) creates secondary namenodes on all the machines 
> listed in conf/slaves instead of conf/masters.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-8767) secondary namenode on slave machines

2012-09-11 Thread Suresh Srinivas (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8767?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suresh Srinivas updated HADOOP-8767:


Status: In Progress  (was: Patch Available)

> secondary namenode on slave machines
> 
>
> Key: HADOOP-8767
> URL: https://issues.apache.org/jira/browse/HADOOP-8767
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: bin
>Affects Versions: 1.0.3
>Reporter: giovanni delussu
>Assignee: giovanni delussu
>Priority: Minor
> Fix For: site, 1.0.3
>
> Attachments: patch_hadoop-config.sh_hadoop-1.0.3_fromtar.patch, 
> patch_hadoop-config.sh_slaves.sh_branch1_no_prefix.patch, 
> patch_hadoop-config.sh_slaves.sh_branch1.patch, 
> patch_hadoop-config.sh_slaves.sh_trunk_no_prefix.patch, 
> patch_hadoop-config.sh_slaves.sh_trunk.patch, 
> patch_slaves.sh_hadoop-1.0.3_fromtar.patch
>
>
> When the default value of HADOOP_SLAVES is changed in hadoop-env.sh, starting 
> HDFS (with start-dfs.sh) creates secondary namenodes on all the machines 
> listed in conf/slaves instead of conf/masters.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Assigned] (HADOOP-8767) secondary namenode on slave machines

2012-09-11 Thread Suresh Srinivas (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8767?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suresh Srinivas reassigned HADOOP-8767:
---

Assignee: giovanni delussu

Giovanni, I added you as a Hadoop Common contributor and assigned the jira to 
you.

> secondary namenode on slave machines
> 
>
> Key: HADOOP-8767
> URL: https://issues.apache.org/jira/browse/HADOOP-8767
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: bin
>Affects Versions: 1.0.3
>Reporter: giovanni delussu
>Assignee: giovanni delussu
>Priority: Minor
> Fix For: site, 1.0.3
>
> Attachments: patch_hadoop-config.sh_hadoop-1.0.3_fromtar.patch, 
> patch_hadoop-config.sh_slaves.sh_branch1_no_prefix.patch, 
> patch_hadoop-config.sh_slaves.sh_branch1.patch, 
> patch_hadoop-config.sh_slaves.sh_trunk_no_prefix.patch, 
> patch_hadoop-config.sh_slaves.sh_trunk.patch, 
> patch_slaves.sh_hadoop-1.0.3_fromtar.patch
>
>
> When the default value of HADOOP_SLAVES is changed in hadoop-env.sh, starting 
> HDFS (with start-dfs.sh) creates secondary namenodes on all the machines 
> listed in conf/slaves instead of conf/masters.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8789) Tests setLevel(Level.OFF) should be Level.ERROR

2012-09-11 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8789?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13453292#comment-13453292
 ] 

Hudson commented on HADOOP-8789:


Integrated in Hadoop-Mapreduce-trunk-Commit #2741 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Commit/2741/])
HADOOP-8789. Tests setLevel(Level.OFF) should be Level.ERROR. Contributed 
by Andy Isaacson (Revision 1383494)

 Result = FAILURE
eli : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1383494
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-tools/hadoop-archives/src/test/java/org/apache/hadoop/tools/TestHadoopArchives.java
* 
/hadoop/common/trunk/hadoop-tools/hadoop-extras/src/test/java/org/apache/hadoop/tools/TestCopyFiles.java
* 
/hadoop/common/trunk/hadoop-tools/hadoop-extras/src/test/java/org/apache/hadoop/tools/TestDistCh.java


> Tests setLevel(Level.OFF) should be Level.ERROR
> ---
>
> Key: HADOOP-8789
> URL: https://issues.apache.org/jira/browse/HADOOP-8789
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: test
>Affects Versions: 2.0.1-alpha
>Reporter: Andy Isaacson
>Assignee: Andy Isaacson
>Priority: Minor
> Fix For: 2.0.3-alpha
>
> Attachments: hdfs-3911.txt
>
>
> Multiple tests have code like
> {code}
> ((Log4JLogger)LogFactory.getLog(FSNamesystem.class)).getLogger().setLevel(Level.OFF);
> {code}
> Completely disabling logs from given classes with {{Level.OFF}} is a bad idea 
> and makes debugging other test failures, especially intermittent test 
> failures like HDFS-3664, difficult.  Instead the code should use 
> {{Level.ERROR}} to reduce verbosity.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8767) secondary namenode on slave machines

2012-09-11 Thread Arpit Gupta (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8767?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13453257#comment-13453257
 ] 

Arpit Gupta commented on HADOOP-8767:
-

+1 the latest changes look good to go.

> secondary namenode on slave machines
> 
>
> Key: HADOOP-8767
> URL: https://issues.apache.org/jira/browse/HADOOP-8767
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: bin
>Affects Versions: 1.0.3
>Reporter: giovanni delussu
>Priority: Minor
> Fix For: site, 1.0.3
>
> Attachments: patch_hadoop-config.sh_hadoop-1.0.3_fromtar.patch, 
> patch_hadoop-config.sh_slaves.sh_branch1_no_prefix.patch, 
> patch_hadoop-config.sh_slaves.sh_branch1.patch, 
> patch_hadoop-config.sh_slaves.sh_trunk_no_prefix.patch, 
> patch_hadoop-config.sh_slaves.sh_trunk.patch, 
> patch_slaves.sh_hadoop-1.0.3_fromtar.patch
>
>
> When the default value of HADOOP_SLAVES is changed in hadoop-env.sh, starting 
> HDFS (with start-dfs.sh) creates secondary namenodes on all the machines 
> listed in conf/slaves instead of conf/masters.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8789) Tests setLevel(Level.OFF) should be Level.ERROR

2012-09-11 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8789?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13453240#comment-13453240
 ] 

Hudson commented on HADOOP-8789:


Integrated in Hadoop-Common-trunk-Commit #2717 (See 
[https://builds.apache.org/job/Hadoop-Common-trunk-Commit/2717/])
HADOOP-8789. Tests setLevel(Level.OFF) should be Level.ERROR. Contributed 
by Andy Isaacson (Revision 1383494)

 Result = SUCCESS
eli : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1383494
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-tools/hadoop-archives/src/test/java/org/apache/hadoop/tools/TestHadoopArchives.java
* 
/hadoop/common/trunk/hadoop-tools/hadoop-extras/src/test/java/org/apache/hadoop/tools/TestCopyFiles.java
* 
/hadoop/common/trunk/hadoop-tools/hadoop-extras/src/test/java/org/apache/hadoop/tools/TestDistCh.java


> Tests setLevel(Level.OFF) should be Level.ERROR
> ---
>
> Key: HADOOP-8789
> URL: https://issues.apache.org/jira/browse/HADOOP-8789
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: test
>Affects Versions: 2.0.1-alpha
>Reporter: Andy Isaacson
>Assignee: Andy Isaacson
>Priority: Minor
> Fix For: 2.0.3-alpha
>
> Attachments: hdfs-3911.txt
>
>
> Multiple tests have code like
> {code}
> ((Log4JLogger)LogFactory.getLog(FSNamesystem.class)).getLogger().setLevel(Level.OFF);
> {code}
> Completely disabling logs from given classes with {{Level.OFF}} is a bad idea 
> and makes debugging other test failures, especially intermittent test 
> failures like HDFS-3664, difficult.  Instead the code should use 
> {{Level.ERROR}} to reduce verbosity.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8789) Tests setLevel(Level.OFF) should be Level.ERROR

2012-09-11 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8789?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13453239#comment-13453239
 ] 

Hudson commented on HADOOP-8789:


Integrated in Hadoop-Hdfs-trunk-Commit #2780 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk-Commit/2780/])
HADOOP-8789. Tests setLevel(Level.OFF) should be Level.ERROR. Contributed 
by Andy Isaacson (Revision 1383494)

 Result = SUCCESS
eli : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1383494
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-tools/hadoop-archives/src/test/java/org/apache/hadoop/tools/TestHadoopArchives.java
* 
/hadoop/common/trunk/hadoop-tools/hadoop-extras/src/test/java/org/apache/hadoop/tools/TestCopyFiles.java
* 
/hadoop/common/trunk/hadoop-tools/hadoop-extras/src/test/java/org/apache/hadoop/tools/TestDistCh.java


> Tests setLevel(Level.OFF) should be Level.ERROR
> ---
>
> Key: HADOOP-8789
> URL: https://issues.apache.org/jira/browse/HADOOP-8789
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: test
>Affects Versions: 2.0.1-alpha
>Reporter: Andy Isaacson
>Assignee: Andy Isaacson
>Priority: Minor
> Fix For: 2.0.3-alpha
>
> Attachments: hdfs-3911.txt
>
>
> Multiple tests have code like
> {code}
> ((Log4JLogger)LogFactory.getLog(FSNamesystem.class)).getLogger().setLevel(Level.OFF);
> {code}
> Completely disabling logs from given classes with {{Level.OFF}} is a bad idea 
> and makes debugging other test failures, especially intermittent test 
> failures like HDFS-3664, difficult.  Instead the code should use 
> {{Level.ERROR}} to reduce verbosity.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-8789) Tests setLevel(Level.OFF) should be Level.ERROR

2012-09-11 Thread Eli Collins (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8789?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eli Collins updated HADOOP-8789:


  Resolution: Fixed
   Fix Version/s: 2.0.3-alpha
Target Version/s:   (was: 2.0.2-alpha)
  Status: Resolved  (was: Patch Available)

I've committed this and merged to branch-2. Thanks Andy.

> Tests setLevel(Level.OFF) should be Level.ERROR
> ---
>
> Key: HADOOP-8789
> URL: https://issues.apache.org/jira/browse/HADOOP-8789
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: test
>Affects Versions: 2.0.1-alpha
>Reporter: Andy Isaacson
>Assignee: Andy Isaacson
>Priority: Minor
> Fix For: 2.0.3-alpha
>
> Attachments: hdfs-3911.txt
>
>
> Multiple tests have code like
> {code}
> ((Log4JLogger)LogFactory.getLog(FSNamesystem.class)).getLogger().setLevel(Level.OFF);
> {code}
> Completely disabling logs from given classes with {{Level.OFF}} is a bad idea 
> and makes debugging other test failures, especially intermittent test 
> failures like HDFS-3664, difficult.  Instead the code should use 
> {{Level.ERROR}} to reduce verbosity.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Moved] (HADOOP-8789) Tests setLevel(Level.OFF) should be Level.ERROR

2012-09-11 Thread Eli Collins (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8789?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eli Collins moved HDFS-3911 to HADOOP-8789:
---

  Component/s: (was: test)
   test
 Target Version/s: 2.0.2-alpha  (was: 2.0.3-alpha)
Affects Version/s: (was: 2.0.1-alpha)
   2.0.1-alpha
   Issue Type: Improvement  (was: Bug)
  Key: HADOOP-8789  (was: HDFS-3911)
  Project: Hadoop Common  (was: Hadoop HDFS)

> Tests setLevel(Level.OFF) should be Level.ERROR
> ---
>
> Key: HADOOP-8789
> URL: https://issues.apache.org/jira/browse/HADOOP-8789
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: test
>Affects Versions: 2.0.1-alpha
>Reporter: Andy Isaacson
>Assignee: Andy Isaacson
>Priority: Minor
> Attachments: hdfs-3911.txt
>
>
> Multiple tests have code like
> {code}
> ((Log4JLogger)LogFactory.getLog(FSNamesystem.class)).getLogger().setLevel(Level.OFF);
> {code}
> Completely disabling logs from given classes with {{Level.OFF}} is a bad idea 
> and makes debugging other test failures, especially intermittent test 
> failures like HDFS-3664, difficult.  Instead the code should use 
> {{Level.ERROR}} to reduce verbosity.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8597) FsShell's Text command should be able to read avro data files

2012-09-11 Thread Doug Cutting (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8597?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13453179#comment-13453179
 ] 

Doug Cutting commented on HADOOP-8597:
--

Ivan, patches are normally against trunk.  After they're committed to trunk 
they may be backported to a branch.

http://wiki.apache.org/hadoop/HowToContribute

This patch should probably be committed to trunk and to branch-2 with 
fix-version 2.0.3-alpha.


> FsShell's Text command should be able to read avro data files
> -
>
> Key: HADOOP-8597
> URL: https://issues.apache.org/jira/browse/HADOOP-8597
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs
>Affects Versions: 2.0.0-alpha
>Reporter: Harsh J
>Assignee: Ivan Vladimirov Ivanov
>  Labels: newbie
> Attachments: HADOOP-8597-2.patch, HADOOP-8597.patch, 
> HADOOP-8597.patch, HADOOP-8597.patch
>
>
> Similar to SequenceFiles are Apache Avro's DataFiles. Since these are getting 
> popular as a data format, perhaps it would be useful if {{fs -text}} added 
> some support for reading them, as it does for SequenceFiles. This should be 
> easy since Avro is already a dependency and provides the required classes.
> Up for discussion is the output we ought to emit. Avro DataFiles aren't as 
> simple as text, nor do they have the singular key-value pair structure of 
> SequenceFiles. They usually contain a set of fields defined as a record, and 
> the usual text output, as available from avro-tools via 
> http://avro.apache.org/docs/current/api/java/org/apache/avro/tool/DataFileReadTool.html,
>  is in proper JSON format.
> I think we should use JSON as the output format, rather than a delimited 
> form, since Avro has many complex structures and JSON is the easiest, 
> least-effort way to display them (Avro supports JSON dumping by itself).

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8597) FsShell's Text command should be able to read avro data files

2012-09-11 Thread Ivan Vladimirov Ivanov (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8597?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13453169#comment-13453169
 ] 

Ivan Vladimirov Ivanov commented on HADOOP-8597:


Sorry for the inconvenience that applying my patch caused. Since I am new to 
the project, I was unsure against which version (or branch) to create the 
patch, so I chose "release-2.0.0-alpha"; it seemed to most closely match the 
"Affects Version/s" field. In retrospect the choice was probably a mistake. To 
avoid such problems in the future, I would like to ask: should patches be 
created against the first branch whose version number is greater than or equal 
to that in the "Affects Version/s" field ("branch-2.0.1-alpha" in the current 
case), or, if the version is new enough, directly against trunk?

Thank you for taking the time to review my patch. I hope that it will be useful 
and would be very happy if it gets committed.


> FsShell's Text command should be able to read avro data files
> -
>
> Key: HADOOP-8597
> URL: https://issues.apache.org/jira/browse/HADOOP-8597
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs
>Affects Versions: 2.0.0-alpha
>Reporter: Harsh J
>Assignee: Ivan Vladimirov Ivanov
>  Labels: newbie
> Attachments: HADOOP-8597-2.patch, HADOOP-8597.patch, 
> HADOOP-8597.patch, HADOOP-8597.patch
>
>
> Similar to SequenceFiles are Apache Avro's DataFiles. Since these are getting 
> popular as a data format, perhaps it would be useful if {{fs -text}} added 
> some support for reading them, as it does for SequenceFiles. This should be 
> easy since Avro is already a dependency and provides the required classes.
> Up for discussion is the output we ought to emit. Avro DataFiles aren't as 
> simple as text, nor do they have the singular key-value pair structure of 
> SequenceFiles. They usually contain a set of fields defined as a record, and 
> the usual text output, as available from avro-tools via 
> http://avro.apache.org/docs/current/api/java/org/apache/avro/tool/DataFileReadTool.html,
>  is in proper JSON format.
> I think we should use JSON as the output format, rather than a delimited 
> form, since Avro has many complex structures and JSON is the easiest, 
> least-effort way to display them (Avro supports JSON dumping by itself).

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8783) Improve RPC.Server's digest auth

2012-09-11 Thread Daryn Sharp (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8783?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13453116#comment-13453116
 ] 

Daryn Sharp commented on HADOOP-8783:
-

Yes, after the client changes, all combinations of secure/insecure client & 
server and the resulting auth can be easily tested.

> Improve RPC.Server's digest auth
> 
>
> Key: HADOOP-8783
> URL: https://issues.apache.org/jira/browse/HADOOP-8783
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: ipc, security
>Affects Versions: 2.0.0-alpha, 3.0.0
>Reporter: Daryn Sharp
>Assignee: Daryn Sharp
> Attachments: HADOOP-8783.patch, HADOOP-8783.patch
>
>
> RPC.Server should always allow digest auth (tokens) if a secret manager is 
> present.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8783) Improve RPC.Server's digest auth

2012-09-11 Thread Kihwal Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8783?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13453093#comment-13453093
 ] 

Kihwal Lee commented on HADOOP-8783:


+1 (non-binding) Looks good to me. I hope better testing will be added with the 
client-side changes.

> Improve RPC.Server's digest auth
> 
>
> Key: HADOOP-8783
> URL: https://issues.apache.org/jira/browse/HADOOP-8783
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: ipc, security
>Affects Versions: 2.0.0-alpha, 3.0.0
>Reporter: Daryn Sharp
>Assignee: Daryn Sharp
> Attachments: HADOOP-8783.patch, HADOOP-8783.patch
>
>
> RPC.Server should always allow digest auth (tokens) if a secret manager is 
> present.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8788) hadoop fs -ls can print file paths according to the native ls command

2012-09-11 Thread Daryn Sharp (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8788?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13453085#comment-13453085
 ] 

Daryn Sharp commented on HADOOP-8788:
-

Yes, the ls command currently works more like find.  It'll be an incompatible 
change, which is why I didn't fix it during the shell overhaul, but I think it 
would be a good change as long as a find command is also added.

> hadoop fs -ls can print file paths according to the native ls command
> -
>
> Key: HADOOP-8788
> URL: https://issues.apache.org/jira/browse/HADOOP-8788
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs
>Reporter: Hemanth Yamijala
>Priority: Minor
>
> hadoop fs -ls dirname lists the paths in the following manner:
> dirname/file1
> dirname/file2
> Basically, dirname is repeated. This is slightly confusing because it gives 
> the impression that there's a dirname directory under the specified input.
> In contrast, the native ls command doesn't do this.
> When given a glob as input, the native ls command prints the output as follows:
> dirname1:
> file1
> dirname2:
> file1

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (HADOOP-8788) hadoop fs -ls can print file paths according to the native ls command

2012-09-11 Thread Hemanth Yamijala (JIRA)
Hemanth Yamijala created HADOOP-8788:


 Summary: hadoop fs -ls can print file paths according to the 
native ls command
 Key: HADOOP-8788
 URL: https://issues.apache.org/jira/browse/HADOOP-8788
 Project: Hadoop Common
  Issue Type: Improvement
  Components: fs
Reporter: Hemanth Yamijala
Priority: Minor


hadoop fs -ls dirname lists the paths in the following manner:

dirname/file1
dirname/file2

Basically, dirname is repeated. This is slightly confusing because it gives 
the impression that there's a dirname directory under the specified input.

In contrast, the native ls command doesn't do this.

When given a glob as input, the native ls command prints the output as follows:

dirname1:
file1

dirname2:
file1
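The native-ls behavior the report compares against can be reproduced locally (temporary directories, purely illustrative): given a glob, ls prints each directory once as a header followed by its entries, instead of repeating the directory prefix on every line.

```shell
# Reproduce the native ls output format described above.
# With multiple directory operands (from the glob), ls prints each
# directory as a "name:" header followed by its contents.
base=$(mktemp -d)
mkdir -p "$base/dirname1" "$base/dirname2"
touch "$base/dirname1/file1" "$base/dirname2/file1"
ls "$base"/dirname*
```

This is the per-directory "header + entries" layout the issue suggests for hadoop fs -ls.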




--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8786) HttpServer continues to start even if AuthenticationFilter fails to init

2012-09-11 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8786?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13453033#comment-13453033
 ] 

Hudson commented on HADOOP-8786:


Integrated in Hadoop-Mapreduce-trunk #1193 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1193/])
HADOOP-8786. HttpServer continues to start even if AuthenticationFilter 
fails to init. Contributed by Todd Lipcon. (Revision 1383254)

 Result = SUCCESS
todd : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1383254
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/http/HttpServer.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/http/TestServletFilter.java


> HttpServer continues to start even if AuthenticationFilter fails to init
> 
>
> Key: HADOOP-8786
> URL: https://issues.apache.org/jira/browse/HADOOP-8786
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 1.2.0, 3.0.0, 2.0.1-alpha
>Reporter: Todd Lipcon
>Assignee: Todd Lipcon
> Fix For: 3.0.0
>
> Attachments: hadoop-8786.txt
>
>
> As seen in HDFS-3904, if the AuthenticationFilter fails to initialize, the 
> web server will continue to start up. We need to check for context 
> initialization errors after starting the server.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8781) hadoop-config.sh should add JAVA_LIBRARY_PATH to LD_LIBRARY_PATH

2012-09-11 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8781?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13453032#comment-13453032
 ] 

Hudson commented on HADOOP-8781:


Integrated in Hadoop-Mapreduce-trunk #1193 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1193/])
HADOOP-8781. hadoop-config.sh should add JAVA_LIBRARY_PATH to 
LD_LIBRARY_PATH. (tucu) (Revision 1383142)

 Result = SUCCESS
tucu : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1383142
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/bin/hadoop-config.sh


> hadoop-config.sh should add JAVA_LIBRARY_PATH to LD_LIBRARY_PATH
> 
>
> Key: HADOOP-8781
> URL: https://issues.apache.org/jira/browse/HADOOP-8781
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: scripts
>Affects Versions: 2.0.2-alpha
>Reporter: Alejandro Abdelnur
>Assignee: Alejandro Abdelnur
> Fix For: 1.2.0, 2.0.2-alpha
>
> Attachments: HADOOP-8781-branch1.patch, HADOOP-8781-branch1.patch, 
> HADOOP-8781.patch, HADOOP-8781.patch
>
>
> The Snappy shared object fails to load properly if LD_LIBRARY_PATH does not 
> include the path where the Snappy library is located. This is observed in 
> setups that don't have an independent Snappy installation (i.e., one not 
> installed by Hadoop).
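The shape of the fix can be sketched as below. This is an illustrative snippet, not the committed hadoop-config.sh patch; the path `/usr/lib/hadoop/native` is an assumed example value for JAVA_LIBRARY_PATH.

```shell
# Sketch of the idea behind the fix (illustrative, not the actual patch):
# ensure the directory holding native libraries such as libsnappy is on
# LD_LIBRARY_PATH so the JVM's native loader can find them.
JAVA_LIBRARY_PATH="${JAVA_LIBRARY_PATH:-/usr/lib/hadoop/native}"  # assumed path

if [ -n "$JAVA_LIBRARY_PATH" ]; then
  if [ -n "$LD_LIBRARY_PATH" ]; then
    # Preserve any existing entries and append the native-library directory.
    export LD_LIBRARY_PATH="$LD_LIBRARY_PATH:$JAVA_LIBRARY_PATH"
  else
    export LD_LIBRARY_PATH="$JAVA_LIBRARY_PATH"
  fi
fi

echo "$LD_LIBRARY_PATH"
```

Appending (rather than replacing) keeps any paths the user already had on LD_LIBRARY_PATH working.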

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8783) Improve RPC.Server's digest auth

2012-09-11 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8783?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13453026#comment-13453026
 ] 

Hadoop QA commented on HADOOP-8783:
---

+1 overall.  Here are the results of testing the latest attachment 
  http://issues.apache.org/jira/secure/attachment/12544639/HADOOP-8783.patch
  against trunk revision .

+1 @author.  The patch does not contain any @author tags.

+1 tests included.  The patch appears to include 1 new or modified test 
files.

+1 javac.  The applied patch does not increase the total number of javac 
compiler warnings.

+1 javadoc.  The javadoc tool did not generate any warning messages.

+1 eclipse:eclipse.  The patch built with eclipse:eclipse.

+1 findbugs.  The patch does not introduce any new Findbugs (version 1.3.9) 
warnings.

+1 release audit.  The applied patch does not increase the total number of 
release audit warnings.

+1 core tests.  The patch passed unit tests in 
hadoop-common-project/hadoop-common.

+1 contrib tests.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/1437//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/1437//console

This message is automatically generated.

> Improve RPC.Server's digest auth
> 
>
> Key: HADOOP-8783
> URL: https://issues.apache.org/jira/browse/HADOOP-8783
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: ipc, security
>Affects Versions: 2.0.0-alpha, 3.0.0
>Reporter: Daryn Sharp
>Assignee: Daryn Sharp
> Attachments: HADOOP-8783.patch, HADOOP-8783.patch
>
>
> RPC.Server should always allow digest auth (tokens) if a secret manager is 
> present.
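The behavior described above can be sketched as follows. This is a hedged illustration, not Hadoop's actual RPC.Server code: `enabledAuthMethods` and the `hasSecretManager` flag are hypothetical stand-ins for "the server was constructed with a SecretManager".

```java
// Hedged sketch (not Hadoop's actual code): the server should advertise
// token/digest auth whenever a secret manager exists, in addition to
// whatever auth method was configured.
import java.util.EnumSet;

public class RpcAuth {
    enum AuthMethod { SIMPLE, KERBEROS, TOKEN }

    // hasSecretManager stands in for "RPC.Server was given a SecretManager".
    static EnumSet<AuthMethod> enabledAuthMethods(AuthMethod configured,
                                                  boolean hasSecretManager) {
        EnumSet<AuthMethod> methods = EnumSet.of(configured);
        if (hasSecretManager) {
            methods.add(AuthMethod.TOKEN); // always allow digest (token) auth
        }
        return methods;
    }

    public static void main(String[] args) {
        // With a secret manager, a Kerberos-configured server also accepts tokens.
        System.out.println(enabledAuthMethods(AuthMethod.KERBEROS, true));
    }
}
```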



[jira] [Commented] (HADOOP-8780) Update DeprecatedProperties apt file

2012-09-11 Thread Tom White (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8780?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13453019#comment-13453019
 ] 

Tom White commented on HADOOP-8780:
---

I generated the documentation and it looked fine. +1 pending Jenkins. 

> Update DeprecatedProperties apt file
> 
>
> Key: HADOOP-8780
> URL: https://issues.apache.org/jira/browse/HADOOP-8780
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Ahmed Radwan
>Assignee: Ahmed Radwan
> Attachments: HADOOP-8780.patch, HADOOP-8780_rev2.patch
>
>
> The current list of deprecated properties is not up to date. I'll upload a 
> patch momentarily.



[jira] [Updated] (HADOOP-8783) Improve RPC.Server's digest auth

2012-09-11 Thread Daryn Sharp (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8783?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daryn Sharp updated HADOOP-8783:


Attachment: HADOOP-8783.patch

Removing an HDFS change that was accidentally included in the patch.

> Improve RPC.Server's digest auth
> 
>
> Key: HADOOP-8783
> URL: https://issues.apache.org/jira/browse/HADOOP-8783
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: ipc, security
>Affects Versions: 2.0.0-alpha, 3.0.0
>Reporter: Daryn Sharp
>Assignee: Daryn Sharp
> Attachments: HADOOP-8783.patch, HADOOP-8783.patch
>
>
> RPC.Server should always allow digest auth (tokens) if a secret manager is 
> present.



[jira] [Commented] (HADOOP-8781) hadoop-config.sh should add JAVA_LIBRARY_PATH to LD_LIBRARY_PATH

2012-09-11 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8781?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13452982#comment-13452982
 ] 

Hudson commented on HADOOP-8781:


Integrated in Hadoop-Hdfs-trunk #1162 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1162/])
HADOOP-8781. hadoop-config.sh should add JAVA_LIBRARY_PATH to 
LD_LIBRARY_PATH. (tucu) (Revision 1383142)

 Result = FAILURE
tucu : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1383142
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/bin/hadoop-config.sh


> hadoop-config.sh should add JAVA_LIBRARY_PATH to LD_LIBRARY_PATH
> 
>
> Key: HADOOP-8781
> URL: https://issues.apache.org/jira/browse/HADOOP-8781
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: scripts
>Affects Versions: 2.0.2-alpha
>Reporter: Alejandro Abdelnur
>Assignee: Alejandro Abdelnur
> Fix For: 1.2.0, 2.0.2-alpha
>
> Attachments: HADOOP-8781-branch1.patch, HADOOP-8781-branch1.patch, 
> HADOOP-8781.patch, HADOOP-8781.patch
>
>
> The Snappy shared object fails to load properly if LD_LIBRARY_PATH does not 
> include the path where the Snappy library is located. This is observed in 
> setups that don't have an independent Snappy installation (i.e., one not 
> installed by Hadoop).



[jira] [Commented] (HADOOP-8786) HttpServer continues to start even if AuthenticationFilter fails to init

2012-09-11 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8786?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13452983#comment-13452983
 ] 

Hudson commented on HADOOP-8786:


Integrated in Hadoop-Hdfs-trunk #1162 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1162/])
HADOOP-8786. HttpServer continues to start even if AuthenticationFilter 
fails to init. Contributed by Todd Lipcon. (Revision 1383254)

 Result = FAILURE
todd : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1383254
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/http/HttpServer.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/http/TestServletFilter.java


> HttpServer continues to start even if AuthenticationFilter fails to init
> 
>
> Key: HADOOP-8786
> URL: https://issues.apache.org/jira/browse/HADOOP-8786
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 1.2.0, 3.0.0, 2.0.1-alpha
>Reporter: Todd Lipcon
>Assignee: Todd Lipcon
> Fix For: 3.0.0
>
> Attachments: hadoop-8786.txt
>
>
> As seen in HDFS-3904, if the AuthenticationFilter fails to initialize, the 
> web server will continue to start up. We need to check for context 
> initialization errors after starting the server.



[jira] [Commented] (HADOOP-8786) HttpServer continues to start even if AuthenticationFilter fails to init

2012-09-11 Thread Uma Maheswara Rao G (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8786?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13452827#comment-13452827
 ] 

Uma Maheswara Rao G commented on HADOOP-8786:
-

Sure, I will take a look.

> HttpServer continues to start even if AuthenticationFilter fails to init
> 
>
> Key: HADOOP-8786
> URL: https://issues.apache.org/jira/browse/HADOOP-8786
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 1.2.0, 3.0.0, 2.0.1-alpha
>Reporter: Todd Lipcon
>Assignee: Todd Lipcon
> Fix For: 3.0.0
>
> Attachments: hadoop-8786.txt
>
>
> As seen in HDFS-3904, if the AuthenticationFilter fails to initialize, the 
> web server will continue to start up. We need to check for context 
> initialization errors after starting the server.



[jira] [Commented] (HADOOP-8786) HttpServer continues to start even if AuthenticationFilter fails to init

2012-09-11 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8786?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13452799#comment-13452799
 ] 

Hudson commented on HADOOP-8786:


Integrated in Hadoop-Mapreduce-trunk-Commit #2739 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Commit/2739/])
HADOOP-8786. HttpServer continues to start even if AuthenticationFilter 
fails to init. Contributed by Todd Lipcon. (Revision 1383254)

 Result = FAILURE
todd : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1383254
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/http/HttpServer.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/http/TestServletFilter.java


> HttpServer continues to start even if AuthenticationFilter fails to init
> 
>
> Key: HADOOP-8786
> URL: https://issues.apache.org/jira/browse/HADOOP-8786
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 1.2.0, 3.0.0, 2.0.1-alpha
>Reporter: Todd Lipcon
>Assignee: Todd Lipcon
> Fix For: 3.0.0
>
> Attachments: hadoop-8786.txt
>
>
> As seen in HDFS-3904, if the AuthenticationFilter fails to initialize, the 
> web server will continue to start up. We need to check for context 
> initialization errors after starting the server.



[jira] [Commented] (HADOOP-8767) secondary namenode on slave machines

2012-09-11 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8767?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13452798#comment-13452798
 ] 

Hadoop QA commented on HADOOP-8767:
---

-1 overall.  Here are the results of testing the latest attachment 
  
http://issues.apache.org/jira/secure/attachment/12544601/patch_hadoop-config.sh_slaves.sh_trunk_no_prefix.patch
  against trunk revision .

+1 @author.  The patch does not contain any @author tags.

-1 tests included.  The patch doesn't appear to include any new or modified 
tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

+1 javac.  The applied patch does not increase the total number of javac 
compiler warnings.

+1 javadoc.  The javadoc tool did not generate any warning messages.

+1 eclipse:eclipse.  The patch built with eclipse:eclipse.

+1 findbugs.  The patch does not introduce any new Findbugs (version 1.3.9) 
warnings.

+1 release audit.  The applied patch does not increase the total number of 
release audit warnings.

+1 core tests.  The patch passed unit tests in 
hadoop-common-project/hadoop-common.

+1 contrib tests.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/1435//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/1435//console


> secondary namenode on slave machines
> 
>
> Key: HADOOP-8767
> URL: https://issues.apache.org/jira/browse/HADOOP-8767
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: bin
>Affects Versions: 1.0.3
>Reporter: giovanni delussu
>Priority: Minor
> Fix For: site, 1.0.3
>
> Attachments: patch_hadoop-config.sh_hadoop-1.0.3_fromtar.patch, 
> patch_hadoop-config.sh_slaves.sh_branch1_no_prefix.patch, 
> patch_hadoop-config.sh_slaves.sh_branch1.patch, 
> patch_hadoop-config.sh_slaves.sh_trunk_no_prefix.patch, 
> patch_hadoop-config.sh_slaves.sh_trunk.patch, 
> patch_slaves.sh_hadoop-1.0.3_fromtar.patch
>
>
> When the default value of HADOOP_SLAVES is changed in hadoop-env.sh, starting 
> HDFS (with start-dfs.sh) creates secondary namenodes on all the machines 
> listed in conf/slaves instead of conf/masters.
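The intended behavior can be sketched as below. This is a hedged illustration, not the actual hadoop-config.sh/slaves.sh patch: the `hosts_for` helper is hypothetical, and the point is only that the secondary namenode's host list should come from the masters file even when HADOOP_SLAVES has been overridden.

```shell
# Hypothetical sketch of the intended behavior: the secondary namenode is
# started on the hosts listed in conf/masters, not in the (possibly
# overridden) slaves file used for datanodes and tasktrackers.
HADOOP_CONF_DIR="${HADOOP_CONF_DIR:-conf}"
HADOOP_SLAVES="${HADOOP_SLAVES:-$HADOOP_CONF_DIR/slaves}"  # may be set in hadoop-env.sh

hosts_for() {
  # $1 is the daemon name; secondary namenodes come from the masters file.
  case "$1" in
    secondarynamenode) echo "$HADOOP_CONF_DIR/masters" ;;
    *)                 echo "$HADOOP_SLAVES" ;;
  esac
}

echo "datanode hosts file:          $(hosts_for datanode)"
echo "secondarynamenode hosts file: $(hosts_for secondarynamenode)"
```

The bug described above amounts to the second case being used for every daemon, so overriding HADOOP_SLAVES silently redirected the secondary namenode too.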



[jira] [Reopened] (HADOOP-8786) HttpServer continues to start even if AuthenticationFilter fails to init

2012-09-11 Thread Todd Lipcon (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8786?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Todd Lipcon reopened HADOOP-8786:
-


OK, reopening for backport. Do you have time to do them?

> HttpServer continues to start even if AuthenticationFilter fails to init
> 
>
> Key: HADOOP-8786
> URL: https://issues.apache.org/jira/browse/HADOOP-8786
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 1.2.0, 3.0.0, 2.0.1-alpha
>Reporter: Todd Lipcon
>Assignee: Todd Lipcon
> Fix For: 3.0.0
>
> Attachments: hadoop-8786.txt
>
>
> As seen in HDFS-3904, if the AuthenticationFilter fails to initialize, the 
> web server will continue to start up. We need to check for context 
> initialization errors after starting the server.



[jira] [Commented] (HADOOP-8786) HttpServer continues to start even if AuthenticationFilter fails to init

2012-09-11 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8786?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13452787#comment-13452787
 ] 

Hudson commented on HADOOP-8786:


Integrated in Hadoop-Common-trunk-Commit #2715 (See 
[https://builds.apache.org/job/Hadoop-Common-trunk-Commit/2715/])
HADOOP-8786. HttpServer continues to start even if AuthenticationFilter 
fails to init. Contributed by Todd Lipcon. (Revision 1383254)

 Result = SUCCESS
todd : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1383254
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/http/HttpServer.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/http/TestServletFilter.java


> HttpServer continues to start even if AuthenticationFilter fails to init
> 
>
> Key: HADOOP-8786
> URL: https://issues.apache.org/jira/browse/HADOOP-8786
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 1.2.0, 3.0.0, 2.0.1-alpha
>Reporter: Todd Lipcon
>Assignee: Todd Lipcon
> Fix For: 3.0.0
>
> Attachments: hadoop-8786.txt
>
>
> As seen in HDFS-3904, if the AuthenticationFilter fails to initialize, the 
> web server will continue to start up. We need to check for context 
> initialization errors after starting the server.
