[jira] [Created] (HADOOP-9307) BufferedFSInputStream.read returns wrong results after certain seeks

2013-02-14 Thread Todd Lipcon (JIRA)
Todd Lipcon created HADOOP-9307:
---

 Summary: BufferedFSInputStream.read returns wrong results after 
certain seeks
 Key: HADOOP-9307
 URL: https://issues.apache.org/jira/browse/HADOOP-9307
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs
Affects Versions: 2.0.2-alpha, 1.1.1
Reporter: Todd Lipcon
Assignee: Todd Lipcon


After certain sequences of seek/read, BufferedFSInputStream can silently return 
data from the wrong part of the file. Further description in first comment 
below.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9307) BufferedFSInputStream.read returns wrong results after certain seeks

2013-02-14 Thread Todd Lipcon (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9307?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13578244#comment-13578244
 ] 

Todd Lipcon commented on HADOOP-9307:
-

An example sequence of seeks and reads which returns the wrong data is as follows, assuming a 4096-byte buffer:

{code}
seek(0);
readFully(1);
{code}

This primes the buffer. After this, the current state of the buffered stream is 
{{pos=0, count=4096, filepos=4096}}

{code}
seek(2000);
{code}

The seek sees that the required data is already in the buffer, and just sets 
{{pos=2000}}

{code}
readFully(1);
{code}

This first copies the remaining bytes from the buffer and sets {{pos=4096}}. 
Then, because 5904 bytes are remaining, and this is larger than the buffer 
size, it copies them directly into the user-supplied output buffer. This leaves 
the state of the stream at {{pos=4096, count=4096, filepos=12000}}

{code}
seek(11000);
{code}

The optimization in BufferedFSInputStream sees that there are 4096 buffered 
bytes, and that this seek is supposedly within the window, assuming that those 
4096 bytes directly precede filepos. So, it erroneously just sets {{pos=3096}}.

The next read will then get the wrong results for the first 1000 bytes -- 
yielding bytes 3096-4096 of the file instead of bytes 11000-12000.
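
To make the failure mode concrete, here is a rough sketch of the kind of seek-within-buffer optimization described above (field and method names are assumptions for illustration, not the actual BufferedFSInputStream source):

{code}
// Illustrative sketch only -- NOT the real BufferedFSInputStream.seek().
// 'pos' and 'count' are the BufferedInputStream cursor and fill level;
// 'in' is the underlying FSInputStream, whose position is "filepos" above.
public void seek(long desired) throws IOException {
  long filepos = ((FSInputStream) in).getPos();
  long start = filepos - this.count;   // assumes the buffer always ends at filepos
  if (desired >= start && desired < filepos) {
    // After a large read that bypassed the buffer, pos == count and the
    // buffered bytes no longer precede filepos, so reusing them here is wrong.
    this.pos = (int) (desired - start);
    return;
  }
  this.pos = 0;                        // otherwise invalidate the buffer
  this.count = 0;
  ((FSInputStream) in).seek(desired);
}
{code}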

 BufferedFSInputStream.read returns wrong results after certain seeks
 

 Key: HADOOP-9307
 URL: https://issues.apache.org/jira/browse/HADOOP-9307
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs
Affects Versions: 1.1.1, 2.0.2-alpha
Reporter: Todd Lipcon
Assignee: Todd Lipcon

 After certain sequences of seek/read, BufferedFSInputStream can silently 
 return data from the wrong part of the file. Further description in first 
 comment below.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9307) BufferedFSInputStream.read returns wrong results after certain seeks

2013-02-14 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9307?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13578275#comment-13578275
 ] 

Steve Loughran commented on HADOOP-9307:


Interesting. I saw some quirks with data reads/writes talking to OpenStack 
Swift, but felt that was eventual-consistency related, not buffering. If you 
look in {{FileSystemContractBaseTest}} there's some updated code for creating 
test datasets and comparing byte arrays in files; that comparison code could be 
teased out, and/or a new test added to the contract: if you seek(offset) then 
readFully(bytes[]), you get the data at 
file[offset]...file[offset+bytes.length-1].

Let me add that to my list of things we assume that a filesystem does.
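
As a rough sketch of what such a contract check could look like (the helper name and fixture fields below are assumptions, not the existing FileSystemContractBaseTest API):

{code}
// Hypothetical contract helper: after seek(offset), readFully(bytes[]) must
// return file[offset]..file[offset + bytes.length - 1]. Assumes the fixture
// provides 'fs', 'path' and the full expected contents of the file.
private void assertSeekThenReadFully(FileSystem fs, Path path,
    byte[] fileContents, int offset, int len) throws IOException {
  byte[] actual = new byte[len];
  FSDataInputStream in = fs.open(path);
  try {
    in.seek(offset);
    in.readFully(actual);
  } finally {
    in.close();
  }
  for (int i = 0; i < len; i++) {
    assertEquals("byte at file offset " + (offset + i),
        fileContents[offset + i], actual[i]);
  }
}
{code}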


 BufferedFSInputStream.read returns wrong results after certain seeks
 

 Key: HADOOP-9307
 URL: https://issues.apache.org/jira/browse/HADOOP-9307
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs
Affects Versions: 1.1.1, 2.0.2-alpha
Reporter: Todd Lipcon
Assignee: Todd Lipcon

 After certain sequences of seek/read, BufferedFSInputStream can silently 
 return data from the wrong part of the file. Further description in first 
 comment below.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9117) replace protoc ant plugin exec with a maven plugin

2013-02-14 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9117?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13578286#comment-13578286
 ] 

Hudson commented on HADOOP-9117:


Integrated in Hadoop-Yarn-trunk #127 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/127/])
HADOOP-9117. replace protoc ant plugin exec with a maven plugin. (tucu) 
(Revision 1445956)

 Result = SUCCESS
tucu : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1445956
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/dev-support/findbugsExcludeFile.xml
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/pom.xml
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/pom.xml
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/contrib/bkjournal/pom.xml
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-common/pom.xml
* 
/hadoop/common/trunk/hadoop-maven-plugins/src/main/java/org/apache/hadoop/maven/plugin/protoc
* 
/hadoop/common/trunk/hadoop-maven-plugins/src/main/java/org/apache/hadoop/maven/plugin/protoc/ProtocMojo.java
* /hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/pom.xml
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/pom.xml
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/pom.xml
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/pom.xml


 replace protoc ant plugin exec with a maven plugin
 --

 Key: HADOOP-9117
 URL: https://issues.apache.org/jira/browse/HADOOP-9117
 Project: Hadoop Common
  Issue Type: Improvement
  Components: build
Affects Versions: 2.0.2-alpha
Reporter: Alejandro Abdelnur
Assignee: Alejandro Abdelnur
 Fix For: 2.0.4-beta

 Attachments: HADOOP-9117.patch, HADOOP-9117.patch, HADOOP-9117.patch, 
 HADOOP-9117.patch, HADOOP-9117.patch, HADOOP-9117.patch


 The protoc compiler is currently invoked using the ant plugin exec task. There 
 is a bug in the ant plugin exec task: it does not consume STDOUT or STDERR 
 appropriately, making the build stop sometimes (you need to press Enter to 
 continue).
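 For context, a sketch of the kind of stream draining a Maven mojo would need when exec'ing protoc (exception handling elided; the 'command' list and surrounding class are illustrative, not the actual ProtocMojo):
 {code}
 // Illustrative sketch, not the actual ProtocMojo: run protoc and keep
 // draining its output so the child process never blocks on a full pipe,
 // which is the kind of stall described above.
 Process protoc = new ProcessBuilder(command).redirectErrorStream(true).start();
 BufferedReader out = new BufferedReader(
     new InputStreamReader(protoc.getInputStream()));
 String line;
 while ((line = out.readLine()) != null) {
   getLog().info(line);               // consume STDOUT/STDERR as it arrives
 }
 if (protoc.waitFor() != 0) {
   throw new MojoExecutionException("protoc exited with a non-zero status");
 }
 {code}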

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9305) Add support for running the Hadoop client on 64-bit AIX

2013-02-14 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9305?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13578287#comment-13578287
 ] 

Hudson commented on HADOOP-9305:


Integrated in Hadoop-Yarn-trunk #127 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/127/])
HADOOP-9305. Add support for running the Hadoop client on 64-bit AIX. 
Contributed by Aaron T. Myers. (Revision 1445884)

 Result = SUCCESS
atm : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1445884
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/UserGroupInformation.java


 Add support for running the Hadoop client on 64-bit AIX
 ---

 Key: HADOOP-9305
 URL: https://issues.apache.org/jira/browse/HADOOP-9305
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Affects Versions: 2.0.3-alpha
Reporter: Aaron T. Myers
Assignee: Aaron T. Myers
 Fix For: 2.0.4-beta

 Attachments: HADOOP-9305.patch


 HADOOP-9283 added support for running the Hadoop client on AIX, but only with 
 32-bit JREs. This JIRA is to add support for 64-bit JREs as well.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9303) command manual dfsadmin missing entry for restoreFailedStorage option

2013-02-14 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9303?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13578288#comment-13578288
 ] 

Hudson commented on HADOOP-9303:


Integrated in Hadoop-Yarn-trunk #127 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/127/])
HADOOP-9303. command manual dfsadmin missing entry for restoreFailedStorage 
option (Andy Isaacson via tgraves) (Revision 1445656)

 Result = SUCCESS
tgraves : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1445656
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/site/apt/CommandsManual.apt.vm


 command manual dfsadmin missing entry for restoreFailedStorage option
 -

 Key: HADOOP-9303
 URL: https://issues.apache.org/jira/browse/HADOOP-9303
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Thomas Graves
Assignee: Andy Isaacson
 Fix For: 3.0.0, 0.23.7, 2.0.4-beta

 Attachments: hdfs4459.txt


 When generating the latest site docs, the -restoreFailedStorage option doesn't 
 show up under the dfsadmin section of commands_manual.html.
 Also, it appears the table header is concatenated with the first row:
 COMMAND_OPTION -report

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9297) remove old record IO generation and tests

2013-02-14 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9297?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13578289#comment-13578289
 ] 

Hudson commented on HADOOP-9297:


Integrated in Hadoop-Yarn-trunk #127 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/127/])
HADOOP-9297. remove old record IO generation and tests. (tucu) (Revision 
1446044)

 Result = SUCCESS
tucu : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1446044
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/pom.xml
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/ddl/buffer.jr
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/ddl/int.jr
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/ddl/string.jr
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/ddl/test.jr
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/record/FromCpp.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/record/RecordBench.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/record/TestBuffer.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/record/TestRecordIO.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/record/TestRecordVersioning.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/record/ToCpp.java
* 
/hadoop/common/trunk/hadoop-tools/hadoop-streaming/src/test/java/org/apache/hadoop/typedbytes/TestIO.java


 remove old record IO generation and tests
 -

 Key: HADOOP-9297
 URL: https://issues.apache.org/jira/browse/HADOOP-9297
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.0.4-beta
Reporter: Alejandro Abdelnur
Assignee: Alejandro Abdelnur
 Fix For: 2.0.4-beta

 Attachments: HADOOP-9297.patch


 Remove their processing from the common POM and delete the following files:
 {code}
 hadoop-common-project/hadoop-common/src/test/ddl/buffer.jr
 hadoop-common-project/hadoop-common/src/test/ddl/int.jr
 hadoop-common-project/hadoop-common/src/test/ddl/string.jr
 hadoop-common-project/hadoop-common/src/test/ddl/test.jr
 hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/record/FromCpp.java
 hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/record/RecordBench.java
 hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/record/TestBuffer.java
 hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/record/TestRecordIO.java
 hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/record/TestRecordVersioning.java
 hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/record/ToCpp.java
 hadoop-tools/hadoop-streaming/src/test/java/org/apache/hadoop/typedbytes/TestIO.java
 {code}
 All of this code is used exclusively within the files being removed. It does 
 not affect any component in a live cluster.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9302) HDFS docs not linked from top level

2013-02-14 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9302?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13578290#comment-13578290
 ] 

Hudson commented on HADOOP-9302:


Integrated in Hadoop-Yarn-trunk #127 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/127/])
HADOOP-9302. HDFS docs not linked from top level (Andy Isaacson via 
tgraves) (Revision 1445635)

 Result = SUCCESS
tgraves : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1445635
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* /hadoop/common/trunk/hadoop-project/src/site/site.xml


 HDFS docs not linked from top level
 ---

 Key: HADOOP-9302
 URL: https://issues.apache.org/jira/browse/HADOOP-9302
 Project: Hadoop Common
  Issue Type: Bug
  Components: documentation
Affects Versions: 3.0.0, 2.0.3-alpha, 0.23.7
Reporter: Thomas Graves
Assignee: Andy Isaacson
 Fix For: 3.0.0, 0.23.7, 2.0.4-beta

 Attachments: hdfs4460-1.txt, hdfs4460-2.patch, hdfs4460.txt


 HADOOP-9221 and others converted docs to apt format. After that they aren't 
 linked from the top-level menu at http://hadoop.apache.org/docs/current/
 I only see the hadoop commands manual and the Filesystem shell. It used to be 
 that you clicked on, say, the commands manual and you would go to the old-style 
 documentation, which had a menu with links to the Superusers, native 
 libraries, etc., but I don't see that any more since the conversion.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9302) HDFS docs not linked from top level

2013-02-14 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9302?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13578324#comment-13578324
 ] 

Hudson commented on HADOOP-9302:


Integrated in Hadoop-Hdfs-0.23-Build #525 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-0.23-Build/525/])
HADOOP-9302. HDFS docs not linked from top level (Andy Isaacson via 
tgraves) (Revision 1445649)

 Result = SUCCESS
tgraves : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1445649
Files : 
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/CHANGES.txt
* /hadoop/common/branches/branch-0.23/hadoop-project/src/site/site.xml


 HDFS docs not linked from top level
 ---

 Key: HADOOP-9302
 URL: https://issues.apache.org/jira/browse/HADOOP-9302
 Project: Hadoop Common
  Issue Type: Bug
  Components: documentation
Affects Versions: 3.0.0, 2.0.3-alpha, 0.23.7
Reporter: Thomas Graves
Assignee: Andy Isaacson
 Fix For: 3.0.0, 0.23.7, 2.0.4-beta

 Attachments: hdfs4460-1.txt, hdfs4460-2.patch, hdfs4460.txt


 HADOOP-9221 and others converted docs to apt format. After that they aren't 
 linked from the top-level menu at http://hadoop.apache.org/docs/current/
 I only see the hadoop commands manual and the Filesystem shell. It used to be 
 that you clicked on, say, the commands manual and you would go to the old-style 
 documentation, which had a menu with links to the Superusers, native 
 libraries, etc., but I don't see that any more since the conversion.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9303) command manual dfsadmin missing entry for restoreFailedStorage option

2013-02-14 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9303?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13578323#comment-13578323
 ] 

Hudson commented on HADOOP-9303:


Integrated in Hadoop-Hdfs-0.23-Build #525 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-0.23-Build/525/])
HADOOP-9303. command manual dfsadmin missing entry for restoreFailedStorage 
option (Andy Isaacson via tgraves) (Revision 1445658)

 Result = SUCCESS
tgraves : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1445658
Files : 
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/site/apt/CommandsManual.apt.vm


 command manual dfsadmin missing entry for restoreFailedStorage option
 -

 Key: HADOOP-9303
 URL: https://issues.apache.org/jira/browse/HADOOP-9303
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Thomas Graves
Assignee: Andy Isaacson
 Fix For: 3.0.0, 0.23.7, 2.0.4-beta

 Attachments: hdfs4459.txt


 When generating the latest site docs, the -restoreFailedStorage option doesn't 
 show up under the dfsadmin section of commands_manual.html.
 Also, it appears the table header is concatenated with the first row:
 COMMAND_OPTION -report

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9305) Add support for running the Hadoop client on 64-bit AIX

2013-02-14 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9305?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13578341#comment-13578341
 ] 

Hudson commented on HADOOP-9305:


Integrated in Hadoop-Hdfs-trunk #1316 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1316/])
HADOOP-9305. Add support for running the Hadoop client on 64-bit AIX. 
Contributed by Aaron T. Myers. (Revision 1445884)

 Result = FAILURE
atm : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1445884
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/UserGroupInformation.java


 Add support for running the Hadoop client on 64-bit AIX
 ---

 Key: HADOOP-9305
 URL: https://issues.apache.org/jira/browse/HADOOP-9305
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Affects Versions: 2.0.3-alpha
Reporter: Aaron T. Myers
Assignee: Aaron T. Myers
 Fix For: 2.0.4-beta

 Attachments: HADOOP-9305.patch


 HADOOP-9283 added support for running the Hadoop client on AIX, but only with 
 32-bit JREs. This JIRA is to add support for 64-bit JREs as well.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9117) replace protoc ant plugin exec with a maven plugin

2013-02-14 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9117?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13578340#comment-13578340
 ] 

Hudson commented on HADOOP-9117:


Integrated in Hadoop-Hdfs-trunk #1316 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1316/])
HADOOP-9117. replace protoc ant plugin exec with a maven plugin. (tucu) 
(Revision 1445956)

 Result = FAILURE
tucu : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1445956
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/dev-support/findbugsExcludeFile.xml
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/pom.xml
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/pom.xml
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/contrib/bkjournal/pom.xml
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-common/pom.xml
* 
/hadoop/common/trunk/hadoop-maven-plugins/src/main/java/org/apache/hadoop/maven/plugin/protoc
* 
/hadoop/common/trunk/hadoop-maven-plugins/src/main/java/org/apache/hadoop/maven/plugin/protoc/ProtocMojo.java
* /hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/pom.xml
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/pom.xml
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/pom.xml
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/pom.xml


 replace protoc ant plugin exec with a maven plugin
 --

 Key: HADOOP-9117
 URL: https://issues.apache.org/jira/browse/HADOOP-9117
 Project: Hadoop Common
  Issue Type: Improvement
  Components: build
Affects Versions: 2.0.2-alpha
Reporter: Alejandro Abdelnur
Assignee: Alejandro Abdelnur
 Fix For: 2.0.4-beta

 Attachments: HADOOP-9117.patch, HADOOP-9117.patch, HADOOP-9117.patch, 
 HADOOP-9117.patch, HADOOP-9117.patch, HADOOP-9117.patch


 The protoc compiler is currently invoked using the ant plugin exec task. There 
 is a bug in the ant plugin exec task: it does not consume STDOUT or STDERR 
 appropriately, making the build stop sometimes (you need to press Enter to 
 continue).

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9303) command manual dfsadmin missing entry for restoreFailedStorage option

2013-02-14 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9303?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13578342#comment-13578342
 ] 

Hudson commented on HADOOP-9303:


Integrated in Hadoop-Hdfs-trunk #1316 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1316/])
HADOOP-9303. command manual dfsadmin missing entry for restoreFailedStorage 
option (Andy Isaacson via tgraves) (Revision 1445656)

 Result = FAILURE
tgraves : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1445656
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/site/apt/CommandsManual.apt.vm


 command manual dfsadmin missing entry for restoreFailedStorage option
 -

 Key: HADOOP-9303
 URL: https://issues.apache.org/jira/browse/HADOOP-9303
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Thomas Graves
Assignee: Andy Isaacson
 Fix For: 3.0.0, 0.23.7, 2.0.4-beta

 Attachments: hdfs4459.txt


 When generating the latest site docs, the -restoreFailedStorage option doesn't 
 show up under the dfsadmin section of commands_manual.html.
 Also, it appears the table header is concatenated with the first row:
 COMMAND_OPTION -report

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9297) remove old record IO generation and tests

2013-02-14 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9297?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13578343#comment-13578343
 ] 

Hudson commented on HADOOP-9297:


Integrated in Hadoop-Hdfs-trunk #1316 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1316/])
HADOOP-9297. remove old record IO generation and tests. (tucu) (Revision 
1446044)

 Result = FAILURE
tucu : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1446044
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/pom.xml
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/ddl/buffer.jr
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/ddl/int.jr
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/ddl/string.jr
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/ddl/test.jr
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/record/FromCpp.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/record/RecordBench.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/record/TestBuffer.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/record/TestRecordIO.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/record/TestRecordVersioning.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/record/ToCpp.java
* 
/hadoop/common/trunk/hadoop-tools/hadoop-streaming/src/test/java/org/apache/hadoop/typedbytes/TestIO.java


 remove old record IO generation and tests
 -

 Key: HADOOP-9297
 URL: https://issues.apache.org/jira/browse/HADOOP-9297
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.0.4-beta
Reporter: Alejandro Abdelnur
Assignee: Alejandro Abdelnur
 Fix For: 2.0.4-beta

 Attachments: HADOOP-9297.patch


 Remove their processing from the common POM and delete the following files:
 {code}
 hadoop-common-project/hadoop-common/src/test/ddl/buffer.jr
 hadoop-common-project/hadoop-common/src/test/ddl/int.jr
 hadoop-common-project/hadoop-common/src/test/ddl/string.jr
 hadoop-common-project/hadoop-common/src/test/ddl/test.jr
 hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/record/FromCpp.java
 hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/record/RecordBench.java
 hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/record/TestBuffer.java
 hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/record/TestRecordIO.java
 hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/record/TestRecordVersioning.java
 hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/record/ToCpp.java
 hadoop-tools/hadoop-streaming/src/test/java/org/apache/hadoop/typedbytes/TestIO.java
 {code}
 All of this code is used exclusively within the files being removed. It does 
 not affect any component in a live cluster.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9302) HDFS docs not linked from top level

2013-02-14 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9302?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13578344#comment-13578344
 ] 

Hudson commented on HADOOP-9302:


Integrated in Hadoop-Hdfs-trunk #1316 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1316/])
HADOOP-9302. HDFS docs not linked from top level (Andy Isaacson via 
tgraves) (Revision 1445635)

 Result = FAILURE
tgraves : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1445635
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* /hadoop/common/trunk/hadoop-project/src/site/site.xml


 HDFS docs not linked from top level
 ---

 Key: HADOOP-9302
 URL: https://issues.apache.org/jira/browse/HADOOP-9302
 Project: Hadoop Common
  Issue Type: Bug
  Components: documentation
Affects Versions: 3.0.0, 2.0.3-alpha, 0.23.7
Reporter: Thomas Graves
Assignee: Andy Isaacson
 Fix For: 3.0.0, 0.23.7, 2.0.4-beta

 Attachments: hdfs4460-1.txt, hdfs4460-2.patch, hdfs4460.txt


 HADOOP-9221 and others converted docs to apt format. After that they aren't 
 linked from the top-level menu at http://hadoop.apache.org/docs/current/
 I only see the hadoop commands manual and the Filesystem shell. It used to be 
 that you clicked on, say, the commands manual and you would go to the old-style 
 documentation, which had a menu with links to the Superusers, native 
 libraries, etc., but I don't see that any more since the conversion.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9117) replace protoc ant plugin exec with a maven plugin

2013-02-14 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9117?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13578382#comment-13578382
 ] 

Hudson commented on HADOOP-9117:


Integrated in Hadoop-Mapreduce-trunk #1344 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1344/])
HADOOP-9117. replace protoc ant plugin exec with a maven plugin. (tucu) 
(Revision 1445956)

 Result = SUCCESS
tucu : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1445956
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/dev-support/findbugsExcludeFile.xml
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/pom.xml
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/pom.xml
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/contrib/bkjournal/pom.xml
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-common/pom.xml
* 
/hadoop/common/trunk/hadoop-maven-plugins/src/main/java/org/apache/hadoop/maven/plugin/protoc
* 
/hadoop/common/trunk/hadoop-maven-plugins/src/main/java/org/apache/hadoop/maven/plugin/protoc/ProtocMojo.java
* /hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/pom.xml
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/pom.xml
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/pom.xml
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/pom.xml


 replace protoc ant plugin exec with a maven plugin
 --

 Key: HADOOP-9117
 URL: https://issues.apache.org/jira/browse/HADOOP-9117
 Project: Hadoop Common
  Issue Type: Improvement
  Components: build
Affects Versions: 2.0.2-alpha
Reporter: Alejandro Abdelnur
Assignee: Alejandro Abdelnur
 Fix For: 2.0.4-beta

 Attachments: HADOOP-9117.patch, HADOOP-9117.patch, HADOOP-9117.patch, 
 HADOOP-9117.patch, HADOOP-9117.patch, HADOOP-9117.patch


 The protoc compiler is currently invoked using the ant plugin exec task. There 
 is a bug in the ant plugin exec task: it does not consume STDOUT or STDERR 
 appropriately, making the build stop sometimes (you need to press Enter to 
 continue).

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9302) HDFS docs not linked from top level

2013-02-14 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9302?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13578386#comment-13578386
 ] 

Hudson commented on HADOOP-9302:


Integrated in Hadoop-Mapreduce-trunk #1344 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1344/])
HADOOP-9302. HDFS docs not linked from top level (Andy Isaacson via 
tgraves) (Revision 1445635)

 Result = SUCCESS
tgraves : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1445635
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* /hadoop/common/trunk/hadoop-project/src/site/site.xml


 HDFS docs not linked from top level
 ---

 Key: HADOOP-9302
 URL: https://issues.apache.org/jira/browse/HADOOP-9302
 Project: Hadoop Common
  Issue Type: Bug
  Components: documentation
Affects Versions: 3.0.0, 2.0.3-alpha, 0.23.7
Reporter: Thomas Graves
Assignee: Andy Isaacson
 Fix For: 3.0.0, 0.23.7, 2.0.4-beta

 Attachments: hdfs4460-1.txt, hdfs4460-2.patch, hdfs4460.txt


 HADOOP-9221 and others converted docs to apt format. After that they aren't 
 linked from the top-level menu at http://hadoop.apache.org/docs/current/
 I only see the hadoop commands manual and the Filesystem shell. It used to be 
 that you clicked on, say, the commands manual and you would go to the old-style 
 documentation, which had a menu with links to the Superusers, native 
 libraries, etc., but I don't see that any more since the conversion.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9305) Add support for running the Hadoop client on 64-bit AIX

2013-02-14 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9305?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13578383#comment-13578383
 ] 

Hudson commented on HADOOP-9305:


Integrated in Hadoop-Mapreduce-trunk #1344 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1344/])
HADOOP-9305. Add support for running the Hadoop client on 64-bit AIX. 
Contributed by Aaron T. Myers. (Revision 1445884)

 Result = SUCCESS
atm : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1445884
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/UserGroupInformation.java


 Add support for running the Hadoop client on 64-bit AIX
 ---

 Key: HADOOP-9305
 URL: https://issues.apache.org/jira/browse/HADOOP-9305
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Affects Versions: 2.0.3-alpha
Reporter: Aaron T. Myers
Assignee: Aaron T. Myers
 Fix For: 2.0.4-beta

 Attachments: HADOOP-9305.patch


 HADOOP-9283 added support for running the Hadoop client on AIX, but only with 
 32-bit JREs. This JIRA is to add support for 64-bit JREs as well.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9303) command manual dfsadmin missing entry for restoreFailedStorage option

2013-02-14 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9303?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13578384#comment-13578384
 ] 

Hudson commented on HADOOP-9303:


Integrated in Hadoop-Mapreduce-trunk #1344 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1344/])
HADOOP-9303. command manual dfsadmin missing entry for restoreFailedStorage 
option (Andy Isaacson via tgraves) (Revision 1445656)

 Result = SUCCESS
tgraves : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1445656
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/site/apt/CommandsManual.apt.vm


 command manual dfsadmin missing entry for restoreFailedStorage option
 -

 Key: HADOOP-9303
 URL: https://issues.apache.org/jira/browse/HADOOP-9303
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Thomas Graves
Assignee: Andy Isaacson
 Fix For: 3.0.0, 0.23.7, 2.0.4-beta

 Attachments: hdfs4459.txt


 When generating the latest site docs, the -restoreFailedStorage option doesn't 
 show up under the dfsadmin section of commands_manual.html.
 Also, it appears the table header is concatenated with the first row:
 COMMAND_OPTION -report

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9297) remove old record IO generation and tests

2013-02-14 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9297?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13578385#comment-13578385
 ] 

Hudson commented on HADOOP-9297:


Integrated in Hadoop-Mapreduce-trunk #1344 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1344/])
HADOOP-9297. remove old record IO generation and tests. (tucu) (Revision 
1446044)

 Result = SUCCESS
tucu : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1446044
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/pom.xml
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/ddl/buffer.jr
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/ddl/int.jr
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/ddl/string.jr
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/ddl/test.jr
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/record/FromCpp.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/record/RecordBench.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/record/TestBuffer.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/record/TestRecordIO.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/record/TestRecordVersioning.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/record/ToCpp.java
* 
/hadoop/common/trunk/hadoop-tools/hadoop-streaming/src/test/java/org/apache/hadoop/typedbytes/TestIO.java


 remove old record IO generation and tests
 -

 Key: HADOOP-9297
 URL: https://issues.apache.org/jira/browse/HADOOP-9297
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.0.4-beta
Reporter: Alejandro Abdelnur
Assignee: Alejandro Abdelnur
 Fix For: 2.0.4-beta

 Attachments: HADOOP-9297.patch


 Remove their processing from the common POM and delete the following files:
 {code}
 hadoop-common-project/hadoop-common/src/test/ddl/buffer.jr
 hadoop-common-project/hadoop-common/src/test/ddl/int.jr
 hadoop-common-project/hadoop-common/src/test/ddl/string.jr
 hadoop-common-project/hadoop-common/src/test/ddl/test.jr
 hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/record/FromCpp.java
 hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/record/RecordBench.java
 hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/record/TestBuffer.java
 hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/record/TestRecordIO.java
 hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/record/TestRecordVersioning.java
 hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/record/ToCpp.java
 hadoop-tools/hadoop-streaming/src/test/java/org/apache/hadoop/typedbytes/TestIO.java
 {code}
 All of this code is used exclusively within the files being removed. It does 
 not affect any component in a live cluster.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9154) SortedMapWritable#putAll() doesn't add key/value classes to the map

2013-02-14 Thread Tom White (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9154?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tom White updated HADOOP-9154:
--

   Resolution: Fixed
Fix Version/s: 2.0.4-beta
   1.2.0
 Hadoop Flags: Reviewed
   Status: Resolved  (was: Patch Available)

I just committed this. Thanks Karthik!

 SortedMapWritable#putAll() doesn't add key/value classes to the map
 ---

 Key: HADOOP-9154
 URL: https://issues.apache.org/jira/browse/HADOOP-9154
 Project: Hadoop Common
  Issue Type: Bug
  Components: io
Affects Versions: 1.1.1, 2.0.2-alpha
Reporter: Karthik Kambatla
Assignee: Karthik Kambatla
 Fix For: 1.2.0, 2.0.4-beta

 Attachments: HADOOP-9124.patch, hadoop-9154-branch1.patch, 
 hadoop-9154-draft.patch, hadoop-9154-draft.patch, hadoop-9154.patch, 
 hadoop-9154.patch, hadoop-9154.patch, hadoop-9154.patch, hadoop-9154.patch


 In the following code from {{SortedMapWritable}}, #putAll() doesn't add 
 key/value classes to the class-id maps.
 {code}
   @Override
   public Writable put(WritableComparable key, Writable value) {
 addToMap(key.getClass());
 addToMap(value.getClass());
 return instance.put(key, value);
   }
   @Override
   public void putAll(Map<? extends WritableComparable, ? extends Writable> t) {
 for (Map.Entry<? extends WritableComparable, ? extends Writable> e :
   t.entrySet()) {
   
   instance.put(e.getKey(), e.getValue());
 }
   }
 {code}
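 A sketch of how putAll() could register the classes the same way put() does (illustrative only, not necessarily the committed patch):
 {code}
 @Override
 public void putAll(Map<? extends WritableComparable, ? extends Writable> t) {
   for (Map.Entry<? extends WritableComparable, ? extends Writable> e : t.entrySet()) {
     // register the key/value classes in the class-id maps, as put() does
     addToMap(e.getKey().getClass());
     addToMap(e.getValue().getClass());
     instance.put(e.getKey(), e.getValue());
   }
 }
 {code}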

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9154) SortedMapWritable#putAll() doesn't add key/value classes to the map

2013-02-14 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9154?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13578393#comment-13578393
 ] 

Hudson commented on HADOOP-9154:


Integrated in Hadoop-trunk-Commit #3361 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/3361/])
HADOOP-9154. SortedMapWritable#putAll() doesn't add key/value classes to 
the map. Contributed by Karthik Kambatla. (Revision 1446183)

 Result = SUCCESS
tomwhite : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1446183
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/AbstractMapWritable.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/SortedMapWritable.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/io/TestSortedMapWritable.java


 SortedMapWritable#putAll() doesn't add key/value classes to the map
 ---

 Key: HADOOP-9154
 URL: https://issues.apache.org/jira/browse/HADOOP-9154
 Project: Hadoop Common
  Issue Type: Bug
  Components: io
Affects Versions: 1.1.1, 2.0.2-alpha
Reporter: Karthik Kambatla
Assignee: Karthik Kambatla
 Fix For: 1.2.0, 2.0.4-beta

 Attachments: HADOOP-9124.patch, hadoop-9154-branch1.patch, 
 hadoop-9154-draft.patch, hadoop-9154-draft.patch, hadoop-9154.patch, 
 hadoop-9154.patch, hadoop-9154.patch, hadoop-9154.patch, hadoop-9154.patch


 In the following code from {{SortedMapWritable}}, #putAll() doesn't add 
 key/value classes to the class-id maps.
 {code}
   @Override
   public Writable put(WritableComparable key, Writable value) {
 addToMap(key.getClass());
 addToMap(value.getClass());
 return instance.put(key, value);
   }
   @Override
   public void putAll(Map<? extends WritableComparable, ? extends Writable> t) {
 for (Map.Entry<? extends WritableComparable, ? extends Writable> e :
   t.entrySet()) {
   
   instance.put(e.getKey(), e.getValue());
 }
   }
 {code}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9154) SortedMapWritable#putAll() doesn't add key/value classes to the map

2013-02-14 Thread Thomas Graves (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9154?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas Graves updated HADOOP-9154:
--

Fix Version/s: 0.23.7

 SortedMapWritable#putAll() doesn't add key/value classes to the map
 ---

 Key: HADOOP-9154
 URL: https://issues.apache.org/jira/browse/HADOOP-9154
 Project: Hadoop Common
  Issue Type: Bug
  Components: io
Affects Versions: 1.1.1, 2.0.2-alpha
Reporter: Karthik Kambatla
Assignee: Karthik Kambatla
 Fix For: 1.2.0, 0.23.7, 2.0.4-beta

 Attachments: HADOOP-9124.patch, hadoop-9154-branch1.patch, 
 hadoop-9154-draft.patch, hadoop-9154-draft.patch, hadoop-9154.patch, 
 hadoop-9154.patch, hadoop-9154.patch, hadoop-9154.patch, hadoop-9154.patch


 In the following code from {{SortedMapWritable}}, #putAll() doesn't add 
 key/value classes to the class-id maps.
 {code}
   @Override
   public Writable put(WritableComparable key, Writable value) {
 addToMap(key.getClass());
 addToMap(value.getClass());
 return instance.put(key, value);
   }
   @Override
   public void putAll(Map<? extends WritableComparable, ? extends Writable> t) {
 for (Map.Entry<? extends WritableComparable, ? extends Writable> e :
   t.entrySet()) {
   
   instance.put(e.getKey(), e.getValue());
 }
   }
 {code}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9307) BufferedFSInputStream.read returns wrong results after certain seeks

2013-02-14 Thread Todd Lipcon (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9307?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13578514#comment-13578514
 ] 

Todd Lipcon commented on HADOOP-9307:
-

Yea, I have a randomized test case that finds this bug within a few seconds - 
basically a copy of one that I wrote for HDFS a couple of years ago. I will 
upload it with a bugfix patch hopefully later today, but maybe early next week 
(pretty busy the next two days). FWIW the fix is simple -- just need to add 
{{(this.pos != this.count)}} into the condition to run the seek-in-buffer 
optimization.
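
A minimal sketch of the guarded optimization with that condition added (paraphrased for illustration, not the actual patch):

{code}
// Sketch only: reuse the buffer on seek solely when it has not been fully
// drained (pos != count). After a read that bypassed the buffer, pos == count
// and the buffered bytes no longer sit directly before the underlying position.
long filepos = ((FSInputStream) in).getPos();
long start = filepos - this.count;
if (this.pos != this.count && desired >= start && desired < filepos) {
  this.pos = (int) (desired - start);
  return;
}
this.pos = 0;
this.count = 0;
((FSInputStream) in).seek(desired);
{code}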

 BufferedFSInputStream.read returns wrong results after certain seeks
 

 Key: HADOOP-9307
 URL: https://issues.apache.org/jira/browse/HADOOP-9307
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs
Affects Versions: 1.1.1, 2.0.2-alpha
Reporter: Todd Lipcon
Assignee: Todd Lipcon

 After certain sequences of seek/read, BufferedFSInputStream can silently 
 return data from the wrong part of the file. Further description in first 
 comment below.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (HADOOP-9308) Typo in javadoc for IdentityMapper class

2013-02-14 Thread Adam Monsen (JIRA)
Adam Monsen created HADOOP-9308:
---

 Summary: Typo in javadoc for IdentityMapper class
 Key: HADOOP-9308
 URL: https://issues.apache.org/jira/browse/HADOOP-9308
 Project: Hadoop Common
  Issue Type: Bug
  Components: documentation
Reporter: Adam Monsen


IdentityMapper.map() is incorrectly documented as the identify function.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9308) Typo in javadoc for IdentityMapper class

2013-02-14 Thread Adam Monsen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9308?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adam Monsen updated HADOOP-9308:


Attachment: HADOOP-9308.patch

 Typo in javadoc for IdentityMapper class
 

 Key: HADOOP-9308
 URL: https://issues.apache.org/jira/browse/HADOOP-9308
 Project: Hadoop Common
  Issue Type: Bug
  Components: documentation
Reporter: Adam Monsen
 Attachments: HADOOP-9308.patch


 IdentityMapper.map() is incorrectly documented as the identify function.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9308) Typo in javadoc for IdentityMapper class

2013-02-14 Thread Adam Monsen (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9308?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13578562#comment-13578562
 ] 

Adam Monsen commented on HADOOP-9308:
-

I grant license to the ASF to include the attached patch in ASF works.

 Typo in javadoc for IdentityMapper class
 

 Key: HADOOP-9308
 URL: https://issues.apache.org/jira/browse/HADOOP-9308
 Project: Hadoop Common
  Issue Type: Bug
  Components: documentation
Reporter: Adam Monsen
 Attachments: HADOOP-9308.patch


 IdentityMapper.map() is incorrectly documented as the identify function.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9304) remove addition of avro genreated-sources dirs to build

2013-02-14 Thread Tom White (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9304?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13578564#comment-13578564
 ] 

Tom White commented on HADOOP-9304:
---

+1 I tested with Eclipse.

 remove addition of avro genreated-sources dirs to build
 ---

 Key: HADOOP-9304
 URL: https://issues.apache.org/jira/browse/HADOOP-9304
 Project: Hadoop Common
  Issue Type: Bug
  Components: build
Affects Versions: 2.0.4-beta
Reporter: Alejandro Abdelnur
Assignee: Alejandro Abdelnur
 Attachments: HADOOP-9304.patch


 The avro maven plugin automatically adds those dirs to the source dirs of the 
 module.
 This is just a POM cleanup.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9304) remove addition of avro genreated-sources dirs to build

2013-02-14 Thread Alejandro Abdelnur (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9304?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alejandro Abdelnur updated HADOOP-9304:
---

   Resolution: Fixed
Fix Version/s: 2.0.4-beta
 Hadoop Flags: Reviewed
   Status: Resolved  (was: Patch Available)

committed to trunk and branch-2.

 remove addition of avro genreated-sources dirs to build
 ---

 Key: HADOOP-9304
 URL: https://issues.apache.org/jira/browse/HADOOP-9304
 Project: Hadoop Common
  Issue Type: Bug
  Components: build
Affects Versions: 2.0.4-beta
Reporter: Alejandro Abdelnur
Assignee: Alejandro Abdelnur
 Fix For: 2.0.4-beta

 Attachments: HADOOP-9304.patch


 The avro maven plugin automatically adds those dirs to the source dirs of the 
 module.
 This is just a POM cleanup.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9304) remove addition of avro genreated-sources dirs to build

2013-02-14 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9304?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13578569#comment-13578569
 ] 

Hudson commented on HADOOP-9304:


Integrated in Hadoop-trunk-Commit #3362 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/3362/])
HADOOP-9304. remove addition of avro genreated-sources dirs to build. 
(tucu) (Revision 1446296)

 Result = SUCCESS
tucu : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1446296
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/pom.xml
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/pom.xml


 remove addition of avro genreated-sources dirs to build
 ---

 Key: HADOOP-9304
 URL: https://issues.apache.org/jira/browse/HADOOP-9304
 Project: Hadoop Common
  Issue Type: Bug
  Components: build
Affects Versions: 2.0.4-beta
Reporter: Alejandro Abdelnur
Assignee: Alejandro Abdelnur
 Fix For: 2.0.4-beta

 Attachments: HADOOP-9304.patch


 The avro maven plugin automatically adds those dirs to the source dirs of the 
 module.
 This is just a POM cleanup.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9218) Document the Rpc-wrappers used internally

2013-02-14 Thread Suresh Srinivas (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9218?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13578593#comment-13578593
 ] 

Suresh Srinivas commented on HADOOP-9218:
-

+1 for the change.

 Document the Rpc-wrappers used internally
 -

 Key: HADOOP-9218
 URL: https://issues.apache.org/jira/browse/HADOOP-9218
 Project: Hadoop Common
  Issue Type: Sub-task
Reporter: Sanjay Radia
Assignee: Sanjay Radia
 Attachments: hadoop-9218.patch




--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9172) Balancer failure with nameservice configuration.

2013-02-14 Thread Chu Tong (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9172?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chu Tong updated HADOOP-9172:
-

Attachment: HADOOP-9172.patch

I had the same problem with the balancer on my dev cluster, and this is the 
solution to fix it.

 Balancer failure with nameservice configuration.
 

 Key: HADOOP-9172
 URL: https://issues.apache.org/jira/browse/HADOOP-9172
 Project: Hadoop Common
  Issue Type: Bug
  Components: conf, fs
Affects Versions: 2.0.2-alpha
 Environment: OS: Mac OS X Server 10.6.8/ Linux 2.6.32 x86_64
Reporter: QueryIO
  Labels: hadoop
 Attachments: HADOOP-9172.patch

   Original Estimate: 24h
  Remaining Estimate: 24h

 This set of properties ...
 <property><name>dfs.namenode.https-address.NameNode1</name><value>192.168.0.10:50470</value></property>
 <property><name>dfs.namenode.http-address.NameNode1</name><value>192.168.0.10:50070</value></property>
 <property><name>dfs.namenode.rpc-address.NameNode1</name><value>192.168.0.10:9000</value></property>
 <property><name>dfs.nameservice.id</name><value>NameNode1</value></property>
 <property><name>dfs.nameservices</name><value>NameNode1</value></property>
 gives the following issue while running the balancer ...
 2012-12-27 15:42:36,193 INFO org.apache.hadoop.hdfs.server.balancer.Balancer: 
 namenodes = [hdfs://queryio10.local:9000, hdfs://192.168.0.10:9000]
 2012-12-27 15:42:36,194 INFO org.apache.hadoop.hdfs.server.balancer.Balancer: 
 p = Balancer.Parameters[BalancingPolicy.Node, threshold=10.0]
 2012-12-27 15:42:37,433 INFO org.apache.hadoop.net.NetworkTopology: Adding a 
 new node: /default-rack/192.168.0.10:50010
 2012-12-27 15:42:37,433 INFO org.apache.hadoop.hdfs.server.balancer.Balancer: 
 0 over-utilized: []
 2012-12-27 15:42:37,433 INFO org.apache.hadoop.hdfs.server.balancer.Balancer: 
 0 underutilized: []
 2012-12-27 15:42:37,436 INFO org.apache.hadoop.net.NetworkTopology: Adding a 
 new node: /default-rack/192.168.0.10:50010
 2012-12-27 15:42:37,436 INFO org.apache.hadoop.hdfs.server.balancer.Balancer: 
 0 over-utilized: []
 2012-12-27 15:42:37,436 INFO org.apache.hadoop.hdfs.server.balancer.Balancer: 
 0 underutilized: []
 2012-12-27 15:42:37,570 WARN org.apache.hadoop.hdfs.DFSClient: DataStreamer 
 Exception
 org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.server.namenode.LeaseExpiredException):
  No lease on /system/balancer.id File does not exist. Holder 
 DFSClient_NONMAPREDUCE_1926739478_1 does not have any open files.
   at 
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkLease(FSNamesystem.java:2315)
   at 
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkLease(FSNamesystem.java:2306)
   at 
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:2102)
   at 
 org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:469)
   at 
 org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:294)
   at 
 org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:43138)
   at 
 org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:454)
   at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:910)
   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1694)
   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1690)
   at java.security.AccessController.doPrivileged(Native Method)
   at javax.security.auth.Subject.doAs(Subject.java:396)
   at 
 org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1367)
   at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1688)
   at org.apache.hadoop.ipc.Client.call(Client.java:1164)
   at 
 org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:202)
   at $Proxy10.addBlock(Unknown Source)
   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
   at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
   at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
   at java.lang.reflect.Method.invoke(Method.java:597)
   at 
 org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:164)
   at 
 org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:83)
   at $Proxy10.addBlock(Unknown Source)
   at 
 org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.addBlock(ClientNamenodeProtocolTranslatorPB.java:285)
   at 
 

[jira] [Updated] (HADOOP-9172) Balancer failure with nameservice configuration.

2013-02-14 Thread Chu Tong (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9172?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chu Tong updated HADOOP-9172:
-

Status: Patch Available  (was: Open)

 Balancer failure with nameservice configuration.
 

 Key: HADOOP-9172
 URL: https://issues.apache.org/jira/browse/HADOOP-9172
 Project: Hadoop Common
  Issue Type: Bug
  Components: conf, fs
Affects Versions: 2.0.2-alpha
 Environment: OS: Mac OS X Server 10.6.8/ Linux 2.6.32 x86_64
Reporter: QueryIO
  Labels: hadoop
 Attachments: HADOOP-9172.patch

   Original Estimate: 24h
  Remaining Estimate: 24h

 This set of properties ...
 <property><name>dfs.namenode.https-address.NameNode1</name><value>192.168.0.10:50470</value></property>
 <property><name>dfs.namenode.http-address.NameNode1</name><value>192.168.0.10:50070</value></property>
 <property><name>dfs.namenode.rpc-address.NameNode1</name><value>192.168.0.10:9000</value></property>
 <property><name>dfs.nameservice.id</name><value>NameNode1</value></property>
 <property><name>dfs.nameservices</name><value>NameNode1</value></property>
 gives the following issue while running the balancer ...
 2012-12-27 15:42:36,193 INFO org.apache.hadoop.hdfs.server.balancer.Balancer: 
 namenodes = [hdfs://queryio10.local:9000, hdfs://192.168.0.10:9000]
 2012-12-27 15:42:36,194 INFO org.apache.hadoop.hdfs.server.balancer.Balancer: 
 p = Balancer.Parameters[BalancingPolicy.Node, threshold=10.0]
 2012-12-27 15:42:37,433 INFO org.apache.hadoop.net.NetworkTopology: Adding a 
 new node: /default-rack/192.168.0.10:50010
 2012-12-27 15:42:37,433 INFO org.apache.hadoop.hdfs.server.balancer.Balancer: 
 0 over-utilized: []
 2012-12-27 15:42:37,433 INFO org.apache.hadoop.hdfs.server.balancer.Balancer: 
 0 underutilized: []
 2012-12-27 15:42:37,436 INFO org.apache.hadoop.net.NetworkTopology: Adding a 
 new node: /default-rack/192.168.0.10:50010
 2012-12-27 15:42:37,436 INFO org.apache.hadoop.hdfs.server.balancer.Balancer: 
 0 over-utilized: []
 2012-12-27 15:42:37,436 INFO org.apache.hadoop.hdfs.server.balancer.Balancer: 
 0 underutilized: []
 2012-12-27 15:42:37,570 WARN org.apache.hadoop.hdfs.DFSClient: DataStreamer 
 Exception
 org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.server.namenode.LeaseExpiredException):
  No lease on /system/balancer.id File does not exist. Holder 
 DFSClient_NONMAPREDUCE_1926739478_1 does not have any open files.
   at 
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkLease(FSNamesystem.java:2315)
   at 
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkLease(FSNamesystem.java:2306)
   at 
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:2102)
   at 
 org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:469)
   at 
 org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:294)
   at 
 org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:43138)
   at 
 org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:454)
   at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:910)
   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1694)
   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1690)
   at java.security.AccessController.doPrivileged(Native Method)
   at javax.security.auth.Subject.doAs(Subject.java:396)
   at 
 org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1367)
   at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1688)
   at org.apache.hadoop.ipc.Client.call(Client.java:1164)
   at 
 org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:202)
   at $Proxy10.addBlock(Unknown Source)
   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
   at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
   at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
   at java.lang.reflect.Method.invoke(Method.java:597)
   at 
 org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:164)
   at 
 org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:83)
   at $Proxy10.addBlock(Unknown Source)
   at 
 org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.addBlock(ClientNamenodeProtocolTranslatorPB.java:285)
   at 
 org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.locateFollowingBlock(DFSOutputStream.java:1150)
   at 
 

[jira] [Created] (HADOOP-9309) test failures on Windows due UnsatisfiedLinkError in NativeCodeLoader#buildSupportsSnappy

2013-02-14 Thread Chris Nauroth (JIRA)
Chris Nauroth created HADOOP-9309:
-

 Summary: test failures on Windows due UnsatisfiedLinkError in 
NativeCodeLoader#buildSupportsSnappy
 Key: HADOOP-9309
 URL: https://issues.apache.org/jira/browse/HADOOP-9309
 Project: Hadoop Common
  Issue Type: Bug
  Components: native, util
Affects Versions: trunk-win
Reporter: Chris Nauroth
Assignee: Chris Nauroth


Checking for Snappy support calls native method 
{{NativeCodeLoader#buildSupportsSnappy}}.  This method has not been implemented 
for Windows in hadoop.dll, so it throws {{UnsatisfiedLinkError}}.
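
A minimal sketch of a defensive caller-side check (assumption: the eventual fix is to 
implement the JNI function in hadoop.dll; the wrapper method name below is made up 
for illustration):

{code}
// Treat a missing JNI symbol as "no Snappy support" instead of letting the
// test crash with UnsatisfiedLinkError.
public static boolean snappyUsable() {
  if (!NativeCodeLoader.isNativeCodeLoaded()) {
    return false;                      // no native hadoop library at all
  }
  try {
    return NativeCodeLoader.buildSupportsSnappy();
  } catch (UnsatisfiedLinkError e) {
    return false;                      // library loaded, symbol missing (e.g. on Windows)
  }
}
{code}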

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9309) test failures on Windows due to UnsatisfiedLinkError in NativeCodeLoader#buildSupportsSnappy

2013-02-14 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9309?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth updated HADOOP-9309:
--

Summary: test failures on Windows due to UnsatisfiedLinkError in 
NativeCodeLoader#buildSupportsSnappy  (was: test failures on Windows due 
UnsatisfiedLinkError in NativeCodeLoader#buildSupportsSnappy)

 test failures on Windows due to UnsatisfiedLinkError in 
 NativeCodeLoader#buildSupportsSnappy
 

 Key: HADOOP-9309
 URL: https://issues.apache.org/jira/browse/HADOOP-9309
 Project: Hadoop Common
  Issue Type: Bug
  Components: native, util
Affects Versions: trunk-win
Reporter: Chris Nauroth
Assignee: Chris Nauroth

 Checking for Snappy support calls native method 
 {{NativeCodeLoader#buildSupportsSnappy}}.  This method has not been 
 implemented for Windows in hadoop.dll, so it throws {{UnsatisfiedLinkError}}.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9308) Typo in javadoc for IdentityMapper class

2013-02-14 Thread Suresh Srinivas (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9308?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suresh Srinivas updated HADOOP-9308:


Status: Patch Available  (was: Open)

 Typo in javadoc for IdentityMapper class
 

 Key: HADOOP-9308
 URL: https://issues.apache.org/jira/browse/HADOOP-9308
 Project: Hadoop Common
  Issue Type: Bug
  Components: documentation
Reporter: Adam Monsen
 Attachments: HADOOP-9308.patch


 IdentityMapper.map() is incorrectly documented as the "identify" function.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9232) JniBasedUnixGroupsMappingWithFallback fails on Windows with UnsatisfiedLinkError

2013-02-14 Thread Ivan Mitic (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9232?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ivan Mitic updated HADOOP-9232:
---

Attachment: HADOOP-9232.branch-trunk-win.jnigroups.3.patch

Addressing Chris' comments.

 JniBasedUnixGroupsMappingWithFallback fails on Windows with 
 UnsatisfiedLinkError
 

 Key: HADOOP-9232
 URL: https://issues.apache.org/jira/browse/HADOOP-9232
 Project: Hadoop Common
  Issue Type: Bug
  Components: native, security
Affects Versions: trunk-win
Reporter: Chris Nauroth
Assignee: Ivan Mitic
 Attachments: HADOOP-9232.branch-trunk-win.jnigroups.2.patch, 
 HADOOP-9232.branch-trunk-win.jnigroups.3.patch, 
 HADOOP-9232.branch-trunk-win.jnigroups.patch


 {{JniBasedUnixGroupsMapping}} calls native code which isn't implemented 
 properly for Windows, causing {{UnsatisfiedLinkError}}.  The fallback logic 
 in {{JniBasedUnixGroupsMappingWithFallback}} works by checking if the native 
 code is loaded during startup.  In this case, hadoop.dll is present and 
 loaded, but it doesn't contain the right code.  There will be no attempt to 
 fall back to {{ShellBasedUnixGroupsMapping}}.
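
 A hedged sketch of a call-time fallback (the actual patch may take a different 
 approach); {{jniImpl}} and {{shellImpl}} are illustrative field names only:
 {code}
 // Fall back per lookup instead of deciding once at startup: if hadoop.dll is
 // loaded but lacks the group-lookup symbol, use the shell-based mapping.
 public List<String> getGroups(String user) throws IOException {
   try {
     return jniImpl.getGroups(user);        // JniBasedUnixGroupsMapping
   } catch (UnsatisfiedLinkError e) {
     return shellImpl.getGroups(user);      // ShellBasedUnixGroupsMapping
   }
 }
 {code}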

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9299) kerberos name resolution is kicking in even when kerberos is not configured

2013-02-14 Thread Aaron T. Myers (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9299?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aaron T. Myers updated HADOOP-9299:
---

Target Version/s: 2.0.4-beta
   Fix Version/s: (was: 2.0.3-alpha)

Changing target version to 2.0.4, since 2.0.3 has already been released.

 kerberos name resolution is kicking in even when kerberos is not configured
 ---

 Key: HADOOP-9299
 URL: https://issues.apache.org/jira/browse/HADOOP-9299
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Affects Versions: 2.0.3-alpha
Reporter: Roman Shaposhnik
Priority: Blocker

 Here's what I'm observing on a fully distributed cluster deployed via Bigtop 
 from the RC0 2.0.3-alpha tarball:
 {noformat}
 528077-oozie-tucu-W@mr-node] Error starting action [mr-node]. ErrorType 
 [TRANSIENT], ErrorCode [JA009], Message [JA009: 
 org.apache.hadoop.security.authentication.util.KerberosName$NoMatchingRule: 
 No rules applied to yarn/localhost@LOCALREALM
 at 
 org.apache.hadoop.security.token.delegation.AbstractDelegationTokenIdentifier.<init>(AbstractDelegationTokenIdentifier.java:68)
 at 
 org.apache.hadoop.mapreduce.v2.api.MRDelegationTokenIdentifier.<init>(MRDelegationTokenIdentifier.java:51)
 at 
 org.apache.hadoop.mapreduce.v2.hs.HistoryClientService$HSClientProtocolHandler.getDelegationToken(HistoryClientService.java:336)
 at 
 org.apache.hadoop.mapreduce.v2.api.impl.pb.service.MRClientProtocolPBServiceImpl.getDelegationToken(MRClientProtocolPBServiceImpl.java:210)
 at 
 org.apache.hadoop.yarn.proto.MRClientProtocol$MRClientProtocolService$2.callBlockingMethod(MRClientProtocol.java:240)
 at 
 org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:454)
 at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1014)
 at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1735)
 at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1731)
 at java.security.AccessController.doPrivileged(Native Method)
 at javax.security.auth.Subject.doAs(Subject.java:396)
 at 
 org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1441)
 at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1729)
 Caused by: 
 org.apache.hadoop.security.authentication.util.KerberosName$NoMatchingRule: 
 No rules applied to yarn/localhost@LOCALREALM
 at 
 org.apache.hadoop.security.authentication.util.KerberosName.getShortName(KerberosName.java:378)
 at 
 org.apache.hadoop.security.token.delegation.AbstractDelegationTokenIdentifier.<init>(AbstractDelegationTokenIdentifier.java:66)
 ... 12 more
 ]
 {noformat}
 This is submitting a MapReduce job via Oozie 3.3.1. The reason I think this 
 is a Hadoop issue rather than an Oozie one is that when I hack 
 /etc/krb5.conf to be:
 {noformat}
 [libdefaults]
ticket_lifetime = 600
default_realm = LOCALHOST
default_tkt_enctypes = des3-hmac-sha1 des-cbc-crc
default_tgs_enctypes = des3-hmac-sha1 des-cbc-crc
 [realms]
LOCALHOST = {
kdc = localhost:88
default_domain = .local
}
 [domain_realm]
.local = LOCALHOST
 [logging]
kdc = FILE:/var/log/krb5kdc.log
admin_server = FILE:/var/log/kadmin.log
default = FILE:/var/log/krb5lib.log
 {noformat}
 The issue goes away. 
 Now, once again: Kerberos auth is NOT configured for Hadoop, hence it 
 should NOT pay attention to /etc/krb5.conf to begin with.
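 A minimal sketch of the behaviour being asked for, assuming the guard belongs 
 where the principal is mapped to a short name (the real fix may live elsewhere); 
 the non-secure branch below is illustrative:
 {code}
 // Only run the principal through KerberosName (and hence the auth_to_local
 // rules backed by /etc/krb5.conf) when kerberos security is configured.
 String shortName;
 if (UserGroupInformation.isSecurityEnabled()) {
   shortName = new KerberosName(principal).getShortName(); // may throw NoMatchingRule
 } else {
   shortName = principal.split("[/@]")[0];  // simple auth: just take the primary component
 }
 {code}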

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9172) Balancer failure with nameservice configuration.

2013-02-14 Thread Chu Tong (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9172?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chu Tong updated HADOOP-9172:
-

Labels: balancer hdfs  (was: hadoop)

 Balancer failure with nameservice configuration.
 

 Key: HADOOP-9172
 URL: https://issues.apache.org/jira/browse/HADOOP-9172
 Project: Hadoop Common
  Issue Type: Bug
  Components: conf, fs
Affects Versions: 2.0.2-alpha
 Environment: OS: Mac OS X Server 10.6.8/ Linux 2.6.32 x86_64
Reporter: QueryIO
  Labels: balancer, hdfs
 Attachments: HADOOP-9172.patch

   Original Estimate: 24h
  Remaining Estimate: 24h

 This set of properties ...
 <property><name>dfs.namenode.https-address.NameNode1</name><value>192.168.0.10:50470</value></property>
 <property><name>dfs.namenode.http-address.NameNode1</name><value>192.168.0.10:50070</value></property>
 <property><name>dfs.namenode.rpc-address.NameNode1</name><value>192.168.0.10:9000</value></property>
 <property><name>dfs.nameservice.id</name><value>NameNode1</value></property>
 <property><name>dfs.nameservices</name><value>NameNode1</value></property>
 gives the following issue while running the balancer ...
 2012-12-27 15:42:36,193 INFO org.apache.hadoop.hdfs.server.balancer.Balancer: 
 namenodes = [hdfs://queryio10.local:9000, hdfs://192.168.0.10:9000]
 2012-12-27 15:42:36,194 INFO org.apache.hadoop.hdfs.server.balancer.Balancer: 
 p = Balancer.Parameters[BalancingPolicy.Node, threshold=10.0]
 2012-12-27 15:42:37,433 INFO org.apache.hadoop.net.NetworkTopology: Adding a 
 new node: /default-rack/192.168.0.10:50010
 2012-12-27 15:42:37,433 INFO org.apache.hadoop.hdfs.server.balancer.Balancer: 
 0 over-utilized: []
 2012-12-27 15:42:37,433 INFO org.apache.hadoop.hdfs.server.balancer.Balancer: 
 0 underutilized: []
 2012-12-27 15:42:37,436 INFO org.apache.hadoop.net.NetworkTopology: Adding a 
 new node: /default-rack/192.168.0.10:50010
 2012-12-27 15:42:37,436 INFO org.apache.hadoop.hdfs.server.balancer.Balancer: 
 0 over-utilized: []
 2012-12-27 15:42:37,436 INFO org.apache.hadoop.hdfs.server.balancer.Balancer: 
 0 underutilized: []
 2012-12-27 15:42:37,570 WARN org.apache.hadoop.hdfs.DFSClient: DataStreamer 
 Exception
 org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.server.namenode.LeaseExpiredException):
  No lease on /system/balancer.id File does not exist. Holder 
 DFSClient_NONMAPREDUCE_1926739478_1 does not have any open files.
   at 
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkLease(FSNamesystem.java:2315)
   at 
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkLease(FSNamesystem.java:2306)
   at 
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:2102)
   at 
 org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:469)
   at 
 org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:294)
   at 
 org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:43138)
   at 
 org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:454)
   at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:910)
   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1694)
   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1690)
   at java.security.AccessController.doPrivileged(Native Method)
   at javax.security.auth.Subject.doAs(Subject.java:396)
   at 
 org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1367)
   at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1688)
   at org.apache.hadoop.ipc.Client.call(Client.java:1164)
   at 
 org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:202)
   at $Proxy10.addBlock(Unknown Source)
   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
   at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
   at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
   at java.lang.reflect.Method.invoke(Method.java:597)
   at 
 org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:164)
   at 
 org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:83)
   at $Proxy10.addBlock(Unknown Source)
   at 
 org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.addBlock(ClientNamenodeProtocolTranslatorPB.java:285)
   at 
 org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.locateFollowingBlock(DFSOutputStream.java:1150)
   at 
 

[jira] [Commented] (HADOOP-9308) Typo in javadoc for IdentityMapper class

2013-02-14 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9308?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13578756#comment-13578756
 ] 

Hadoop QA commented on HADOOP-9308:
---

{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12569365/HADOOP-9308.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+0 tests included{color}.  The patch appears to be a 
documentation patch that doesn't require tests.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/2195//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/2195//console

This message is automatically generated.

 Typo in javadoc for IdentityMapper class
 

 Key: HADOOP-9308
 URL: https://issues.apache.org/jira/browse/HADOOP-9308
 Project: Hadoop Common
  Issue Type: Bug
  Components: documentation
Reporter: Adam Monsen
 Attachments: HADOOP-9308.patch


 IdentityMapper.map() is incorrectly documented as the "identify" function.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9232) JniBasedUnixGroupsMappingWithFallback fails on Windows with UnsatisfiedLinkError

2013-02-14 Thread Ivan Mitic (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9232?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13578764#comment-13578764
 ] 

Ivan Mitic commented on HADOOP-9232:


Thanks Chris for the review, all good comments!

My responses are below:

1. The functions are actually slightly different and I wanted to keep them that 
way. The nativeio function throws a NativeIOException (not an IOException). 
Also, if you take a closer look, nativeio exposes some static initialization 
methods which are used to initialize nioe_clazz and nioe_ctor, so everything is 
encapsulated within nativeio and I wanted to keep it this way. Let me know if 
this sounds good. 

2. Yes, this is what we need. The THROW macro definition expects char* (not 
WCHAR*), hence the difference. In nativeio#throw_ie, we use a slightly 
different conversion pattern.

3. Fixed

4. I saw this as well; I inherited the problem from the Linux implementation. 
As you said, it seems that the only side effect could be an extra memory 
allocation, which isn't too bad. 

5. Thanks, fixed

Attached is the updated patch. Let me know if it looks good. 

 JniBasedUnixGroupsMappingWithFallback fails on Windows with 
 UnsatisfiedLinkError
 

 Key: HADOOP-9232
 URL: https://issues.apache.org/jira/browse/HADOOP-9232
 Project: Hadoop Common
  Issue Type: Bug
  Components: native, security
Affects Versions: trunk-win
Reporter: Chris Nauroth
Assignee: Ivan Mitic
 Attachments: HADOOP-9232.branch-trunk-win.jnigroups.2.patch, 
 HADOOP-9232.branch-trunk-win.jnigroups.3.patch, 
 HADOOP-9232.branch-trunk-win.jnigroups.patch


 {{JniBasedUnixGroupsMapping}} calls native code which isn't implemented 
 properly for Windows, causing {{UnsatisfiedLinkError}}.  The fallback logic 
 in {{JniBasedUnixGroupsMappingWithFallback}} works by checking if the native 
 code is loaded during startup.  In this case, hadoop.dll is present and 
 loaded, but it doesn't contain the right code.  There will be no attempt to 
 fall back to {{ShellBasedUnixGroupsMapping}}.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9172) Balancer failure with nameservice configuration.

2013-02-14 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9172?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13578770#comment-13578770
 ] 

Hadoop QA commented on HADOOP-9172:
---

{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12569389/HADOOP-9172.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-hdfs-project/hadoop-hdfs.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/2194//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/2194//console

This message is automatically generated.

 Balancer failure with nameservice configuration.
 

 Key: HADOOP-9172
 URL: https://issues.apache.org/jira/browse/HADOOP-9172
 Project: Hadoop Common
  Issue Type: Bug
  Components: conf, fs
Affects Versions: 2.0.2-alpha
 Environment: OS: Mac OS X Server 10.6.8/ Linux 2.6.32 x86_64
Reporter: QueryIO
  Labels: balancer, hdfs
 Attachments: HADOOP-9172.patch

   Original Estimate: 24h
  Remaining Estimate: 24h

 This set of properties ...
 <property><name>dfs.namenode.https-address.NameNode1</name><value>192.168.0.10:50470</value></property>
 <property><name>dfs.namenode.http-address.NameNode1</name><value>192.168.0.10:50070</value></property>
 <property><name>dfs.namenode.rpc-address.NameNode1</name><value>192.168.0.10:9000</value></property>
 <property><name>dfs.nameservice.id</name><value>NameNode1</value></property>
 <property><name>dfs.nameservices</name><value>NameNode1</value></property>
 gives the following issue while running the balancer ...
 2012-12-27 15:42:36,193 INFO org.apache.hadoop.hdfs.server.balancer.Balancer: 
 namenodes = [hdfs://queryio10.local:9000, hdfs://192.168.0.10:9000]
 2012-12-27 15:42:36,194 INFO org.apache.hadoop.hdfs.server.balancer.Balancer: 
 p = Balancer.Parameters[BalancingPolicy.Node, threshold=10.0]
 2012-12-27 15:42:37,433 INFO org.apache.hadoop.net.NetworkTopology: Adding a 
 new node: /default-rack/192.168.0.10:50010
 2012-12-27 15:42:37,433 INFO org.apache.hadoop.hdfs.server.balancer.Balancer: 
 0 over-utilized: []
 2012-12-27 15:42:37,433 INFO org.apache.hadoop.hdfs.server.balancer.Balancer: 
 0 underutilized: []
 2012-12-27 15:42:37,436 INFO org.apache.hadoop.net.NetworkTopology: Adding a 
 new node: /default-rack/192.168.0.10:50010
 2012-12-27 15:42:37,436 INFO org.apache.hadoop.hdfs.server.balancer.Balancer: 
 0 over-utilized: []
 2012-12-27 15:42:37,436 INFO org.apache.hadoop.hdfs.server.balancer.Balancer: 
 0 underutilized: []
 2012-12-27 15:42:37,570 WARN org.apache.hadoop.hdfs.DFSClient: DataStreamer 
 Exception
 org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.server.namenode.LeaseExpiredException):
  No lease on /system/balancer.id File does not exist. Holder 
 DFSClient_NONMAPREDUCE_1926739478_1 does not have any open files.
   at 
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkLease(FSNamesystem.java:2315)
   at 
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkLease(FSNamesystem.java:2306)
   at 
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:2102)
   at 
 org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:469)
   at 
 org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:294)
   at 
 org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:43138)
   at 
 org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:454)
   at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:910)
   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1694)
   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1690)
   at 

[jira] [Commented] (HADOOP-8989) hadoop dfs -find feature

2013-02-14 Thread Jonathan Allen (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8989?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13578772#comment-13578772
 ] 

Jonathan Allen commented on HADOOP-8989:


Luke - a couple of questions about your comments:
1) Where are you thinking of here? I can't find any @inheritDoc on an inner 
class, so I feel I'm missing something. I notice @inheritDoc isn't used very 
much throughout the code; is it generally frowned on?
2) I completely agree with the thought, but this isn't something I can fix 
without changing nearly all of the fs.shell classes, and this JIRA seems the 
wrong place to do that; happy to be guided by you and Daryn if you think I 
should.

 hadoop dfs -find feature
 

 Key: HADOOP-8989
 URL: https://issues.apache.org/jira/browse/HADOOP-8989
 Project: Hadoop Common
  Issue Type: New Feature
Reporter: Marco Nicosia
Assignee: Jonathan Allen
 Attachments: HADOOP-8989.patch, HADOOP-8989.patch, HADOOP-8989.patch, 
 HADOOP-8989.patch, HADOOP-8989.patch, HADOOP-8989.patch, HADOOP-8989.patch


 Both sysadmins and users make frequent use of the unix 'find' command, but 
 Hadoop has no equivalent. Without this, users are writing scripts which make 
 heavy use of hadoop dfs -lsr, and implementing find one-offs. I think hadoop 
 dfs -lsr is somewhat taxing on the NameNode, and a really slow experience on 
 the client side. Possibly an in-NameNode find operation would be only a bit 
 more taxing on the NameNode, but significantly faster from the client's point 
 of view?
 The minimum set of options I can think of which would make a Hadoop find 
 command generally useful is (in priority order):
 * -type (file or directory, for now)
 * -atime/-ctime/-mtime (... and -creationtime?) (both + and - arguments)
 * -print0 (for piping to xargs -0)
 * -depth
 * -owner/-group (and -nouser/-nogroup)
 * -name (allowing for shell pattern, or even regex?)
 * -perm
 * -size
 One possible special case, but could possibly be really cool if it ran from 
 within the NameNode:
 * -delete
 The hadoop dfs -lsr | hadoop dfs -rm cycle is really, really slow.
 Lower priority, some people do use operators, mostly to execute -or searches 
 such as:
 * find / \(-nouser -or -nogroup\)
 Finally, I thought I'd include a link to the [Posix spec for 
 find|http://www.opengroup.org/onlinepubs/009695399/utilities/find.html]
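 For context, a minimal sketch of the kind of client-side one-off people write 
 today (assumptions: only a name pattern, recursion via listStatus; the class 
 and method names are made up):
 {code}
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.fs.FileStatus;
 import org.apache.hadoop.fs.FileSystem;
 import org.apache.hadoop.fs.Path;

 // Walks the namespace from the client, one listStatus call per directory;
 // exactly the NameNode load and latency a server-side find could avoid.
 public class MiniFind {
   public static void find(FileSystem fs, Path dir, String nameRegex) throws Exception {
     for (FileStatus st : fs.listStatus(dir)) {
       if (st.getPath().getName().matches(nameRegex)) {
         System.out.println(st.getPath());
       }
       if (st.isDirectory()) {
         find(fs, st.getPath(), nameRegex);   // recurse, like 'find -name'
       }
     }
   }

   public static void main(String[] args) throws Exception {
     FileSystem fs = FileSystem.get(new Configuration());
     find(fs, new Path(args[0]), args[1]);    // e.g. MiniFind /user '.*\.log'
   }
 }
 {code}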

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9218) Document the Rpc-wrappers used internally

2013-02-14 Thread Sanjay Radia (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9218?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sanjay Radia updated HADOOP-9218:
-

Resolution: Fixed
Status: Resolved  (was: Patch Available)

 Document the Rpc-wrappers used internally
 -

 Key: HADOOP-9218
 URL: https://issues.apache.org/jira/browse/HADOOP-9218
 Project: Hadoop Common
  Issue Type: Sub-task
Reporter: Sanjay Radia
Assignee: Sanjay Radia
 Attachments: hadoop-9218.patch




--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9218) Document the Rpc-wrappers used internally

2013-02-14 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9218?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13578904#comment-13578904
 ] 

Hudson commented on HADOOP-9218:


Integrated in Hadoop-trunk-Commit #3363 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/3363/])
HADOOP-9218 Document the Rpc-wrappers used internally (sanjay Radia) 
(Revision 1446428)

 Result = SUCCESS
sradia : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1446428
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/ProtobufRpcEngine.java


 Document the Rpc-wrappers used internally
 -

 Key: HADOOP-9218
 URL: https://issues.apache.org/jira/browse/HADOOP-9218
 Project: Hadoop Common
  Issue Type: Sub-task
Reporter: Sanjay Radia
Assignee: Sanjay Radia
 Attachments: hadoop-9218.patch




--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira