[jira] [Commented] (HDFS-3921) NN will prematurely consider blocks missing when entering active state while still in safe mode

2012-11-07 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3921?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13492988#comment-13492988
 ] 

Hadoop QA commented on HDFS-3921:
---------------------------------

{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12552577/HDFS-3921.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-hdfs-project/hadoop-hdfs.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/3465//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/3465//console

This message is automatically generated.

> NN will prematurely consider blocks missing when entering active state while 
> still in safe mode
> --------------------------------------------------------------------------
>
> Key: HDFS-3921
> URL: https://issues.apache.org/jira/browse/HDFS-3921
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.0.2-alpha
>Reporter: Stephen Chu
>Assignee: Aaron T. Myers
> Attachments: HDFS-3921.patch
>
>
> I shut down all the HDFS daemons in a Highly Available (automatic failover) 
> cluster.
> Then I started one NN and it transitioned to active. No DNs were started, 
> and I saw the red warning link on the NN web UI:
> WARNING : There are 36 missing blocks. Please check the logs or run fsck in 
> order to identify the missing blocks.
> I clicked this to go to the corrupt_files.jsp page, which ran into the 
> following error:
> {noformat}
> HTTP ERROR 500
> Problem accessing /corrupt_files.jsp. Reason:
> Cannot run listCorruptFileBlocks because replication queues have not been 
> initialized.
> Caused by:
> java.io.IOException: Cannot run listCorruptFileBlocks because replication 
> queues have not been initialized.
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.listCorruptFileBlocks(FSNamesystem.java:5035)
>   at 
> org.apache.hadoop.hdfs.server.namenode.corrupt_005ffiles_jsp._jspService(corrupt_005ffiles_jsp.java:78)
>   at org.apache.jasper.runtime.HttpJspBase.service(HttpJspBase.java:98)
>   at javax.servlet.http.HttpServlet.service(HttpServlet.java:820)
>   at 
> org.mortbay.jetty.servlet.ServletHolder.handle(ServletHolder.java:511)
>   at 
> org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1221)
>   at 
> org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter.doFilter(StaticUserWebFilter.java:109)
>   at 
> org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
>   at 
> org.apache.hadoop.http.HttpServer$QuotingInputFilter.doFilter(HttpServer.java:1039)
>   at 
> org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
>   at 
> org.mortbay.jetty.servlet.ServletHandler.handle(ServletHandler.java:399)
>   at 
> org.mortbay.jetty.security.SecurityHandler.handle(SecurityHandler.java:216)
>   at 
> org.mortbay.jetty.servlet.SessionHandler.handle(SessionHandler.java:182)
>   at 
> org.mortbay.jetty.handler.ContextHandler.handle(ContextHandler.java:766)
>   at org.mortbay.jetty.webapp.WebAppContext.handle(WebAppContext.java:450)
>   at 
> org.mortbay.jetty.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:230)
>   at 
> org.mortbay.jetty.handler.HandlerWrapper.handle(HandlerWrapper.java:152)
>   at org.mortbay.jetty.Server.handle(Server.java:326)
>   at 
> org.mortbay.jetty.HttpConnection.handleRequest(HttpConnection.java:542)
>   at 
> org.mortbay.jetty.HttpConnection$RequestHandler.headerComplete(HttpConnection.java:928)
>   at org.mortbay.jetty.HttpParser.parseNext(HttpParser.java:549)
>   at org.mortbay.jetty.HttpParser.parseAvailable(HttpParser.java:212)
>   at org.mortbay.jetty.HttpConnection.handle(HttpConnection.java:404)
>   at 
> org.mortbay.io.nio.SelectChannelEndPoint.run(SelectChannelEndPoint.java:410)
>   at 
> org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582)
> {noformat}

[jira] [Commented] (HDFS-4165) Faulty sanity check in FsDirectory.unprotectedSetQuota

2012-11-07 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4165?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13492962#comment-13492962
 ] 

Hadoop QA commented on HDFS-4165:
---------------------------------

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12552588/HDFS-4165.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-hdfs-project/hadoop-hdfs:

  org.apache.hadoop.hdfs.web.TestWebHdfsWithMultipleNameNodes

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/3464//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/3464//console

This message is automatically generated.

> Faulty sanity check in FsDirectory.unprotectedSetQuota
> --------------------------------------------------------------------------
>
> Key: HDFS-4165
> URL: https://issues.apache.org/jira/browse/HDFS-4165
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Binglin Chang
>Assignee: Binglin Chang
>Priority: Trivial
> Attachments: HDFS-4165.patch
>
>
> According to the documentation:
> The quota can have three types of values : (1) 0 or more will set 
> the quota to that value, (2) {@link HdfsConstants#QUOTA_DONT_SET}  implies 
> the quota will not be changed, and (3) {@link HdfsConstants#QUOTA_RESET} 
> implies the quota will be reset. Any other value is a runtime error.
> The sanity check in FsDirectory.unprotectedSetQuota should use 
> {code}
> nsQuota != HdfsConstants.QUOTA_RESET
> {code}
> rather than
> {code}
> nsQuota < HdfsConstants.QUOTA_RESET
> {code}
> Since HdfsConstants.QUOTA_RESET is defined to be -1, the current code causes 
> no actual problem, but it is better to do it right.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-4164) fuse_dfs: add -lrt to the compiler command line on Linux

2012-11-07 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4164?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13492927#comment-13492927
 ] 

Hadoop QA commented on HDFS-4164:
---------------------------------

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12552586/HDFS-4164.001.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-hdfs-project/hadoop-hdfs:

  org.apache.hadoop.hdfs.web.TestWebHdfsWithMultipleNameNodes

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/3463//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/3463//console

This message is automatically generated.

> fuse_dfs: add -lrt to the compiler command line on Linux
> --------------------------------------------------------------------------
>
> Key: HDFS-4164
> URL: https://issues.apache.org/jira/browse/HDFS-4164
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: fuse-dfs
>Affects Versions: 2.0.3-alpha
>Reporter: Colin Patrick McCabe
>Assignee: Colin Patrick McCabe
>Priority: Minor
> Attachments: HDFS-4164.001.patch
>
>
> We need to add -lrt to the compiler command line on Linux in order to use 
> clock_gettime on OpenSuSE 12.1.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HDFS-4108) In a secure cluster, in the HDFS WEBUI , clicking on a datanode in the node list , gives an error

2012-11-07 Thread Arun C Murthy (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4108?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun C Murthy updated HDFS-4108:


Attachment: (was: HDFS-4108-1-2.patch)

> In a secure cluster, in the HDFS WEBUI , clicking on a datanode in the node 
> list , gives an error
> --------------------------------------------------------------------------
>
> Key: HDFS-4108
> URL: https://issues.apache.org/jira/browse/HDFS-4108
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: security, webhdfs
>Affects Versions: 1.1.0
>Reporter: Benoy Antony
>Assignee: Benoy Antony
>Priority: Minor
> Attachments: HDFS-4108-1-1.patch, HDFS-4108-1-1.patch
>
>
> This issue happens in a secure cluster.
> To reproduce :
> Go to the NameNode WEB UI. (dfshealth.jsp)
> Click to bring up the list of LiveNodes  (dfsnodelist.jsp)
> Click on a datanode to bring up the filesystem  web page ( 
> browsedirectory.jsp)
> The page containing the directory listing does not come up.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HDFS-4108) In a secure cluster, in the HDFS WEBUI , clicking on a datanode in the node list , gives an error

2012-11-07 Thread Arun C Murthy (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4108?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun C Murthy updated HDFS-4108:


Comment: was deleted

(was: Ok, I think I figured the problem...

Benoy's patch assumes MAPREDUCE-4661 which isn't in yet - so I rebased it to work 
without that dependency (attached) and committed it so that tonight's build can 
go through with this.

Benoy - pls take a look, if you have reservations we can fix it via a follow on 
patch. Thanks!)

> In a secure cluster, in the HDFS WEBUI , clicking on a datanode in the node 
> list , gives an error
> --------------------------------------------------------------------------
>
> Key: HDFS-4108
> URL: https://issues.apache.org/jira/browse/HDFS-4108
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: security, webhdfs
>Affects Versions: 1.1.0
>Reporter: Benoy Antony
>Assignee: Benoy Antony
>Priority: Minor
> Attachments: HDFS-4108-1-1.patch, HDFS-4108-1-1.patch, 
> HDFS-4108-1-2.patch
>
>
> This issue happens in a secure cluster.
> To reproduce :
> Go to the NameNode WEB UI. (dfshealth.jsp)
> Click to bring up the list of LiveNodes  (dfsnodelist.jsp)
> Click on a datanode to bring up the filesystem  web page ( 
> browsedirectory.jsp)
> The page containing the directory listing does not come up.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HDFS-4108) In a secure cluster, in the HDFS WEBUI , clicking on a datanode in the node list , gives an error

2012-11-07 Thread Arun C Murthy (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4108?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun C Murthy updated HDFS-4108:


Attachment: HDFS-4108-1-2.patch

Ok, I think I figured the problem...

Benoy's patch assumes MAPREDUCE-4661 which isn't in yet - so I rebased it to work 
without that dependency (attached) and committed it so that tonight's build can 
go through with this.

Benoy - pls take a look, if you have reservations we can fix it via a follow on 
patch. Thanks!

> In a secure cluster, in the HDFS WEBUI , clicking on a datanode in the node 
> list , gives an error
> --------------------------------------------------------------------------
>
> Key: HDFS-4108
> URL: https://issues.apache.org/jira/browse/HDFS-4108
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: security, webhdfs
>Affects Versions: 1.1.0
>Reporter: Benoy Antony
>Assignee: Benoy Antony
>Priority: Minor
> Attachments: HDFS-4108-1-1.patch, HDFS-4108-1-1.patch, 
> HDFS-4108-1-2.patch
>
>
> This issue happens in a secure cluster.
> To reproduce :
> Go to the NameNode WEB UI. (dfshealth.jsp)
> Click to bring up the list of LiveNodes  (dfsnodelist.jsp)
> Click on a datanode to bring up the filesystem  web page ( 
> browsedirectory.jsp)
> The page containing the directory listing does not come up.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-4164) fuse_dfs: add -lrt to the compiler command line on Linux

2012-11-07 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4164?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13492905#comment-13492905
 ] 

Colin Patrick McCabe commented on HDFS-4164:


The build failure looks like this:

{code}
 23:43:12   [exec] /usr/bin/gcc   -g -Wall -O2 -D_GNU_SOURCE
-D_REENTRANT -D_FILE_OFFSET_BITS=64 -D_FILE_OFFSET_BITS=64
-I/usr/include/fuse CMakeFiles/fuse_dfs.dir/fuse_dfs.c.o
CMakeFiles/fuse_dfs.dir/fuse_options.c.o
CMakeFiles/fuse_dfs.dir/fuse_connect.c.o
CMakeFiles/fuse_dfs.dir/fuse_impls_access.c.o
CMakeFiles/fuse_dfs.dir/fuse_impls_chmod.c.o
CMakeFiles/fuse_dfs.dir/fuse_impls_chown.c.o
CMakeFiles/fuse_dfs.dir/fuse_impls_create.c.o
CMakeFiles/fuse_dfs.dir/fuse_impls_flush.c.o
CMakeFiles/fuse_dfs.dir/fuse_impls_getattr.c.o
CMakeFiles/fuse_dfs.dir/fuse_impls_mkdir.c.o
CMakeFiles/fuse_dfs.dir/fuse_impls_mknod.c.o
CMakeFiles/fuse_dfs.dir/fuse_impls_open.c.o
CMakeFiles/fuse_dfs.dir/fuse_impls_read.c.o
CMakeFiles/fuse_dfs.dir/fuse_impls_readdir.c.o
CMakeFiles/fuse_dfs.dir/fuse_impls_release.c.o
CMakeFiles/fuse_dfs.dir/fuse_impls_rename.c.o
CMakeFiles/fuse_dfs.dir/fuse_impls_rmdir.c.o
CMakeFiles/fuse_dfs.dir/fuse_impls_statfs.c.o
CMakeFiles/fuse_dfs.dir/fuse_impls_symlink.c.o
CMakeFiles/fuse_dfs.dir/fuse_impls_truncate.c.o
CMakeFiles/fuse_dfs.dir/fuse_impls_unlink.c.o
CMakeFiles/fuse_dfs.dir/fuse_impls_utimens.c.o
CMakeFiles/fuse_dfs.dir/fuse_impls_write.c.o
CMakeFiles/fuse_dfs.dir/fuse_init.c.o
CMakeFiles/fuse_dfs.dir/fuse_stat_struct.c.o
CMakeFiles/fuse_dfs.dir/fuse_trash.c.o
CMakeFiles/fuse_dfs.dir/fuse_users.c.o  -o fuse_dfs -rdynamic -lfuse
/mnt/jenkins/toolchain/JDK6u20-64bit/jre/lib/amd64/server/libjvm.so
../../../target/usr/local/lib/libhdfs.so.0.0.0 -lm -lpthread
/mnt/jenkins/toolchain/JDK6u20-64bit/jre/lib/amd64/server/libjvm.so
-Wl,-rpath,/mnt/jenkins/toolchain/JDK6u20-64bit/jre/lib/amd64/server:/mnt/jenkins/workspace/Bigtop-trunk-Hadoop/label/opensuse12/build/hadoop/rpm/BUILD/hadoop-2.0.2-alpha-src/hadoop-hdfs-project/hadoop-hdfs/target/native/target/usr/local/lib
23:43:12   [exec] make[3]: Leaving directory
`/mnt/jenkins/workspace/Bigtop-trunk-Hadoop/label/opensuse12/build/hadoop/rpm/BUILD/hadoop-2.0.2-alpha-src/hadoop-hdfs-project/hadoop-hdfs/target/native'
23:43:12   [exec] make[2]: Leaving directory
`/mnt/jenkins/workspace/Bigtop-trunk-Hadoop/label/opensuse12/build/hadoop/rpm/BUILD/hadoop-2.0.2-alpha-src/hadoop-hdfs-project/hadoop-hdfs/target/native'
23:43:12   [exec] make[1]: Leaving directory
`/mnt/jenkins/workspace/Bigtop-trunk-Hadoop/label/opensuse12/build/hadoop/rpm/BUILD/hadoop-2.0.2-alpha-src/hadoop-hdfs-project/hadoop-hdfs/target/native'
23:43:13   [exec]
/usr/lib64/gcc/x86_64-suse-linux/4.7/../../../../x86_64-suse-linux/bin/ld:
CMakeFiles/fuse_dfs.dir/fuse_connect.c.o: undefined reference to
symbol 'clock_gettime@@GLIBC_2.2.5'
23:43:13   [exec]
/usr/lib64/gcc/x86_64-suse-linux/4.7/../../../../x86_64-suse-linux/bin/ld:
note: 'clock_gettime@@GLIBC_2.2.5' is defined in DSO /lib64/librt.so.1
so try adding it to the linker command line
23:43:13   [exec] /lib64/librt.so.1: could not read symbols:
Invalid operation
23:43:13   [exec] collect2: error: ld returned 1 exit status
23:43:13   [exec] make[3]: *** [main/native/fuse-dfs/fuse_dfs] Error 1
23:43:13   [exec] make[2]: ***
[main/native/fuse-dfs/CMakeFiles/fuse_dfs.dir/all] Error 2
23:43:13   [exec] make[1]: *** [all] Error 2
{code}

I haven't personally confirmed the fix yet, but confirmation should be coming 
soon.
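
For reference, one plausible shape for the change (a sketch only, assuming the 
fuse_dfs target is defined in the fuse-dfs CMakeLists.txt; the committed patch 
may differ):

{code}
# Sketch -- not necessarily the committed HDFS-4164 change. On Linux with
# older glibc, clock_gettime lives in librt, so link it in explicitly.
if (${CMAKE_SYSTEM_NAME} MATCHES "Linux")
    target_link_libraries(fuse_dfs rt)
endif ()
{code}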

> fuse_dfs: add -lrt to the compiler command line on Linux
> --------------------------------------------------------------------------
>
> Key: HDFS-4164
> URL: https://issues.apache.org/jira/browse/HDFS-4164
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: fuse-dfs
>Affects Versions: 2.0.3-alpha
>Reporter: Colin Patrick McCabe
>Assignee: Colin Patrick McCabe
>Priority: Minor
> Attachments: HDFS-4164.001.patch
>
>
> We need to add -lrt to the compiler command line on Linux in order to use 
> clock_gettime on OpenSuSE 12.1.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-4164) fuse_dfs: add -lrt to the compiler command line on Linux

2012-11-07 Thread Andy Isaacson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4164?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13492887#comment-13492887
 ] 

Andy Isaacson commented on HDFS-4164:
---------------------------------

LGTM. +1.  Does this fix a currently broken build, or is it a prerequisite for 
another change?  If broken, please copy+paste the build failure.  If 
prerequisite, please include this change in the patch which needs it.

> fuse_dfs: add -lrt to the compiler command line on Linux
> --------------------------------------------------------------------------
>
> Key: HDFS-4164
> URL: https://issues.apache.org/jira/browse/HDFS-4164
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: fuse-dfs
>Affects Versions: 2.0.3-alpha
>Reporter: Colin Patrick McCabe
>Assignee: Colin Patrick McCabe
>Priority: Minor
> Attachments: HDFS-4164.001.patch
>
>
> We need to add -lrt to the compiler command line on Linux in order to use 
> clock_gettime on OpenSuSE 12.1.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-4139) fuse-dfs RO mode still allows file truncation

2012-11-07 Thread Andy Isaacson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4139?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13492885#comment-13492885
 ] 

Andy Isaacson commented on HDFS-4139:
---------------------------------

The change seems legit. Please update with manual testing results. +1 once 
that's done.

> fuse-dfs RO mode still allows file truncation
> --------------------------------------------------------------------------
>
> Key: HDFS-4139
> URL: https://issues.apache.org/jira/browse/HDFS-4139
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: fuse-dfs
>Affects Versions: 2.0.2-alpha
>Reporter: Andy Isaacson
>Assignee: Colin Patrick McCabe
> Attachments: HDFS-4139.001.patch
>
>
> Mounting a fuse-dfs in readonly mode with "-oro" still allows the user to 
> truncate files.
> {noformat}
> $ grep fuse /etc/fstab
> hadoop-fuse-dfs#dfs://ubu-cdh-0.local /export/hdfs fuse noauto,ro 0 0
> $ sudo mount /export/hdfs
> $ hdfs dfs -ls /tmp
> ...
> -rw-r--r--   3 ubuntu hadoop  4 2012-11-01 14:18 /tmp/blah.txt
> $ echo foo > /export/hdfs/tmp/blah.txt
> -bash: /export/hdfs/tmp/blah.txt: Permission denied
> $  hdfs dfs -ls /tmp
> ...
> -rw-r--r--   3 ubuntu hadoop  0 2012-11-01 14:28 /tmp/blah.txt
> $ ps ax | grep dfs
> ...
> 13639 ?Ssl0:02 /usr/lib/hadoop/bin/fuse_dfs dfs://ubu-cdh-0.local 
> /export/hdfs -o ro,dev,suid
> {noformat}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HDFS-4165) Faulty sanity check in FsDirectory.unprotectedSetQuota

2012-11-07 Thread Binglin Chang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4165?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Binglin Chang updated HDFS-4165:


Assignee: Binglin Chang
  Status: Patch Available  (was: Open)

> Faulty sanity check in FsDirectory.unprotectedSetQuota
> --------------------------------------------------------------------------
>
> Key: HDFS-4165
> URL: https://issues.apache.org/jira/browse/HDFS-4165
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Binglin Chang
>Assignee: Binglin Chang
>Priority: Trivial
> Attachments: HDFS-4165.patch
>
>
> According to the documentation:
> The quota can have three types of values : (1) 0 or more will set 
> the quota to that value, (2) {@link HdfsConstants#QUOTA_DONT_SET}  implies 
> the quota will not be changed, and (3) {@link HdfsConstants#QUOTA_RESET} 
> implies the quota will be reset. Any other value is a runtime error.
> The sanity check in FsDirectory.unprotectedSetQuota should use 
> {code}
> nsQuota != HdfsConstants.QUOTA_RESET
> {code}
> rather than
> {code}
> nsQuota < HdfsConstants.QUOTA_RESET
> {code}
> Since HdfsConstants.QUOTA_RESET is defined to be -1, the current code causes 
> no actual problem, but it is better to do it right.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HDFS-4165) Faulty sanity check in FsDirectory.unprotectedSetQuota

2012-11-07 Thread Binglin Chang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4165?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Binglin Chang updated HDFS-4165:


Attachment: HDFS-4165.patch

> Faulty sanity check in FsDirectory.unprotectedSetQuota
> --------------------------------------------------------------------------
>
> Key: HDFS-4165
> URL: https://issues.apache.org/jira/browse/HDFS-4165
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Binglin Chang
>Assignee: Binglin Chang
>Priority: Trivial
> Attachments: HDFS-4165.patch
>
>
> According to the documentation:
> The quota can have three types of values : (1) 0 or more will set 
> the quota to that value, (2) {@link HdfsConstants#QUOTA_DONT_SET}  implies 
> the quota will not be changed, and (3) {@link HdfsConstants#QUOTA_RESET} 
> implies the quota will be reset. Any other value is a runtime error.
> The sanity check in FsDirectory.unprotectedSetQuota should use 
> {code}
> nsQuota != HdfsConstants.QUOTA_RESET
> {code}
> rather than
> {code}
> nsQuota < HdfsConstants.QUOTA_RESET
> {code}
> Since HdfsConstants.QUOTA_RESET is defined to be -1, the current code causes 
> no actual problem, but it is better to do it right.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-4162) Some malformed and unquoted HTML strings are returned from datanode web ui

2012-11-07 Thread Jing Zhao (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4162?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13492884#comment-13492884
 ] 

Jing Zhao commented on HDFS-4162:
---------------------------------

The patch looks good, though it would be better to format the test code a 
little (e.g., remove unnecessary blank lines at the beginning and end, and keep 
line lengths <= 80). +1 for the patch.

> Some malformed and unquoted HTML strings are returned from datanode web ui
> --------------------------------------------------------------------------
>
> Key: HDFS-4162
> URL: https://issues.apache.org/jira/browse/HDFS-4162
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: data-node
>Affects Versions: 0.23.4
>Reporter: Derek Dagit
>Assignee: Derek Dagit
>Priority: Minor
> Attachments: HDFS-4162-branch-0.23.patch, HDFS-4162.patch
>
>
> When browsing to the datanode at /browseDirectory.jsp, if a path with HTML 
> characters is requested, the resulting error page echoes back the input 
> unquoted.
> Example:
> http://localhost:50075/browseDirectory.jsp?dir=/&go=go&namenodeInfoPort=50070&nnaddr=localhost%3A9000
> Writes an input element as part of the response:
> 
> - The value of the "value" attribute is not quoted. 
> - An = must follow the "id" attribute name.
> - Element "input" should have a closing tag.
> The output should be something like:
> 
> In addition, if one creates a directory:
> hdfs dfs -put '/some/path/to/'
> Then browsing to the parent of directory '' prints unquoted HTML in the 
> directory names.
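
For illustration (the archived email stripped the original HTML snippets): a 
hypothetical sketch of the class of fix, escaping user-supplied input before 
echoing it into markup. The class and method names here are invented, and 
commons-lang's StringEscapeUtils is assumed to be on the classpath.

{code}
import org.apache.commons.lang.StringEscapeUtils;

// Hypothetical helper, not the HDFS-4162 patch itself: HTML-escape a
// user-supplied value before embedding it, quote the attribute value,
// put an = after the id attribute, and close the element.
public class QuotedInput {
  static String inputElement(String dir) {
    return "<input name=\"dir\" id=\"dir\" value=\""
        + StringEscapeUtils.escapeHtml(dir) + "\"/>";
  }

  public static void main(String[] args) {
    // A path containing HTML metacharacters no longer breaks the markup.
    System.out.println(inputElement("/tmp/\"><script>alert(1)</script>"));
  }
}
{code}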

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (HDFS-4165) Faulty sanity check in FsDirectory.unprotectedSetQuota

2012-11-07 Thread Binglin Chang (JIRA)
Binglin Chang created HDFS-4165:
---------------------------------

 Summary: Faulty sanity check in FsDirectory.unprotectedSetQuota
 Key: HDFS-4165
 URL: https://issues.apache.org/jira/browse/HDFS-4165
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Binglin Chang
Priority: Trivial


According to the documentation:

The quota can have three types of values : (1) 0 or more will set 
the quota to that value, (2) {@link HdfsConstants#QUOTA_DONT_SET}  implies 
the quota will not be changed, and (3) {@link HdfsConstants#QUOTA_RESET} 
implies the quota will be reset. Any other value is a runtime error.

The sanity check in FsDirectory.unprotectedSetQuota should use 

{code}
nsQuota != HdfsConstants.QUOTA_RESET
{code}

rather than

{code}
nsQuota < HdfsConstants.QUOTA_RESET
{code}

Since HdfsConstants.QUOTA_RESET is defined to be -1, the current code causes no 
actual problem, but it is better to do it right.
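
A minimal sketch of the intended check (hedged -- the attached patch may 
differ, and checkNsQuota is an invented name), enforcing the documented 
contract:

{code}
import org.apache.hadoop.hdfs.protocol.HdfsConstants;

// Sketch only. Per the javadoc quoted above: >= 0 sets the quota,
// QUOTA_DONT_SET leaves it unchanged, QUOTA_RESET resets it; anything else
// is an error. Note the != comparison where the current code uses <,
// which is harmless today only because QUOTA_RESET == -1.
class QuotaCheckSketch {
  static void checkNsQuota(long nsQuota) {
    if (nsQuota < 0 && nsQuota != HdfsConstants.QUOTA_DONT_SET
        && nsQuota != HdfsConstants.QUOTA_RESET) {
      throw new IllegalArgumentException("Illegal value for nsQuota: " + nsQuota);
    }
  }
}
{code}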


--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HDFS-4164) fuse_dfs: add -lrt to the compiler command line on Linux

2012-11-07 Thread Colin Patrick McCabe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4164?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin Patrick McCabe updated HDFS-4164:
---------------------------------

Status: Patch Available  (was: Open)

> fuse_dfs: add -lrt to the compiler command line on Linux
> --------------------------------------------------------------------------
>
> Key: HDFS-4164
> URL: https://issues.apache.org/jira/browse/HDFS-4164
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: fuse-dfs
>Affects Versions: 2.0.3-alpha
>Reporter: Colin Patrick McCabe
>Assignee: Colin Patrick McCabe
>Priority: Minor
> Attachments: HDFS-4164.001.patch
>
>
> We need to add -lrt to the compiler command line on Linux in order to use 
> clock_gettime on OpenSuSE 12.1.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HDFS-4164) fuse_dfs: add -lrt to the compiler command line on Linux

2012-11-07 Thread Colin Patrick McCabe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4164?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin Patrick McCabe updated HDFS-4164:
---------------------------------

Attachment: HDFS-4164.001.patch

> fuse_dfs: add -lrt to the compiler command line on Linux
> --------------------------------------------------------------------------
>
> Key: HDFS-4164
> URL: https://issues.apache.org/jira/browse/HDFS-4164
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: fuse-dfs
>Affects Versions: 2.0.3-alpha
>Reporter: Colin Patrick McCabe
>Assignee: Colin Patrick McCabe
>Priority: Minor
> Attachments: HDFS-4164.001.patch
>
>
> We need to add -lrt to the compiler command line on Linux in order to use 
> clock_gettime on OpenSuSE 12.1.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HDFS-4164) fuse_dfs: add -lrt to the compiler command line on Linux

2012-11-07 Thread Colin Patrick McCabe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4164?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin Patrick McCabe updated HDFS-4164:
---------------------------------

Summary: fuse_dfs: add -lrt to the compiler command line on Linux  (was: 
fuse_dfs: add -ltr to the compiler command line on Linux)

> fuse_dfs: add -lrt to the compiler command line on Linux
> --------------------------------------------------------------------------
>
> Key: HDFS-4164
> URL: https://issues.apache.org/jira/browse/HDFS-4164
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: fuse-dfs
>Affects Versions: 2.0.3-alpha
>Reporter: Colin Patrick McCabe
>Assignee: Colin Patrick McCabe
>Priority: Minor
>
> We need to add -lrt to the compiler command line on Linux in order to use 
> clock_gettime on OpenSuSE 12.1.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (HDFS-4164) fuse_dfs: add -ltr to the compiler command line on Linux

2012-11-07 Thread Colin Patrick McCabe (JIRA)
Colin Patrick McCabe created HDFS-4164:
---------------------------------

 Summary: fuse_dfs: add -ltr to the compiler command line on Linux
 Key: HDFS-4164
 URL: https://issues.apache.org/jira/browse/HDFS-4164
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: fuse-dfs
Affects Versions: 2.0.3-alpha
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe
Priority: Minor


We need to add -lrt to the compiler command line on Linux in order to use 
clock_gettime on OpenSuSE 12.1.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-4139) fuse-dfs RO mode still allows file truncation

2012-11-07 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4139?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13492868#comment-13492868
 ] 

Hadoop QA commented on HDFS-4139:
---------------------------------

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12552560/HDFS-4139.001.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-hdfs-project/hadoop-hdfs.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/3462//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/3462//console

This message is automatically generated.

> fuse-dfs RO mode still allows file truncation
> --------------------------------------------------------------------------
>
> Key: HDFS-4139
> URL: https://issues.apache.org/jira/browse/HDFS-4139
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: fuse-dfs
>Affects Versions: 2.0.2-alpha
>Reporter: Andy Isaacson
>Assignee: Colin Patrick McCabe
> Attachments: HDFS-4139.001.patch
>
>
> Mounting a fuse-dfs in readonly mode with "-oro" still allows the user to 
> truncate files.
> {noformat}
> $ grep fuse /etc/fstab
> hadoop-fuse-dfs#dfs://ubu-cdh-0.local /export/hdfs fuse noauto,ro 0 0
> $ sudo mount /export/hdfs
> $ hdfs dfs -ls /tmp
> ...
> -rw-r--r--   3 ubuntu hadoop  4 2012-11-01 14:18 /tmp/blah.txt
> $ echo foo > /export/hdfs/tmp/blah.txt
> -bash: /export/hdfs/tmp/blah.txt: Permission denied
> $  hdfs dfs -ls /tmp
> ...
> -rw-r--r--   3 ubuntu hadoop  0 2012-11-01 14:28 /tmp/blah.txt
> $ ps ax | grep dfs
> ...
> 13639 ?Ssl0:02 /usr/lib/hadoop/bin/fuse_dfs dfs://ubu-cdh-0.local 
> /export/hdfs -o ro,dev,suid
> {noformat}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HDFS-3921) NN will prematurely consider blocks missing when entering active state while still in safe mode

2012-11-07 Thread Aaron T. Myers (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-3921?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aaron T. Myers updated HDFS-3921:
---------------------------------

Status: Patch Available  (was: Open)

> NN will prematurely consider blocks missing when entering active state while 
> still in safe mode
> --------------------------------------------------------------------------
>
> Key: HDFS-3921
> URL: https://issues.apache.org/jira/browse/HDFS-3921
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.0.2-alpha
>Reporter: Stephen Chu
>Assignee: Aaron T. Myers
> Attachments: HDFS-3921.patch
>
>
> I shut down all the HDFS daemons in a Highly Available (automatic failover) 
> cluster.
> Then I started one NN and it transitioned to active. No DNs were started, 
> and I saw the red warning link on the NN web UI:
> WARNING : There are 36 missing blocks. Please check the logs or run fsck in 
> order to identify the missing blocks.
> I clicked this to go to the corrupt_files.jsp page, which ran into the 
> following error:
> {noformat}
> HTTP ERROR 500
> Problem accessing /corrupt_files.jsp. Reason:
> Cannot run listCorruptFileBlocks because replication queues have not been 
> initialized.
> Caused by:
> java.io.IOException: Cannot run listCorruptFileBlocks because replication 
> queues have not been initialized.
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.listCorruptFileBlocks(FSNamesystem.java:5035)
>   at 
> org.apache.hadoop.hdfs.server.namenode.corrupt_005ffiles_jsp._jspService(corrupt_005ffiles_jsp.java:78)
>   at org.apache.jasper.runtime.HttpJspBase.service(HttpJspBase.java:98)
>   at javax.servlet.http.HttpServlet.service(HttpServlet.java:820)
>   at 
> org.mortbay.jetty.servlet.ServletHolder.handle(ServletHolder.java:511)
>   at 
> org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1221)
>   at 
> org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter.doFilter(StaticUserWebFilter.java:109)
>   at 
> org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
>   at 
> org.apache.hadoop.http.HttpServer$QuotingInputFilter.doFilter(HttpServer.java:1039)
>   at 
> org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
>   at 
> org.mortbay.jetty.servlet.ServletHandler.handle(ServletHandler.java:399)
>   at 
> org.mortbay.jetty.security.SecurityHandler.handle(SecurityHandler.java:216)
>   at 
> org.mortbay.jetty.servlet.SessionHandler.handle(SessionHandler.java:182)
>   at 
> org.mortbay.jetty.handler.ContextHandler.handle(ContextHandler.java:766)
>   at org.mortbay.jetty.webapp.WebAppContext.handle(WebAppContext.java:450)
>   at 
> org.mortbay.jetty.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:230)
>   at 
> org.mortbay.jetty.handler.HandlerWrapper.handle(HandlerWrapper.java:152)
>   at org.mortbay.jetty.Server.handle(Server.java:326)
>   at 
> org.mortbay.jetty.HttpConnection.handleRequest(HttpConnection.java:542)
>   at 
> org.mortbay.jetty.HttpConnection$RequestHandler.headerComplete(HttpConnection.java:928)
>   at org.mortbay.jetty.HttpParser.parseNext(HttpParser.java:549)
>   at org.mortbay.jetty.HttpParser.parseAvailable(HttpParser.java:212)
>   at org.mortbay.jetty.HttpConnection.handle(HttpConnection.java:404)
>   at 
> org.mortbay.io.nio.SelectChannelEndPoint.run(SelectChannelEndPoint.java:410)
>   at 
> org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582)
> {noformat}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HDFS-3921) NN will prematurely consider blocks missing when entering active state while still in safe mode

2012-11-07 Thread Aaron T. Myers (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-3921?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aaron T. Myers updated HDFS-3921:
---------------------------------

Attachment: HDFS-3921.patch

Here's a patch which addresses the issue by processing repl queues when 
entering the active state only if the NN has already left startup safemode. If 
it hasn't left startup safemode, then we should just enter the active state 
regardless; repl queues will then be processed later, when the NN automatically 
leaves startup safemode once sufficient DNs have reported.
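
In code terms the change amounts to a guard of roughly this shape (a sketch 
with hypothetical names, not the patch verbatim):

{code}
// Sketch only -- hypothetical method and field names. On transition to
// active, initialize replication queues only if startup safemode has
// already been exited; otherwise defer to the normal safemode-exit path.
class NameNodeStateSketch {
  boolean inStartupSafeMode = true;  // stand-in for the real safemode check

  void enterActiveState() {
    if (!inStartupSafeMode) {
      initializeReplQueues();
    }
    // ...start the rest of the active services either way...
  }

  void leaveStartupSafeMode() {      // called once sufficient DNs report
    inStartupSafeMode = false;
    initializeReplQueues();
  }

  void initializeReplQueues() {
    // scan blocks and queue under-replicated/corrupt ones, etc.
  }
}
{code}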

> NN will prematurely consider blocks missing when entering active state while 
> still in safe mode
> --------------------------------------------------------------------------
>
> Key: HDFS-3921
> URL: https://issues.apache.org/jira/browse/HDFS-3921
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.0.2-alpha
>Reporter: Stephen Chu
>Assignee: Aaron T. Myers
> Attachments: HDFS-3921.patch
>
>
> I shut down all the HDFS daemons in a Highly Available (automatic failover) 
> cluster.
> Then I started one NN and it transitioned to active. No DNs were started, 
> and I saw the red warning link on the NN web UI:
> WARNING : There are 36 missing blocks. Please check the logs or run fsck in 
> order to identify the missing blocks.
> I clicked this to go to the corrupt_files.jsp page, which ran into the 
> following error:
> {noformat}
> HTTP ERROR 500
> Problem accessing /corrupt_files.jsp. Reason:
> Cannot run listCorruptFileBlocks because replication queues have not been 
> initialized.
> Caused by:
> java.io.IOException: Cannot run listCorruptFileBlocks because replication 
> queues have not been initialized.
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.listCorruptFileBlocks(FSNamesystem.java:5035)
>   at 
> org.apache.hadoop.hdfs.server.namenode.corrupt_005ffiles_jsp._jspService(corrupt_005ffiles_jsp.java:78)
>   at org.apache.jasper.runtime.HttpJspBase.service(HttpJspBase.java:98)
>   at javax.servlet.http.HttpServlet.service(HttpServlet.java:820)
>   at 
> org.mortbay.jetty.servlet.ServletHolder.handle(ServletHolder.java:511)
>   at 
> org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1221)
>   at 
> org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter.doFilter(StaticUserWebFilter.java:109)
>   at 
> org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
>   at 
> org.apache.hadoop.http.HttpServer$QuotingInputFilter.doFilter(HttpServer.java:1039)
>   at 
> org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
>   at 
> org.mortbay.jetty.servlet.ServletHandler.handle(ServletHandler.java:399)
>   at 
> org.mortbay.jetty.security.SecurityHandler.handle(SecurityHandler.java:216)
>   at 
> org.mortbay.jetty.servlet.SessionHandler.handle(SessionHandler.java:182)
>   at 
> org.mortbay.jetty.handler.ContextHandler.handle(ContextHandler.java:766)
>   at org.mortbay.jetty.webapp.WebAppContext.handle(WebAppContext.java:450)
>   at 
> org.mortbay.jetty.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:230)
>   at 
> org.mortbay.jetty.handler.HandlerWrapper.handle(HandlerWrapper.java:152)
>   at org.mortbay.jetty.Server.handle(Server.java:326)
>   at 
> org.mortbay.jetty.HttpConnection.handleRequest(HttpConnection.java:542)
>   at 
> org.mortbay.jetty.HttpConnection$RequestHandler.headerComplete(HttpConnection.java:928)
>   at org.mortbay.jetty.HttpParser.parseNext(HttpParser.java:549)
>   at org.mortbay.jetty.HttpParser.parseAvailable(HttpParser.java:212)
>   at org.mortbay.jetty.HttpConnection.handle(HttpConnection.java:404)
>   at 
> org.mortbay.io.nio.SelectChannelEndPoint.run(SelectChannelEndPoint.java:410)
>   at 
> org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582)
> {noformat}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HDFS-3921) NN will prematurely consider blocks missing when entering active state while still in safe mode

2012-11-07 Thread Aaron T. Myers (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-3921?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aaron T. Myers updated HDFS-3921:
---------------------------------

 Target Version/s: 2.0.3-alpha
Affects Version/s: (was: 2.0.1-alpha)
   2.0.2-alpha
 Assignee: Aaron T. Myers
  Summary: NN will prematurely consider blocks missing when 
entering active state while still in safe mode  (was: corrupt files web UI hits 
error: Cannot run listCorruptFileBlocks because replication queues have not 
been initialized.)

I've taken a look into this. The trouble is that the NN always processes its 
repl queues when it enters the active state, even if it's still in startup 
safemode, i.e. not all DNs have reported in yet. This causes the web UI to 
prematurely indicate that blocks are missing, when in fact the NN should simply 
be waiting for more DNs to report so that it can exit startup safemode.

> NN will prematurely consider blocks missing when entering active state while 
> still in safe mode
> --------------------------------------------------------------------------
>
> Key: HDFS-3921
> URL: https://issues.apache.org/jira/browse/HDFS-3921
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.0.2-alpha
>Reporter: Stephen Chu
>Assignee: Aaron T. Myers
>
> I shut down all the HDFS daemons in a Highly Available (automatic failover) 
> cluster.
> Then I started one NN and it transitioned to active. No DNs were started, 
> and I saw the red warning link on the NN web UI:
> WARNING : There are 36 missing blocks. Please check the logs or run fsck in 
> order to identify the missing blocks.
> I clicked this to go to the corrupt_files.jsp page, which ran into the 
> following error:
> {noformat}
> HTTP ERROR 500
> Problem accessing /corrupt_files.jsp. Reason:
> Cannot run listCorruptFileBlocks because replication queues have not been 
> initialized.
> Caused by:
> java.io.IOException: Cannot run listCorruptFileBlocks because replication 
> queues have not been initialized.
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.listCorruptFileBlocks(FSNamesystem.java:5035)
>   at 
> org.apache.hadoop.hdfs.server.namenode.corrupt_005ffiles_jsp._jspService(corrupt_005ffiles_jsp.java:78)
>   at org.apache.jasper.runtime.HttpJspBase.service(HttpJspBase.java:98)
>   at javax.servlet.http.HttpServlet.service(HttpServlet.java:820)
>   at 
> org.mortbay.jetty.servlet.ServletHolder.handle(ServletHolder.java:511)
>   at 
> org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1221)
>   at 
> org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter.doFilter(StaticUserWebFilter.java:109)
>   at 
> org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
>   at 
> org.apache.hadoop.http.HttpServer$QuotingInputFilter.doFilter(HttpServer.java:1039)
>   at 
> org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
>   at 
> org.mortbay.jetty.servlet.ServletHandler.handle(ServletHandler.java:399)
>   at 
> org.mortbay.jetty.security.SecurityHandler.handle(SecurityHandler.java:216)
>   at 
> org.mortbay.jetty.servlet.SessionHandler.handle(SessionHandler.java:182)
>   at 
> org.mortbay.jetty.handler.ContextHandler.handle(ContextHandler.java:766)
>   at org.mortbay.jetty.webapp.WebAppContext.handle(WebAppContext.java:450)
>   at 
> org.mortbay.jetty.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:230)
>   at 
> org.mortbay.jetty.handler.HandlerWrapper.handle(HandlerWrapper.java:152)
>   at org.mortbay.jetty.Server.handle(Server.java:326)
>   at 
> org.mortbay.jetty.HttpConnection.handleRequest(HttpConnection.java:542)
>   at 
> org.mortbay.jetty.HttpConnection$RequestHandler.headerComplete(HttpConnection.java:928)
>   at org.mortbay.jetty.HttpParser.parseNext(HttpParser.java:549)
>   at org.mortbay.jetty.HttpParser.parseAvailable(HttpParser.java:212)
>   at org.mortbay.jetty.HttpConnection.handle(HttpConnection.java:404)
>   at 
> org.mortbay.io.nio.SelectChannelEndPoint.run(SelectChannelEndPoint.java:410)
>   at 
> org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582)
> {noformat}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HDFS-4163) HDFS distribution build fails on Windows

2012-11-07 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4163?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth updated HDFS-4163:


Attachment: HDFS-4163-branch-trunk-win.patch

The attached patch ports the sh scripting in the distribution build to Python.  
It wasn't possible to use only Maven plugins (like maven-antrun-plugin with a 
<tar> task), because they mishandled permissions and symlinks in the built 
tarballs.

I tested all of the following build variations:

Windows: mvn -Pnative-win -Pdist -Dtar -DskipTests clean package
Mac: mvn -Pdist -Dtar -DskipTests clean package
Ubuntu: mvn -Pnative -Pdist -Dtar -DskipTests clean package
Ubuntu: mvn -Pnative -Pdist -Dtar -Drequire.snappy -Dbundle.snappy 
-Dsnappy.lib=/usr/local/lib -DskipTests clean package

This works on Windows.  Additionally, on Mac and Ubuntu, I compared the built 
tarballs from before and after my changes.  I confirmed that the resulting 
tarballs have exactly the same contents, including permissions and symlinks.
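
As a sketch of why the Python route preserves what the Ant task lost (hedged: 
make_dist_tarball is an invented name, not the function in the patch), Python's 
tarfile stores symlinks as symlinks and records each member's mode bits:

{code}
import os
import tarfile

# Sketch only -- not the HDFS-4163 script itself. tarfile does not
# dereference symlinks by default, and permission bits are kept per
# member, which is exactly what the distribution tarball needs.
def make_dist_tarball(src_dir, out_path):
    with tarfile.open(out_path, "w:gz") as tar:
        tar.add(src_dir, arcname=os.path.basename(src_dir))
{code}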


> HDFS distribution build fails on Windows
> --------------------------------------------------------------------------
>
> Key: HDFS-4163
> URL: https://issues.apache.org/jira/browse/HDFS-4163
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: build
>Affects Versions: trunk-win
>Reporter: Chris Nauroth
>Assignee: Chris Nauroth
> Attachments: HDFS-4163-branch-trunk-win.patch
>
>
> Distribution build relies on sh scripts that do not work on Windows.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-4162) Some malformed and unquoted HTML strings are returned from datanode web ui

2012-11-07 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4162?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13492835#comment-13492835
 ] 

Hadoop QA commented on HDFS-4162:
---------------------------------

{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12552552/HDFS-4162.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-hdfs-project/hadoop-hdfs.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/3461//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/3461//console

This message is automatically generated.

> Some malformed and unquoted HTML strings are returned from datanode web ui
> --------------------------------------------------------------------------
>
> Key: HDFS-4162
> URL: https://issues.apache.org/jira/browse/HDFS-4162
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: data-node
>Affects Versions: 0.23.4
>Reporter: Derek Dagit
>Assignee: Derek Dagit
>Priority: Minor
> Attachments: HDFS-4162-branch-0.23.patch, HDFS-4162.patch
>
>
> When browsing to the datanode at /browseDirectory.jsp, if a path with HTML 
> characters is requested, the resulting error page echoes back the input 
> unquoted.
> Example:
> http://localhost:50075/browseDirectory.jsp?dir=/&go=go&namenodeInfoPort=50070&nnaddr=localhost%3A9000
> Writes an input element as part of the response:
> 
> - The value of the "value" attribute is not quoted. 
> - An = must follow the "id" attribute name.
> - Element "input" should have a closing tag.
> The output should be something like:
> 
> In addition, if one creates a directory:
> hdfs dfs -put '/some/path/to/'
> Then browsing to the parent of directory '' prints unquoted HTML in the 
> directory names.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (HDFS-4163) HDFS distribution build fails on Windows

2012-11-07 Thread Chris Nauroth (JIRA)
Chris Nauroth created HDFS-4163:
---------------------------------

 Summary: HDFS distribution build fails on Windows
 Key: HDFS-4163
 URL: https://issues.apache.org/jira/browse/HDFS-4163
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: build
Affects Versions: trunk-win
Reporter: Chris Nauroth
Assignee: Chris Nauroth


Distribution build relies on sh scripts that do not work on Windows.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-4139) fuse-dfs RO mode still allows file truncation

2012-11-07 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4139?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13492826#comment-13492826
 ] 

Colin Patrick McCabe commented on HDFS-4139:


Yes, currently we're implementing our own half-baked ro mode.  We should just 
use the kernel for this: when the mount is in read-only mode, it automatically 
intercepts all write operations before they even get to us.
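
As a quick illustration of that kernel behavior (hedged: a generic example 
reusing the paths from the report, not output from an actual run), a mount 
that carries the real ro flag has writes rejected in the VFS before the 
filesystem ever sees them:

{noformat}
$ sudo mount -o remount,ro /export/hdfs
$ echo foo > /export/hdfs/tmp/blah.txt
-bash: /export/hdfs/tmp/blah.txt: Read-only file system
{noformat}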

> fuse-dfs RO mode still allows file truncation
> --------------------------------------------------------------------------
>
> Key: HDFS-4139
> URL: https://issues.apache.org/jira/browse/HDFS-4139
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: fuse-dfs
>Affects Versions: 2.0.2-alpha
>Reporter: Andy Isaacson
>Assignee: Colin Patrick McCabe
> Attachments: HDFS-4139.001.patch
>
>
> Mounting a fuse-dfs in readonly mode with "-oro" still allows the user to 
> truncate files.
> {noformat}
> $ grep fuse /etc/fstab
> hadoop-fuse-dfs#dfs://ubu-cdh-0.local /export/hdfs fuse noauto,ro 0 0
> $ sudo mount /export/hdfs
> $ hdfs dfs -ls /tmp
> ...
> -rw-r--r--   3 ubuntu hadoop  4 2012-11-01 14:18 /tmp/blah.txt
> $ echo foo > /export/hdfs/tmp/blah.txt
> -bash: /export/hdfs/tmp/blah.txt: Permission denied
> $  hdfs dfs -ls /tmp
> ...
> -rw-r--r--   3 ubuntu hadoop  0 2012-11-01 14:28 /tmp/blah.txt
> $ ps ax | grep dfs
> ...
> 13639 ?Ssl0:02 /usr/lib/hadoop/bin/fuse_dfs dfs://ubu-cdh-0.local 
> /export/hdfs -o ro,dev,suid
> {noformat}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HDFS-4139) fuse-dfs RO mode still allows file truncation

2012-11-07 Thread Colin Patrick McCabe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4139?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin Patrick McCabe updated HDFS-4139:
---------------------------------

Status: Patch Available  (was: Open)

> fuse-dfs RO mode still allows file truncation
> --------------------------------------------------------------------------
>
> Key: HDFS-4139
> URL: https://issues.apache.org/jira/browse/HDFS-4139
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: fuse-dfs
>Affects Versions: 2.0.2-alpha
>Reporter: Andy Isaacson
>Assignee: Colin Patrick McCabe
> Attachments: HDFS-4139.001.patch
>
>
> Mounting a fuse-dfs in readonly mode with "-oro" still allows the user to 
> truncate files.
> {noformat}
> $ grep fuse /etc/fstab
> hadoop-fuse-dfs#dfs://ubu-cdh-0.local /export/hdfs fuse noauto,ro 0 0
> $ sudo mount /export/hdfs
> $ hdfs dfs -ls /tmp
> ...
> -rw-r--r--   3 ubuntu hadoop  4 2012-11-01 14:18 /tmp/blah.txt
> $ echo foo > /export/hdfs/tmp/blah.txt
> -bash: /export/hdfs/tmp/blah.txt: Permission denied
> $  hdfs dfs -ls /tmp
> ...
> -rw-r--r--   3 ubuntu hadoop  0 2012-11-01 14:28 /tmp/blah.txt
> $ ps ax | grep dfs
> ...
> 13639 ?Ssl0:02 /usr/lib/hadoop/bin/fuse_dfs dfs://ubu-cdh-0.local 
> /export/hdfs -o ro,dev,suid
> {noformat}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HDFS-4139) fuse-dfs RO mode still allows file truncation

2012-11-07 Thread Colin Patrick McCabe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4139?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin Patrick McCabe updated HDFS-4139:
---------------------------------

Attachment: HDFS-4139.001.patch

> fuse-dfs RO mode still allows file truncation
> --------------------------------------------------------------------------
>
> Key: HDFS-4139
> URL: https://issues.apache.org/jira/browse/HDFS-4139
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: fuse-dfs
>Affects Versions: 2.0.2-alpha
>Reporter: Andy Isaacson
>Assignee: Colin Patrick McCabe
> Attachments: HDFS-4139.001.patch
>
>
> Mounting a fuse-dfs in readonly mode with "-oro" still allows the user to 
> truncate files.
> {noformat}
> $ grep fuse /etc/fstab
> hadoop-fuse-dfs#dfs://ubu-cdh-0.local /export/hdfs fuse noauto,ro 0 0
> $ sudo mount /export/hdfs
> $ hdfs dfs -ls /tmp
> ...
> -rw-r--r--   3 ubuntu hadoop  4 2012-11-01 14:18 /tmp/blah.txt
> $ echo foo > /export/hdfs/tmp/blah.txt
> -bash: /export/hdfs/tmp/blah.txt: Permission denied
> $  hdfs dfs -ls /tmp
> ...
> -rw-r--r--   3 ubuntu hadoop  0 2012-11-01 14:28 /tmp/blah.txt
> $ ps ax | grep dfs
> ...
> 13639 ?Ssl0:02 /usr/lib/hadoop/bin/fuse_dfs dfs://ubu-cdh-0.local 
> /export/hdfs -o ro,dev,suid
> {noformat}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HDFS-4138) BackupNode startup fails due to uninitialized edit log

2012-11-07 Thread Kihwal Lee (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4138?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kihwal Lee updated HDFS-4138:
-

Resolution: Fixed
Status: Resolved  (was: Patch Available)

> BackupNode startup fails due to uninitialized edit log
> --
>
> Key: HDFS-4138
> URL: https://issues.apache.org/jira/browse/HDFS-4138
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: ha, name-node
>Affects Versions: 2.0.3-alpha
>Reporter: Kihwal Lee
>Assignee: Kihwal Lee
> Attachments: hdfs-4138.patch, hdfs-4138.patch, hdfs-4138.patch, 
> hdfs-4138.patch
>
>
> This was noticed via a TestBackupNode.testCheckpointNode failure. When a 
> backup node starts up, it tries to enter the active state and start common 
> services, but it fails to start those services and exits; the exit is 
> caught by the exit util.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-4138) BackupNode startup fails due to uninitialized edit log

2012-11-07 Thread Kihwal Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4138?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13492809#comment-13492809
 ] 

Kihwal Lee commented on HDFS-4138:
--

Branch-0.23 is okay since it does not have HDFS-3573. The edit log is 
initialized in FSImage's constructor, so it is already initialized by the time 
loadNamespace() returns.

> BackupNode startup fails due to uninitialized edit log
> --
>
> Key: HDFS-4138
> URL: https://issues.apache.org/jira/browse/HDFS-4138
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: ha, name-node
>Affects Versions: 2.0.3-alpha
>Reporter: Kihwal Lee
>Assignee: Kihwal Lee
> Attachments: hdfs-4138.patch, hdfs-4138.patch, hdfs-4138.patch, 
> hdfs-4138.patch
>
>
> This was noticed via a TestBackupNode.testCheckpointNode failure. When a 
> backup node starts up, it tries to enter the active state and start common 
> services, but it fails to start those services and exits; the exit is 
> caught by the exit util.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-4157) libhdfs: hdfsTell could be implemented smarter than it is

2012-11-07 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4157?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13492779#comment-13492779
 ] 

Colin Patrick McCabe commented on HDFS-4157:


Unfortunately, there's pretty heavy overhead in making a JNI call, especially 
when you look up the Java function dynamically, as we do.  This is actually one 
thing that libwebhdfs does better than libhdfs.

> libhdfs: hdfsTell could be implemented smarter than it is
> ---
>
> Key: HDFS-4157
> URL: https://issues.apache.org/jira/browse/HDFS-4157
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: libhdfs
>Affects Versions: 2.0.3-alpha
>Reporter: Colin Patrick McCabe
>Priority: Minor
>
> In libhdfs, {{hdfsTell}} currently makes an RPC to the {{DataNode}} to 
> determine the position of the stream.  However, we could cache this 
> information easily, since libhdfs controls access to the stream.  This would 
> avoid the double overhead of JNI and the RPC itself.
> This would be very helpful for {{fuse_dfs}}, since that program calls 
> {{hdfsTell}} before every {{write}} or {{read}} operation.  This can be quite 
> a lot of overhead, since writes may be as small as 4 KB (depending on FUSE 
> configuration, kernel version, etc.).
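> 
> As a rough illustration of the caching idea (libhdfs itself is C; this Java 
> sketch and its names are hypothetical, not the actual code):
> {code}
> // Track the position in the wrapper so tell() never has to cross the
> // JNI/RPC boundary; reads (and seeks) are where the position changes.
> class CachingPositionStream {
>   private final java.io.InputStream in;
>   private long cachedPos = 0;
> 
>   CachingPositionStream(java.io.InputStream in) { this.in = in; }
> 
>   int read(byte[] buf) throws java.io.IOException {
>     int n = in.read(buf);
>     if (n > 0) cachedPos += n;   // keep the cached position in sync
>     return n;
>   }
> 
>   long tell() { return cachedPos; }   // no JNI call, no RPC
> }
> {code}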

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HDFS-4162) Some malformed and unquoted HTML strings are returned from datanode web ui

2012-11-07 Thread Suresh Srinivas (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4162?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suresh Srinivas updated HDFS-4162:
--

Assignee: Derek Dagit

> Some malformed and unquoted HTML strings are returned from datanode web ui
> --
>
> Key: HDFS-4162
> URL: https://issues.apache.org/jira/browse/HDFS-4162
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: data-node
>Affects Versions: 0.23.4
>Reporter: Derek Dagit
>Assignee: Derek Dagit
>Priority: Minor
> Attachments: HDFS-4162-branch-0.23.patch, HDFS-4162.patch
>
>
> When browsing to the datanode at /browseDirectory.jsp, if a path with HTML 
> characters is requested, the resulting error page echoes back the input 
> unquoted.
> Example:
> http://localhost:50075/browseDirectory.jsp?dir=/&go=go&namenodeInfoPort=50070&nnaddr=localhost%3A9000
> Writes an input element as part of the response:
> 
> - The value of the "value" attribute is not quoted. 
> - An = must follow the "id" attribute name.
> - Element "input" should have a closing tag.
> The output should be something like:
> 
> In addition, if one creates a directory:
> hdfs dfs -put '/some/path/to/'
> Then browsing to the parent of directory '' prints unquoted HTML in the 
> directory names.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-4162) Some malformed and unquoted HTML strings are returned from datanode web ui

2012-11-07 Thread Suresh Srinivas (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4162?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13492768#comment-13492768
 ] 

Suresh Srinivas commented on HDFS-4162:
---

Derek, I have added you as an HDFS contributor. Now you can assign HDFS jiras 
to yourself.

> Some malformed and unquoted HTML strings are returned from datanode web ui
> --
>
> Key: HDFS-4162
> URL: https://issues.apache.org/jira/browse/HDFS-4162
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: data-node
>Affects Versions: 0.23.4
>Reporter: Derek Dagit
>Assignee: Derek Dagit
>Priority: Minor
> Attachments: HDFS-4162-branch-0.23.patch, HDFS-4162.patch
>
>
> When browsing to the datanode at /browseDirectory.jsp, if a path with HTML 
> characters is requested, the resulting error page echoes back the input 
> unquoted.
> Example:
> http://localhost:50075/browseDirectory.jsp?dir=/&go=go&namenodeInfoPort=50070&nnaddr=localhost%3A9000
> Writes an input element as part of the response:
> 
> - The value of the "value" attribute is not quoted. 
> - An = must follow the "id" attribute name.
> - Element "input" should have a closing tag.
> The output should be something like:
> 
> In addition, if one creates a directory:
> hdfs dfs -put '/some/path/to/'
> Then browsing to the parent of directory '' prints unquoted HTML in the 
> directory names.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-4157) libhdfs: hdfsTell could be implemented smarter than it is

2012-11-07 Thread Todd Lipcon (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4157?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13492763#comment-13492763
 ] 

Todd Lipcon commented on HDFS-4157:
---

bq. In libhdfs, hdfsTell currently makes an RPC to the DataNode to determine 
the position of the stream

How so? The implementation of {{getPos()}} just returns the cached position in 
DFSInputStream.

> libhdfs: hdfsTell could be implemented smarter than it is
> ---
>
> Key: HDFS-4157
> URL: https://issues.apache.org/jira/browse/HDFS-4157
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: libhdfs
>Affects Versions: 2.0.3-alpha
>Reporter: Colin Patrick McCabe
>Priority: Minor
>
> In libhdfs, {{hdfsTell}} currently makes an RPC to the {{DataNode}} to 
> determine the position of the stream.  However, we could cache this 
> information easily, since libhdfs controls access to the stream.  This would 
> avoid the double overhead of JNI and the RPC itself.
> This would be very helpful for {{fuse_dfs}}, since that program calls 
> {{hdfsTell}} before every {{write}} or {{read}} operation.  This can be quite 
> a lot of overhead, since writes may be as small as 4 KB (depending on FUSE 
> configuration, kernel version, etc.).

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-4162) Some malformed and unquoted HTML strings are returned from datanode web ui

2012-11-07 Thread Derek Dagit (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4162?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13492761#comment-13492761
 ] 

Derek Dagit commented on HDFS-4162:
---

Yes, please give me permission to assign this to myself.

> Some malformed and unquoted HTML strings are returned from datanode web ui
> --
>
> Key: HDFS-4162
> URL: https://issues.apache.org/jira/browse/HDFS-4162
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: data-node
>Affects Versions: 0.23.4
>Reporter: Derek Dagit
>Priority: Minor
> Attachments: HDFS-4162-branch-0.23.patch, HDFS-4162.patch
>
>
> When browsing to the datanode at /browseDirectory.jsp, if a path with HTML 
> characters is requested, the resulting error page echoes back the input 
> unquoted.
> Example:
> http://localhost:50075/browseDirectory.jsp?dir=/&go=go&namenodeInfoPort=50070&nnaddr=localhost%3A9000
> Writes an input element as part of the response:
> 
> - The value of the "value" attribute is not quoted. 
> - An = must follow the "id" attribute name.
> - Element "input" should have a closing tag.
> The output should be something like:
> 
> In addition, if one creates a directory:
> hdfs dfs -put '/some/path/to/'
> Then browsing to the parent of directory '' prints unquoted HTML in the 
> directory names.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HDFS-4162) Some malformed and unquoted HTML strings are returned from datanode web ui

2012-11-07 Thread Derek Dagit (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4162?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Derek Dagit updated HDFS-4162:
--

Status: Patch Available  (was: Open)

> Some malformed and unquoted HTML strings are returned from datanode web ui
> --
>
> Key: HDFS-4162
> URL: https://issues.apache.org/jira/browse/HDFS-4162
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: data-node
>Affects Versions: 0.23.4
>Reporter: Derek Dagit
>Priority: Minor
> Attachments: HDFS-4162-branch-0.23.patch, HDFS-4162.patch
>
>
> When browsing to the datanode at /browseDirectory.jsp, if a path with HTML 
> characters is requested, the resulting error page echoes back the input 
> unquoted.
> Example:
> http://localhost:50075/browseDirectory.jsp?dir=/&go=go&namenodeInfoPort=50070&nnaddr=localhost%3A9000
> Writes an input element as part of the response:
> 
> - The value of the "value" attribute is not quoted. 
> - An = must follow the "id" attribute name.
> - Element "input" should have a closing tag.
> The output should be something like:
> 
> In addition, if one creates a directory:
> hdfs dfs -put '/some/path/to/'
> Then browsing to the parent of directory '' prints unquoted HTML in the 
> directory names.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HDFS-4162) Some malformed and unquoted HTML strings are returned from datanode web ui

2012-11-07 Thread Derek Dagit (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4162?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Derek Dagit updated HDFS-4162:
--

Attachment: HDFS-4162-branch-0.23.patch

> Some malformed and unquoted HTML strings are returned from datanode web ui
> --
>
> Key: HDFS-4162
> URL: https://issues.apache.org/jira/browse/HDFS-4162
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: data-node
>Affects Versions: 0.23.4
>Reporter: Derek Dagit
>Priority: Minor
> Attachments: HDFS-4162-branch-0.23.patch, HDFS-4162.patch
>
>
> When browsing to the datanode at /browseDirectory.jsp, if a path with HTML 
> characters is requested, the resulting error page echoes back the input 
> unquoted.
> Example:
> http://localhost:50075/browseDirectory.jsp?dir=/&go=go&namenodeInfoPort=50070&nnaddr=localhost%3A9000
> Writes an input element as part of the response:
> 
> - The value of the "value" attribute is not quoted. 
> - An = must follow the "id" attribute name.
> - Element "input" should have a closing tag.
> The output should be something like:
> 
> In addition, if one creates a directory:
> hdfs dfs -put '/some/path/to/'
> Then browsing to the parent of directory '' prints unquoted HTML in the 
> directory names.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HDFS-4162) Some malformed and unquoted HTML strings are returned from datanode web ui

2012-11-07 Thread Derek Dagit (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4162?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Derek Dagit updated HDFS-4162:
--

Attachment: HDFS-4162.patch

> Some malformed and unquoted HTML strings are returned from datanode web ui
> --
>
> Key: HDFS-4162
> URL: https://issues.apache.org/jira/browse/HDFS-4162
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: data-node
>Affects Versions: 0.23.4
>Reporter: Derek Dagit
>Priority: Minor
> Attachments: HDFS-4162-branch-0.23.patch, HDFS-4162.patch
>
>
> When browsing to the datanode at /browseDirectory.jsp, if a path with HTML 
> characters is requested, the resulting error page echoes back the input 
> unquoted.
> Example:
> http://localhost:50075/browseDirectory.jsp?dir=/&go=go&namenodeInfoPort=50070&nnaddr=localhost%3A9000
> Writes an input element as part of the response:
> 
> - The value of the "value" attribute is not quoted. 
> - An = must follow the "id" attribute name.
> - Element "input" should have a closing tag.
> The output should be something like:
> 
> In addition, if one creates a directory:
> hdfs dfs -put '/some/path/to/'
> Then browsing to the parent of directory '' prints unquoted HTML in the 
> directory names.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-4162) Some malformed and unquoted HTML strings are returned from datanode web ui

2012-11-07 Thread Suresh Srinivas (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4162?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13492757#comment-13492757
 ] 

Suresh Srinivas commented on HDFS-4162:
---

Derek, if you want to work on this, I can assign this issue to you. I will 
review the patch and commit it.

> Some malformed and unquoted HTML strings are returned from datanode web ui
> --
>
> Key: HDFS-4162
> URL: https://issues.apache.org/jira/browse/HDFS-4162
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: data-node
>Affects Versions: 0.23.4
>Reporter: Derek Dagit
>Priority: Minor
>
> When browsing to the datanode at /browseDirectory.jsp, if a path with HTML 
> characters is requested, the resulting error page echoes back the input 
> unquoted.
> Example:
> http://localhost:50075/browseDirectory.jsp?dir=/&go=go&namenodeInfoPort=50070&nnaddr=localhost%3A9000
> Writes an input element as part of the response:
> 
> - The value of the "value" attribute is not quoted. 
> - An = must follow the "id" attribute name.
> - Element "input" should have a closing tag.
> The output should be something like:
> 
> In addition, if one creates a directory:
> hdfs dfs -put '/some/path/to/'
> Then browsing to the parent of directory '' prints unquoted HTML in the 
> directory names.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (HDFS-4162) Some malformed and unquoted HTML strings are returned from datanode web ui

2012-11-07 Thread Derek Dagit (JIRA)
Derek Dagit created HDFS-4162:
-

 Summary: Some malformed and unquoted HTML strings are returned 
from datanode web ui
 Key: HDFS-4162
 URL: https://issues.apache.org/jira/browse/HDFS-4162
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: data-node
Affects Versions: 0.23.4
Reporter: Derek Dagit
Priority: Minor


When browsing to the datanode at /browseDirectory.jsp, if a path with HTML 
characters is requested, the resulting error page echoes back the input unquoted.

Example:

http://localhost:50075/browseDirectory.jsp?dir=/&go=go&namenodeInfoPort=50070&nnaddr=localhost%3A9000

Writes an input element as part of the response:



- The value of the "value" attribute is not quoted. 
- An = must follow the "id" attribute name.
- Element "input" should have a closing tag.

The output should be something like:




In addition, if one creates a directory:

hdfs dfs -put '/some/path/to/'

Then browsing to the parent of directory '' prints unquoted HTML in the 
directory names.
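
A self-contained sketch of the escaping and well-formed output described above 
(class and method names are illustrative, not the actual patch):
{code}
public class HtmlEscapeExample {
  // Minimal escaping of the characters that matter inside an HTML attribute.
  static String escapeHtml(String s) {
    StringBuilder sb = new StringBuilder();
    for (char c : s.toCharArray()) {
      switch (c) {
        case '&': sb.append("&amp;");  break;
        case '<': sb.append("&lt;");   break;
        case '>': sb.append("&gt;");   break;
        case '"': sb.append("&quot;"); break;
        default:  sb.append(c);
      }
    }
    return sb.toString();
  }

  public static void main(String[] args) {
    String dir = "/<b>not-a-real-dir</b>";
    // quoted value, explicit id="...", and a properly closed element
    System.out.println("<input name=\"dir\" id=\"dir\" value=\""
        + escapeHtml(dir) + "\" />");
  }
}
{code}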


--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (HDFS-4161) HDFS keeps a thread open for every file writer

2012-11-07 Thread Suresh Srinivas (JIRA)
Suresh Srinivas created HDFS-4161:
-

 Summary: HDFS keeps a thread open for every file writer
 Key: HDFS-4161
 URL: https://issues.apache.org/jira/browse/HDFS-4161
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: hdfs client
Affects Versions: 1.0.0
Reporter: Suresh Srinivas


In the 1.0 release, DFSClient uses a thread per file writer. Use cases that 
open a large number of file writers (e.g. dynamic partitions in Hive) 
therefore create a large number of threads. Each file writer thread has the 
following stack:
{noformat}
at java.lang.Thread.sleep(Native Method)
at org.apache.hadoop.hdfs.DFSClient$LeaseChecker.run(DFSClient.java:1462)
at java.lang.Thread.run(Thread.java:662)
{noformat}

This problem has been fixed in later releases. This jira will post a 
consolidated patch from various jiras that addresses the issue.
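
In later releases, lease renewal is consolidated into a single shared thread 
rather than one thread per writer. A minimal sketch of that idea, with 
illustrative names (not the actual patch):
{code}
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

class SharedLeaseRenewer {
  private final Set<String> openFiles = ConcurrentHashMap.newKeySet();
  private final ScheduledExecutorService scheduler =
      Executors.newSingleThreadScheduledExecutor(r -> {
        Thread t = new Thread(r, "LeaseRenewer");
        t.setDaemon(true);
        return t;
      });

  SharedLeaseRenewer(long renewIntervalMs) {
    scheduler.scheduleAtFixedRate(this::renewAll,
        renewIntervalMs, renewIntervalMs, TimeUnit.MILLISECONDS);
  }

  void register(String path)   { openFiles.add(path); }
  void unregister(String path) { openFiles.remove(path); }

  private void renewAll() {
    // One periodic pass covers every open writer registered here, so the
    // thread count no longer scales with the number of writers.
    for (String path : openFiles) {
      // a single lease-renewal RPC would cover all of these
    }
  }
}
{code}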

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Assigned] (HDFS-4161) HDFS keeps a thread open for every file writer

2012-11-07 Thread Tsz Wo (Nicholas), SZE (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4161?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsz Wo (Nicholas), SZE reassigned HDFS-4161:


Assignee: Tsz Wo (Nicholas), SZE

> HDFS keeps a thread open for every file writer
> --
>
> Key: HDFS-4161
> URL: https://issues.apache.org/jira/browse/HDFS-4161
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs client
>Affects Versions: 1.0.0
>Reporter: Suresh Srinivas
>Assignee: Tsz Wo (Nicholas), SZE
>
> In the 1.0 release, DFSClient uses a thread per file writer. Use cases that 
> open a large number of file writers (e.g. dynamic partitions in Hive) 
> therefore create a large number of threads. Each file writer thread has the 
> following stack:
> {noformat}
> at java.lang.Thread.sleep(Native Method)
> at org.apache.hadoop.hdfs.DFSClient$LeaseChecker.run(DFSClient.java:1462)
> at java.lang.Thread.run(Thread.java:662)
> {noformat}
> This problem has been fixed in later releases. This jira will post a 
> consolidated patch from various jiras that addresses the issue.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-2264) NamenodeProtocol has the wrong value for clientPrincipal in KerberosInfo annotation

2012-11-07 Thread Aaron T. Myers (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-2264?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13492748#comment-13492748
 ] 

Aaron T. Myers commented on HDFS-2264:
--

Hi Jitendra, does the above approach sound OK to you?

> NamenodeProtocol has the wrong value for clientPrincipal in KerberosInfo 
> annotation
> ---
>
> Key: HDFS-2264
> URL: https://issues.apache.org/jira/browse/HDFS-2264
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: name-node
>Affects Versions: 0.23.0
>Reporter: Aaron T. Myers
>Assignee: Harsh J
> Fix For: 0.24.0
>
> Attachments: HDFS-2264.r1.diff
>
>
> The {{@KerberosInfo}} annotation specifies the expected server and client 
> principals for a given protocol in order to look up the correct principal 
> name from the config. The {{NamenodeProtocol}} has the wrong value for the 
> client config key. This wasn't noticed because most setups actually use the 
> same *value* for both the NN and 2NN principals ({{hdfs/_HOST@REALM}}), 
> in which the {{_HOST}} part gets replaced at run-time. This bug therefore 
> only manifests itself on secure setups which explicitly specify the NN and 
> 2NN principals.
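> 
> For reference, the annotation has roughly this shape; the constants below 
> illustrate the intent (the client principal keyed to the 2NN's config) and 
> are an assumption, not a quote of the patch:
> {code}
> import org.apache.hadoop.hdfs.DFSConfigKeys;
> import org.apache.hadoop.security.KerberosInfo;
> 
> // Assumed key constants, shown only to illustrate the annotation's shape.
> @KerberosInfo(
>     serverPrincipal = DFSConfigKeys.DFS_NAMENODE_USER_NAME_KEY,
>     clientPrincipal = DFSConfigKeys.DFS_SECONDARY_NAMENODE_USER_NAME_KEY)
> public interface NamenodeProtocol {
>   // ...
> }
> {code}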

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-2802) Support for RW/RO snapshots in HDFS

2012-11-07 Thread Tsz Wo (Nicholas), SZE (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-2802?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13492736#comment-13492736
 ] 

Tsz Wo (Nicholas), SZE commented on HDFS-2802:
--

Hi Aaron, thanks for setting up the Jenkins build, although it is a little bit 
too early.  I have added Suresh, Jing, and myself to the Jenkins email list.

> Support for RW/RO snapshots in HDFS
> ---
>
> Key: HDFS-2802
> URL: https://issues.apache.org/jira/browse/HDFS-2802
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: data-node, name-node
>Reporter: Hari Mankude
>Assignee: Hari Mankude
> Attachments: HDFS-2802.20121101.patch, 
> HDFS-2802-meeting-minutes-121101.txt, HDFSSnapshotsDesign.pdf, snap.patch, 
> snapshot-design.pdf, snapshot-design.tex, snapshot-one-pager.pdf, 
> Snapshots20121018.pdf, Snapshots20121030.pdf
>
>
> Snapshots are point-in-time images of parts of the filesystem or the entire 
> filesystem. Snapshots can be a read-only or a read-write point-in-time copy 
> of the filesystem. There are several use cases for snapshots in HDFS. I will 
> post a detailed write-up soon with more information.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-4150) Update inode in blocksMap when deleting original/snapshot file

2012-11-07 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4150?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13492723#comment-13492723
 ] 

Hudson commented on HDFS-4150:
--

Integrated in Hadoop-Hdfs-Snapshots-Branch-build #4 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-Snapshots-Branch-build/4/])
HDFS-4150.  Update the inode in the block map when a snapshotted file or a 
snapshot file is deleted. Contributed by Jing Zhao (Revision 1406763)

 Result = FAILURE
szetszwo : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1406763
Files : 
* 
/hadoop/common/branches/HDFS-2802/hadoop-hdfs-project/hadoop-hdfs/CHANGES.HDFS-2802.txt
* 
/hadoop/common/branches/HDFS-2802/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
* 
/hadoop/common/branches/HDFS-2802/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INode.java
* 
/hadoop/common/branches/HDFS-2802/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodeFile.java
* 
/hadoop/common/branches/HDFS-2802/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/snapshot/INodeFileWithLink.java
* 
/hadoop/common/branches/HDFS-2802/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/snapshot/TestSnapshotBlocksMap.java


> Update inode in blocksMap when deleting original/snapshot file
> --
>
> Key: HDFS-4150
> URL: https://issues.apache.org/jira/browse/HDFS-4150
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: name-node
>Reporter: Jing Zhao
>Assignee: Jing Zhao
> Fix For: Snapshot (HDFS-2802)
>
> Attachments: HDFS-4150.000.patch, HDFS-4150.001.patch, 
> HDFS-4150.002.patch, HDFS-4150.003.patch, HDFS-4150.004.patch
>
>
> When deleting a file/directory, instead of directly removing all the 
> corresponding blocks, we should update inodes in blocksMap if there are 
> snapshots for them.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-4159) Rename should fail when the destination dir is snapshottable and has snapshots

2012-11-07 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4159?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13492724#comment-13492724
 ] 

Hudson commented on HDFS-4159:
--

Integrated in Hadoop-Hdfs-Snapshots-Branch-build #4 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-Snapshots-Branch-build/4/])
HDFS-4159. Rename should fail when the destination directory is 
snapshottable and has snapshots. Contributed by Jing Zhao (Revision 1406771)

 Result = FAILURE
szetszwo : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1406771
Files : 
* 
/hadoop/common/branches/HDFS-2802/hadoop-hdfs-project/hadoop-hdfs/CHANGES.HDFS-2802.txt
* 
/hadoop/common/branches/HDFS-2802/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirectory.java
* 
/hadoop/common/branches/HDFS-2802/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/snapshot/TestSnapshot.java


> Rename should fail when the destination dir is snapshottable and has snapshots
> --
>
> Key: HDFS-4159
> URL: https://issues.apache.org/jira/browse/HDFS-4159
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: name-node
>Reporter: Jing Zhao
>Assignee: Jing Zhao
> Fix For: Snapshot (HDFS-2802)
>
> Attachments: HDFS-snapshot-check-rename.patch
>
>
> Similar to the deletion scenario, the rename operation should fail when the 
> destination dir (currently empty) is snapshottable and has snapshots.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-4108) In a secure cluster, in the HDFS WEBUI , clicking on a datanode in the node list , gives an error

2012-11-07 Thread Daryn Sharp (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4108?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13492716#comment-13492716
 ] 

Daryn Sharp commented on HDFS-4108:
---

+1. If Benoy hasn't already made a new patch, I'm fine with another jira.

> In a secure cluster, in the HDFS WEBUI , clicking on a datanode in the node 
> list , gives an error
> -
>
> Key: HDFS-4108
> URL: https://issues.apache.org/jira/browse/HDFS-4108
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: security, webhdfs
>Affects Versions: 1.1.0
>Reporter: Benoy Antony
>Assignee: Benoy Antony
>Priority: Minor
> Attachments: HDFS-4108-1-1.patch, HDFS-4108-1-1.patch
>
>
> This issue happens in a secure cluster.
> To reproduce:
> Go to the NameNode web UI (dfshealth.jsp).
> Click to bring up the list of LiveNodes (dfsnodelist.jsp).
> Click on a datanode to bring up the filesystem web page 
> (browseDirectory.jsp).
> The page containing the directory listing does not come up.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-4108) In a secure cluster, in the HDFS WEBUI , clicking on a datanode in the node list , gives an error

2012-11-07 Thread Owen O'Malley (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4108?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13492660#comment-13492660
 ] 

Owen O'Malley commented on HDFS-4108:
-

Daryn, I think the current version is fine for now and we could create a new 
jira for the follow-on.

Otherwise, this looks good to me. +1

> In a secure cluster, in the HDFS WEBUI , clicking on a datanode in the node 
> list , gives an error
> -
>
> Key: HDFS-4108
> URL: https://issues.apache.org/jira/browse/HDFS-4108
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: security, webhdfs
>Affects Versions: 1.1.0
>Reporter: Benoy Antony
>Assignee: Benoy Antony
>Priority: Minor
> Attachments: HDFS-4108-1-1.patch, HDFS-4108-1-1.patch
>
>
> This issue happens in a secure cluster.
> To reproduce:
> Go to the NameNode web UI (dfshealth.jsp).
> Click to bring up the list of LiveNodes (dfsnodelist.jsp).
> Click on a datanode to bring up the filesystem web page 
> (browseDirectory.jsp).
> The page containing the directory listing does not come up.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Resolved] (HDFS-4158) FSDirectory#hasSnapshot will NPE when deleting an empty directory

2012-11-07 Thread Aaron T. Myers (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4158?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aaron T. Myers resolved HDFS-4158.
--

   Resolution: Duplicate
Fix Version/s: Snapshot (HDFS-2802)

> FSDirectory#hasSnapshot will NPE when deleting an empty directory
> -
>
> Key: HDFS-4158
> URL: https://issues.apache.org/jira/browse/HDFS-4158
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: name-node
>Affects Versions: Snapshot (HDFS-2802)
>Reporter: Aaron T. Myers
>Assignee: Aaron T. Myers
> Fix For: Snapshot (HDFS-2802)
>
> Attachments: HDFS-4158.patch
>
>
> The method attempts to iterate over the child list of the directory, but that 
> list may be null and this isn't handled currently.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Resolved] (HDFS-4159) Rename should fail when the destination dir is snapshottable and has snapshots

2012-11-07 Thread Tsz Wo (Nicholas), SZE (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4159?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsz Wo (Nicholas), SZE resolved HDFS-4159.
--

   Resolution: Fixed
Fix Version/s: Snapshot (HDFS-2802)

I have committed this.  Thanks, Jing!

> Rename should fail when the destination dir is snapshottable and has snapshots
> --
>
> Key: HDFS-4159
> URL: https://issues.apache.org/jira/browse/HDFS-4159
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: name-node
>Reporter: Jing Zhao
>Assignee: Jing Zhao
> Fix For: Snapshot (HDFS-2802)
>
> Attachments: HDFS-snapshot-check-rename.patch
>
>
> Similar to the deletion scenario, the rename operation should fail when the 
> destination dir (currently empty) is snapshottable and has snapshots.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-4138) BackupNode startup fails due to uninitialized edit log

2012-11-07 Thread Konstantin Shvachko (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4138?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13492634#comment-13492634
 ] 

Konstantin Shvachko commented on HDFS-4138:
---

I committed this to trunk and branch-2.
The same problem exists in branch-0.23, but neither of the two patches applies 
directly.
Kihwal, would you like to make a patch for 0.23?

> BackupNode startup fails due to uninitialized edit log
> --
>
> Key: HDFS-4138
> URL: https://issues.apache.org/jira/browse/HDFS-4138
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: ha, name-node
>Affects Versions: 2.0.3-alpha
>Reporter: Kihwal Lee
>Assignee: Kihwal Lee
> Attachments: hdfs-4138.patch, hdfs-4138.patch, hdfs-4138.patch, 
> hdfs-4138.patch
>
>
> This was noticed via a TestBackupNode.testCheckpointNode failure. When a 
> backup node starts up, it tries to enter the active state and start common 
> services, but it fails to start those services and exits; the exit is 
> caught by the exit util.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HDFS-4159) Rename should fail when the destination dir is snapshottable and has snapshots

2012-11-07 Thread Tsz Wo (Nicholas), SZE (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4159?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsz Wo (Nicholas), SZE updated HDFS-4159:
-

 Component/s: (was: data-node)
Hadoop Flags: Reviewed

+1 patch looks good.

> Rename should fail when the destination dir is snapshottable and has snapshots
> --
>
> Key: HDFS-4159
> URL: https://issues.apache.org/jira/browse/HDFS-4159
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: name-node
>Reporter: Jing Zhao
>Assignee: Jing Zhao
> Attachments: HDFS-snapshot-check-rename.patch
>
>
> Similar to the deletion scenario, the rename operation should fail when the 
> destination dir (currently empty) is snapshottable and has snapshots.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Resolved] (HDFS-4150) Update inode in blocksMap when deleting original/snapshot file

2012-11-07 Thread Tsz Wo (Nicholas), SZE (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4150?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsz Wo (Nicholas), SZE resolved HDFS-4150.
--

   Resolution: Fixed
Fix Version/s: Snapshot (HDFS-2802)

I have committed this.  Thanks, Jing!

> Update inode in blocksMap when deleting original/snapshot file
> --
>
> Key: HDFS-4150
> URL: https://issues.apache.org/jira/browse/HDFS-4150
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: name-node
>Reporter: Jing Zhao
>Assignee: Jing Zhao
> Fix For: Snapshot (HDFS-2802)
>
> Attachments: HDFS-4150.000.patch, HDFS-4150.001.patch, 
> HDFS-4150.002.patch, HDFS-4150.003.patch, HDFS-4150.004.patch
>
>
> When deleting a file/directory, instead of directly removing all the 
> corresponding blocks, we should update inodes in blocksMap if there are 
> snapshots for them.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HDFS-4150) Update inode in blocksMap when deleting original/snapshot file

2012-11-07 Thread Tsz Wo (Nicholas), SZE (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4150?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsz Wo (Nicholas), SZE updated HDFS-4150:
-

 Component/s: (was: data-node)
Hadoop Flags: Reviewed

+1 patch looks good.

> Update inode in blocksMap when deleting original/snapshot file
> --
>
> Key: HDFS-4150
> URL: https://issues.apache.org/jira/browse/HDFS-4150
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: name-node
>Reporter: Jing Zhao
>Assignee: Jing Zhao
> Fix For: Snapshot (HDFS-2802)
>
> Attachments: HDFS-4150.000.patch, HDFS-4150.001.patch, 
> HDFS-4150.002.patch, HDFS-4150.003.patch, HDFS-4150.004.patch
>
>
> When deleting a file/directory, instead of directly removing all the 
> corresponding blocks, we should update inodes in blocksMap if there are 
> snapshots for them.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-3625) Fix TestBackupNode by properly initializing edit log

2012-11-07 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3625?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13492555#comment-13492555
 ] 

Hudson commented on HDFS-3625:
--

Integrated in Hadoop-trunk-Commit #2974 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/2974/])
Move HDFS-3625 under 0.23.5 (Revision 1406739)

 Result = SUCCESS
shv : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1406739
Files : 
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


> Fix TestBackupNode by properly initializing edit log
> 
>
> Key: HDFS-3625
> URL: https://issues.apache.org/jira/browse/HDFS-3625
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: ha
>Affects Versions: 2.0.0-alpha
>Reporter: Eli Collins
>Assignee: Junping Du
>Priority: Blocker
> Fix For: 3.0.0
>
> Attachments: HDFS-3625.patch, HDFS-3625.patch
>
>
> TestBackupNode#testCheckpointNode fails because the following code in 
> FSN#startActiveServices NPEs (resulting in a System.exit): editLogTailer is 
> only set when starting standby services, and if HA is not enabled we go 
> directly to the active state. It looks like the code should be wrapped with 
> an haEnabled check.
> {code}
> LOG.info("Catching up to latest edits from old active before " +
>"taking over writer role in edits logs.");
> editLogTailer.catchupDuringFailover();
> {code}
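> 
> A minimal sketch of the suggested guard (assuming an haEnabled flag is in 
> scope at that point; not necessarily the committed fix):
> {code}
> if (haEnabled) {
>   LOG.info("Catching up to latest edits from old active before " +
>       "taking over writer role in edits logs.");
>   editLogTailer.catchupDuringFailover();
> }
> {code}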

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-4138) BackupNode startup fails due to uninitialized edit log

2012-11-07 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4138?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13492543#comment-13492543
 ] 

Hudson commented on HDFS-4138:
--

Integrated in Hadoop-trunk-Commit #2973 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/2973/])
HDFS-4138. BackupNode startup fails due to uninitialized edit log. 
Contributed by Kihwal Lee. (Revision 1406734)

 Result = SUCCESS
shv : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1406734
Files : 
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/BackupNode.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java


> BackupNode startup fails due to uninitialized edit log
> --
>
> Key: HDFS-4138
> URL: https://issues.apache.org/jira/browse/HDFS-4138
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: ha, name-node
>Affects Versions: 2.0.3-alpha
>Reporter: Kihwal Lee
>Assignee: Kihwal Lee
> Attachments: hdfs-4138.patch, hdfs-4138.patch, hdfs-4138.patch, 
> hdfs-4138.patch
>
>
> This was noticed via a TestBackupNode.testCheckpointNode failure. When a 
> backup node starts up, it tries to enter the active state and start common 
> services, but it fails to start those services and exits; the exit is 
> caught by the exit util.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-4138) BackupNode startup fails due to uninitialized edit log

2012-11-07 Thread Konstantin Shvachko (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4138?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13492519#comment-13492519
 ] 

Konstantin Shvachko commented on HDFS-4138:
---

+1. Looks good.
Targeting this together with the dependent issue for 2.0 and 0.23, if there are 
no other opinions.

> BackupNode startup fails due to uninitialized edit log
> --
>
> Key: HDFS-4138
> URL: https://issues.apache.org/jira/browse/HDFS-4138
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: ha, name-node
>Affects Versions: 2.0.3-alpha
>Reporter: Kihwal Lee
>Assignee: Kihwal Lee
> Attachments: hdfs-4138.patch, hdfs-4138.patch, hdfs-4138.patch, 
> hdfs-4138.patch
>
>
> It was notices by TestBackupNode.testCheckpointNode failure. When a backup 
> node is getting started, it tries to enter active state and start common 
> services. But when it fails to start services and exits, which is caught by 
> the exit util.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-4138) BackupNode startup fails due to uninitialized edit log

2012-11-07 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4138?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13492507#comment-13492507
 ] 

Hadoop QA commented on HDFS-4138:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12552476/hdfs-4138.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-hdfs-project/hadoop-hdfs.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/3460//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/3460//console

This message is automatically generated.

> BackupNode startup fails due to uninitialized edit log
> --
>
> Key: HDFS-4138
> URL: https://issues.apache.org/jira/browse/HDFS-4138
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: ha, name-node
>Affects Versions: 2.0.3-alpha
>Reporter: Kihwal Lee
>Assignee: Kihwal Lee
> Attachments: hdfs-4138.patch, hdfs-4138.patch, hdfs-4138.patch, 
> hdfs-4138.patch
>
>
> This was noticed via a TestBackupNode.testCheckpointNode failure. When a 
> backup node starts up, it tries to enter the active state and start common 
> services, but it fails to start those services and exits; the exit is 
> caught by the exit util.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HDFS-4138) BackupNode startup fails due to uninitialized edit log

2012-11-07 Thread Kihwal Lee (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4138?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kihwal Lee updated HDFS-4138:
-

Attachment: hdfs-4138.patch

Here is a new patch that sets the block pool id, with fewer initialization changes.

> BackupNode startup fails due to uninitialized edit log
> --
>
> Key: HDFS-4138
> URL: https://issues.apache.org/jira/browse/HDFS-4138
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: ha, name-node
>Affects Versions: 2.0.3-alpha
>Reporter: Kihwal Lee
>Assignee: Kihwal Lee
> Attachments: hdfs-4138.patch, hdfs-4138.patch, hdfs-4138.patch, 
> hdfs-4138.patch
>
>
> This was noticed via a TestBackupNode.testCheckpointNode failure. When a 
> backup node starts up, it tries to enter the active state and start common 
> services, but it fails to start those services and exits; the exit is 
> caught by the exit util.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-3979) Fix hsync semantics

2012-11-07 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3979?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13492363#comment-13492363
 ] 

Hudson commented on HDFS-3979:
--

Integrated in Hadoop-Mapreduce-trunk #1249 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1249/])
HDFS-3979. For hsync, datanode should wait for the local sync to complete 
before sending ack. Contributed by Lars Hofhansl (Revision 1406382)

 Result = FAILURE
szetszwo : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1406382
Files : 
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BlockReceiver.java


> Fix hsync semantics
> ---
>
> Key: HDFS-3979
> URL: https://issues.apache.org/jira/browse/HDFS-3979
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: data-node
>Affects Versions: 2.0.2-alpha
>Reporter: Lars Hofhansl
>Assignee: Lars Hofhansl
> Fix For: 2.0.3-alpha
>
> Attachments: hdfs-3979-sketch.txt, hdfs-3979-v2.txt, 
> hdfs-3979-v3.txt, hdfs-3979-v4.txt
>
>
> See discussion in HDFS-744. The actual sync/flush operation in BlockReceiver 
> is not on a synchronous path from the DFSClient, hence it is possible that a 
> DN loses data that it has already acknowledged as persisted to a client.
> Edit: Spelling.
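> 
> The fix orders local durability before the acknowledgement (per the commit 
> message, the datanode should wait for the local sync to complete before 
> sending the ack). A self-contained sketch of that ordering, with 
> illustrative names rather than the actual BlockReceiver code:
> {code}
> import java.io.FileOutputStream;
> import java.io.IOException;
> 
> class SyncBeforeAck {
>   // Write the packet, make it durable locally when a sync was requested,
>   // and only then acknowledge to the client.
>   static void writeAndAck(FileOutputStream out, byte[] data, boolean isSync,
>                           Runnable sendAck) throws IOException {
>     out.write(data);
>     if (isSync) {
>       out.flush();
>       out.getChannel().force(true);  // blocks until the data reaches disk
>     }
>     sendAck.run();  // the ack can no longer precede the local sync
>   }
> }
> {code}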

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-4155) libhdfs implementation of hsync API

2012-11-07 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4155?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13492362#comment-13492362
 ] 

Hudson commented on HDFS-4155:
--

Integrated in Hadoop-Mapreduce-trunk #1249 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1249/])
HDFS-4155. libhdfs implementation of hsync API. Contributed by Liang Xie. 
(Revision 1406372)

 Result = FAILURE
todd : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1406372
Files : 
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/native/libhdfs/hdfs.c
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/native/libhdfs/hdfs.h
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/native/libhdfs/test_libhdfs_threaded.c


> libhdfs implementation of hsync API
> ---
>
> Key: HDFS-4155
> URL: https://issues.apache.org/jira/browse/HDFS-4155
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: libhdfs
>Affects Versions: 3.0.0, 2.0.2-alpha
>Reporter: liang xie
>Assignee: liang xie
> Fix For: 3.0.0, 2.0.3-alpha
>
> Attachments: HDFS-4155.txt
>
>


--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-1331) dfs -test should work like /bin/test

2012-11-07 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-1331?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13492358#comment-13492358
 ] 

Hudson commented on HDFS-1331:
--

Integrated in Hadoop-Mapreduce-trunk #1249 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1249/])
HDFS-1331. dfs -test should work like /bin/test (Andy Isaacson via daryn) 
(Revision 1406198)

 Result = FAILURE
daryn : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1406198
Files : 
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/shell/Test.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/resources/testConf.xml
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSShell.java


> dfs -test should work like /bin/test
> 
>
> Key: HDFS-1331
> URL: https://issues.apache.org/jira/browse/HDFS-1331
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: tools
>Affects Versions: 0.20.2, 3.0.0, 2.0.2-alpha
>Reporter: Allen Wittenauer
>Assignee: Andy Isaacson
>Priority: Minor
> Fix For: 3.0.0, 2.0.3-alpha
>
> Attachments: hdfs1331-2.txt, hdfs1331-3.txt, hdfs1331-4.txt, 
> hdfs1331.txt, hdfs1331-with-hadoop8994.txt
>
>
> hadoop dfs -test doesn't act like its shell equivalent, making it difficult 
> to actually use if you are used to the real test command:
> hadoop:
> $hadoop dfs -test -d /nonexist; echo $?
> test: File does not exist: /nonexist
> 255
> shell:
> $ test -d /nonexist; echo $?
> 1
> a) Why is it spitting out a message? Even so, why is it saying file instead 
> of directory when I used -d?
> b) Why is the return code 255? I realize this is documented as '0' if true.  
> But docs basically say the value is undefined if it isn't.
> c) where is -f?
> d) Why is empty -z instead of -s?  Was it a misunderstanding of the man page?

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-4152) Add a new class for the parameter in INode.collectSubtreeBlocksAndClear(..)

2012-11-07 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4152?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13492355#comment-13492355
 ] 

Hudson commented on HDFS-4152:
--

Integrated in Hadoop-Mapreduce-trunk #1249 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1249/])
HDFS-4152. Add a new class BlocksMapUpdateInfo for the parameter in 
INode.collectSubtreeBlocksAndClear(..). Contributed by Jing Zhao (Revision 
1406326)

 Result = FAILURE
szetszwo : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1406326
Files : 
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirectory.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INode.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodeDirectory.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodeFile.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodeSymlink.java


> Add a new class for the parameter in INode.collectSubtreeBlocksAndClear(..)
> ---
>
> Key: HDFS-4152
> URL: https://issues.apache.org/jira/browse/HDFS-4152
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: name-node
>Affects Versions: 3.0.0
>Reporter: Tsz Wo (Nicholas), SZE
>Assignee: Jing Zhao
>Priority: Minor
> Fix For: 3.0.0
>
> Attachments: HDFS-4152.001.patch
>
>
> INode.collectSubtreeBlocksAndClear(..) currently uses a list to collect 
> blocks for deletion.  It cannot be extended to support other operations like 
> updating the block-map.  We propose to add a new class to encapsulate the 
> abstraction. 
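> 
> A minimal sketch of the proposed encapsulation (the commit names the class 
> BlocksMapUpdateInfo; this standalone version is illustrative, not the 
> actual patch):
> {code}
> import java.util.ArrayList;
> import java.util.List;
> 
> class BlocksMapUpdateInfo<B> {
>   private final List<B> toDeleteList = new ArrayList<B>();
> 
>   void addDeleteBlock(B block) { toDeleteList.add(block); }
>   List<B> getToDeleteList() { return toDeleteList; }
>   void clear() { toDeleteList.clear(); }
> 
>   // Unlike a bare List, a dedicated class can grow new behavior, e.g.
>   // collecting blocks whose block-map entry should be updated in place
>   // rather than removed.
> }
> {code}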

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-4075) Reduce recommissioning overhead

2012-11-07 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4075?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13492356#comment-13492356
 ] 

Hudson commented on HDFS-4075:
--

Integrated in Hadoop-Mapreduce-trunk #1249 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1249/])
HDFS-4075. Reduce recommissioning overhead (Kihwal Lee via daryn) (Revision 
1406278)

 Result = FAILURE
daryn : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1406278
Files : 
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeManager.java


> Reduce recommissioning overhead
> ---
>
> Key: HDFS-4075
> URL: https://issues.apache.org/jira/browse/HDFS-4075
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: name-node
>Affects Versions: 0.23.4, 2.0.2-alpha
>Reporter: Kihwal Lee
>Assignee: Kihwal Lee
>Priority: Critical
> Fix For: 3.0.0, 2.0.3-alpha, 0.23.5
>
> Attachments: hdfs-4075.patch, hdfs-4075.patch, hdfs-4075.patch
>
>
> When datanodes are recommissioned, 
> {{BlockManager#processOverReplicatedBlocksOnReCommission()}} is called for 
> each rejoined node and excess blocks are added to the invalidate list. The 
> problem is that this is done while the namesystem write lock is held.
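> 
> One common remedy, sketched here only to illustrate the direction (not the 
> actual patch), is to bound the time spent under the write lock by working 
> in batches and releasing the lock between batches:
> {code}
> import java.util.List;
> import java.util.concurrent.locks.ReentrantReadWriteLock;
> 
> class BatchedUnderLock {
>   private final ReentrantReadWriteLock nsLock = new ReentrantReadWriteLock();
> 
>   void processInBatches(List<Runnable> work, int batchSize) {
>     for (int i = 0; i < work.size(); i += batchSize) {
>       nsLock.writeLock().lock();
>       try {
>         int end = Math.min(i + batchSize, work.size());
>         for (int j = i; j < end; j++) {
>           work.get(j).run();  // only a bounded slice per lock acquisition
>         }
>       } finally {
>         nsLock.writeLock().unlock();  // let other operations run in between
>       }
>     }
>   }
> }
> {code}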

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-4153) Add START_MSG/SHUTDOWN_MSG for JournalNode

2012-11-07 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4153?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13492357#comment-13492357
 ] 

Hudson commented on HDFS-4153:
--

Integrated in Hadoop-Mapreduce-trunk #1249 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1249/])
HDFS-4153. Add START_MSG/SHUTDOWN_MSG for JournalNode. Contributed by liang 
xie. (Revision 1406473)

 Result = FAILURE
atm : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1406473
Files : 
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/qjournal/server/JournalNode.java


> Add START_MSG/SHUTDOWN_MSG for JournalNode
> --
>
> Key: HDFS-4153
> URL: https://issues.apache.org/jira/browse/HDFS-4153
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: journal-node
>Affects Versions: 3.0.0
>Reporter: liang xie
>Assignee: liang xie
> Fix For: 3.0.0
>
> Attachments: HDFS-4153.txt
>
>
> Currently, there is no startup/shutdown message logged by the JournalNode, 
> which makes it somewhat difficult to diagnose issues; it should behave the 
> same way as the DataNode and NameNode do.
> This tiny patch passed the test cases under 
> -Dtest=org.apache.hadoop.hdfs.qjournal.server.*
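
For reference, the DataNode/NameNode-style banner is normally wired in with Hadoop's StringUtils.startupShutdownMessage(..), so the JournalNode change is presumably along these lines (a sketch, not the patch itself; the class name here is a stand-in):

{code:java}
import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;
import org.apache.hadoop.util.StringUtils;

public class JournalNodeMainSketch {
  private static final Log LOG = LogFactory.getLog(JournalNodeMainSketch.class);

  public static void main(String[] args) {
    // Logs a STARTUP_MSG banner immediately and registers a JVM shutdown
    // hook that logs SHUTDOWN_MSG, matching DataNode/NameNode behavior.
    StringUtils.startupShutdownMessage(JournalNodeMainSketch.class, args, LOG);
    // ... start the JournalNode daemon here ...
  }
}
{code}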

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-3979) Fix hsync semantics

2012-11-07 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3979?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13492321#comment-13492321
 ] 

Hudson commented on HDFS-3979:
--

Integrated in Hadoop-Hdfs-trunk #1219 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1219/])
HDFS-3979. For hsync, datanode should wait for the local sync to complete 
before sending ack. Contributed by Lars Hofhansl (Revision 1406382)

 Result = SUCCESS
szetszwo : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1406382
Files : 
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BlockReceiver.java


> Fix hsync semantics
> ---
>
> Key: HDFS-3979
> URL: https://issues.apache.org/jira/browse/HDFS-3979
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: data-node
>Affects Versions: 2.0.2-alpha
>Reporter: Lars Hofhansl
>Assignee: Lars Hofhansl
> Fix For: 2.0.3-alpha
>
> Attachments: hdfs-3979-sketch.txt, hdfs-3979-v2.txt, 
> hdfs-3979-v3.txt, hdfs-3979-v4.txt
>
>
> See discussion in HDFS-744. The actual sync/flush operation in BlockReceiver 
> is not on a synchronous path from the DFSClient, hence it is possible that a 
> DN loses data that it has already acknowledged as persisted to a client.
> Edit: Spelling.
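
In code terms, the required ordering is simply "force the local write to disk, then ack". A minimal sketch of that invariant, with illustrative names rather than BlockReceiver's actual structure:

{code:java}
import java.io.FileOutputStream;
import java.io.IOException;

/** Sketch: a durable hsync ack must follow the local sync, not precede it. */
class HsyncAckSketch {
  private final FileOutputStream blockFile;

  HsyncAckSketch(FileOutputStream blockFile) {
    this.blockFile = blockFile;
  }

  void handleSyncPacket(byte[] payload) throws IOException {
    blockFile.write(payload);
    // Block until the bytes reach stable storage; acking before this
    // point is exactly the data-loss window described above.
    blockFile.getChannel().force(true);
    sendAck();
  }

  private void sendAck() { /* write the ack upstream */ }
}
{code}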

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-4155) libhdfs implementation of hsync API

2012-11-07 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4155?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13492320#comment-13492320
 ] 

Hudson commented on HDFS-4155:
--

Integrated in Hadoop-Hdfs-trunk #1219 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1219/])
HDFS-4155. libhdfs implementation of hsync API. Contributed by Liang Xie. 
(Revision 1406372)

 Result = SUCCESS
todd : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1406372
Files : 
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/native/libhdfs/hdfs.c
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/native/libhdfs/hdfs.h
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/native/libhdfs/test_libhdfs_threaded.c


> libhdfs implementation of hsync API
> ---
>
> Key: HDFS-4155
> URL: https://issues.apache.org/jira/browse/HDFS-4155
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: libhdfs
>Affects Versions: 3.0.0, 2.0.2-alpha
>Reporter: liang xie
>Assignee: liang xie
> Fix For: 3.0.0, 2.0.3-alpha
>
> Attachments: HDFS-4155.txt
>
>


--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-4153) Add START_MSG/SHUTDOWN_MSG for JournalNode

2012-11-07 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4153?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13492315#comment-13492315
 ] 

Hudson commented on HDFS-4153:
--

Integrated in Hadoop-Hdfs-trunk #1219 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1219/])
HDFS-4153. Add START_MSG/SHUTDOWN_MSG for JournalNode. Contributed by liang 
xie. (Revision 1406473)

 Result = SUCCESS
atm : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1406473
Files : 
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/qjournal/server/JournalNode.java


> Add START_MSG/SHUTDOWN_MSG for JournalNode
> --
>
> Key: HDFS-4153
> URL: https://issues.apache.org/jira/browse/HDFS-4153
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: journal-node
>Affects Versions: 3.0.0
>Reporter: liang xie
>Assignee: liang xie
> Fix For: 3.0.0
>
> Attachments: HDFS-4153.txt
>
>
> Currently, there is no startup/shutdown message logged by the JournalNode, 
> which makes it somewhat difficult to diagnose issues; it should behave the 
> same way as the DataNode and NameNode do.
> This tiny patch passed the test cases under 
> -Dtest=org.apache.hadoop.hdfs.qjournal.server.*

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-1331) dfs -test should work like /bin/test

2012-11-07 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-1331?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13492316#comment-13492316
 ] 

Hudson commented on HDFS-1331:
--

Integrated in Hadoop-Hdfs-trunk #1219 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1219/])
HDFS-1331. dfs -test should work like /bin/test (Andy Isaacson via daryn) 
(Revision 1406198)

 Result = SUCCESS
daryn : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1406198
Files : 
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/shell/Test.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/resources/testConf.xml
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSShell.java


> dfs -test should work like /bin/test
> 
>
> Key: HDFS-1331
> URL: https://issues.apache.org/jira/browse/HDFS-1331
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: tools
>Affects Versions: 0.20.2, 3.0.0, 2.0.2-alpha
>Reporter: Allen Wittenauer
>Assignee: Andy Isaacson
>Priority: Minor
> Fix For: 3.0.0, 2.0.3-alpha
>
> Attachments: hdfs1331-2.txt, hdfs1331-3.txt, hdfs1331-4.txt, 
> hdfs1331.txt, hdfs1331-with-hadoop8994.txt
>
>
> hadoop dfs -test doesn't act like its shell equivalent, making it difficult 
> to actually use if you are used to the real test command:
> hadoop:
> $ hadoop dfs -test -d /nonexist; echo $?
> test: File does not exist: /nonexist
> 255
> shell:
> $ test -d /nonexist; echo $?
> 1
> a) Why is it spitting out a message? Even so, why does it say "file" instead 
> of "directory" when I used -d?
> b) Why is the return code 255? I realize this is documented as '0' if true, 
> but the docs basically say the value is undefined otherwise.
> c) Where is -f?
> d) Why is empty -z instead of -s?  Was it a misunderstanding of the man page?
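
The /bin/test contract the report asks for is easy to state in code: exit 0 when the predicate holds, exit 1 when it does not, and print nothing either way. A minimal sketch of those semantics (using java.io.File as a stand-in; the real shell command works against Hadoop's FileSystem API):

{code:java}
import java.io.File;

/** Sketch of /bin/test-style semantics for "dfs -test -d <path>". */
class TestSemanticsSketch {
  /** @return 0 if path exists and is a directory, else 1; never prints. */
  static int testIsDirectory(File path) {
    // A missing path is simply "false" (exit 1), not an error or 255.
    return (path.exists() && path.isDirectory()) ? 0 : 1;
  }

  public static void main(String[] args) {
    System.exit(testIsDirectory(new File(args[0])));
  }
}
{code}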

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-4152) Add a new class for the parameter in INode.collectSubtreeBlocksAndClear(..)

2012-11-07 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4152?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13492313#comment-13492313
 ] 

Hudson commented on HDFS-4152:
--

Integrated in Hadoop-Hdfs-trunk #1219 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1219/])
HDFS-4152. Add a new class BlocksMapUpdateInfo for the parameter in 
INode.collectSubtreeBlocksAndClear(..). Contributed by Jing Zhao (Revision 
1406326)

 Result = SUCCESS
szetszwo : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1406326
Files : 
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirectory.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INode.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodeDirectory.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodeFile.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodeSymlink.java


> Add a new class for the parameter in INode.collectSubtreeBlocksAndClear(..)
> ---
>
> Key: HDFS-4152
> URL: https://issues.apache.org/jira/browse/HDFS-4152
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: name-node
>Affects Versions: 3.0.0
>Reporter: Tsz Wo (Nicholas), SZE
>Assignee: Jing Zhao
>Priority: Minor
> Fix For: 3.0.0
>
> Attachments: HDFS-4152.001.patch
>
>
> INode.collectSubtreeBlocksAndClear(..) currently uses a list to collect 
> blocks for deletion.  It cannot be extended to support other operations, such 
> as updating the block map.  We propose to add a new class to encapsulate the 
> abstraction. 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-4075) Reduce recommissioning overhead

2012-11-07 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4075?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13492314#comment-13492314
 ] 

Hudson commented on HDFS-4075:
--

Integrated in Hadoop-Hdfs-trunk #1219 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1219/])
HDFS-4075. Reduce recommissioning overhead (Kihwal Lee via daryn) (Revision 
1406278)

 Result = SUCCESS
daryn : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1406278
Files : 
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeManager.java


> Reduce recommissioning overhead
> ---
>
> Key: HDFS-4075
> URL: https://issues.apache.org/jira/browse/HDFS-4075
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: name-node
>Affects Versions: 0.23.4, 2.0.2-alpha
>Reporter: Kihwal Lee
>Assignee: Kihwal Lee
>Priority: Critical
> Fix For: 3.0.0, 2.0.3-alpha, 0.23.5
>
> Attachments: hdfs-4075.patch, hdfs-4075.patch, hdfs-4075.patch
>
>
> When datanodes are recommissioned, 
> {BlockManager#processOverReplicatedBlocksOnReCommission()} is called for each 
> rejoined node and excess blocks are added to the invalidate list. The problem 
> is that this is done while the namesystem write lock is held.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-4075) Reduce recommissioning overhead

2012-11-07 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4075?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13492298#comment-13492298
 ] 

Hudson commented on HDFS-4075:
--

Integrated in Hadoop-Hdfs-0.23-Build #428 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-0.23-Build/428/])
svn merge -c 1406278 FIXES: HDFS-4075. Reduce recommissioning overhead 
(Kihwal Lee via daryn) (Revision 1406290)

 Result = SUCCESS
daryn : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1406290
Files : 
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeManager.java


> Reduce recommissioning overhead
> ---
>
> Key: HDFS-4075
> URL: https://issues.apache.org/jira/browse/HDFS-4075
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: name-node
>Affects Versions: 0.23.4, 2.0.2-alpha
>Reporter: Kihwal Lee
>Assignee: Kihwal Lee
>Priority: Critical
> Fix For: 3.0.0, 2.0.3-alpha, 0.23.5
>
> Attachments: hdfs-4075.patch, hdfs-4075.patch, hdfs-4075.patch
>
>
> When datanodes are recommissioned, 
> {BlockManager#processOverReplicatedBlocksOnReCommission()} is called for each 
> rejoined node and excess blocks are added to the invalidate list. The problem 
> is that this is done while the namesystem write lock is held.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-3979) Fix hsync semantics

2012-11-07 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3979?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13492260#comment-13492260
 ] 

Hudson commented on HDFS-3979:
--

Integrated in Hadoop-Yarn-trunk #29 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/29/])
HDFS-3979. For hsync, datanode should wait for the local sync to complete 
before sending ack. Contributed by Lars Hofhansl (Revision 1406382)

 Result = SUCCESS
szetszwo : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1406382
Files : 
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BlockReceiver.java


> Fix hsync semantics
> ---
>
> Key: HDFS-3979
> URL: https://issues.apache.org/jira/browse/HDFS-3979
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: data-node
>Affects Versions: 2.0.2-alpha
>Reporter: Lars Hofhansl
>Assignee: Lars Hofhansl
> Fix For: 2.0.3-alpha
>
> Attachments: hdfs-3979-sketch.txt, hdfs-3979-v2.txt, 
> hdfs-3979-v3.txt, hdfs-3979-v4.txt
>
>
> See discussion in HDFS-744. The actual sync/flush operation in BlockReceiver 
> is not on a synchronous path from the DFSClient, hence it is possible that a 
> DN loses data that it has already acknowledged as persisted to a client.
> Edit: Spelling.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-4155) libhdfs implementation of hsync API

2012-11-07 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4155?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13492259#comment-13492259
 ] 

Hudson commented on HDFS-4155:
--

Integrated in Hadoop-Yarn-trunk #29 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/29/])
HDFS-4155. libhdfs implementation of hsync API. Contributed by Liang Xie. 
(Revision 1406372)

 Result = SUCCESS
todd : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1406372
Files : 
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/native/libhdfs/hdfs.c
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/native/libhdfs/hdfs.h
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/native/libhdfs/test_libhdfs_threaded.c


> libhdfs implementation of hsync API
> ---
>
> Key: HDFS-4155
> URL: https://issues.apache.org/jira/browse/HDFS-4155
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: libhdfs
>Affects Versions: 3.0.0, 2.0.2-alpha
>Reporter: liang xie
>Assignee: liang xie
> Fix For: 3.0.0, 2.0.3-alpha
>
> Attachments: HDFS-4155.txt
>
>


--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-1331) dfs -test should work like /bin/test

2012-11-07 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-1331?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13492255#comment-13492255
 ] 

Hudson commented on HDFS-1331:
--

Integrated in Hadoop-Yarn-trunk #29 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/29/])
HDFS-1331. dfs -test should work like /bin/test (Andy Isaacson via daryn) 
(Revision 1406198)

 Result = SUCCESS
daryn : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1406198
Files : 
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/shell/Test.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/resources/testConf.xml
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSShell.java


> dfs -test should work like /bin/test
> 
>
> Key: HDFS-1331
> URL: https://issues.apache.org/jira/browse/HDFS-1331
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: tools
>Affects Versions: 0.20.2, 3.0.0, 2.0.2-alpha
>Reporter: Allen Wittenauer
>Assignee: Andy Isaacson
>Priority: Minor
> Fix For: 3.0.0, 2.0.3-alpha
>
> Attachments: hdfs1331-2.txt, hdfs1331-3.txt, hdfs1331-4.txt, 
> hdfs1331.txt, hdfs1331-with-hadoop8994.txt
>
>
> hadoop dfs -test doesn't act like its shell equivalent, making it difficult 
> to actually use if you are used to the real test command:
> hadoop:
> $ hadoop dfs -test -d /nonexist; echo $?
> test: File does not exist: /nonexist
> 255
> shell:
> $ test -d /nonexist; echo $?
> 1
> a) Why is it spitting out a message? Even so, why does it say "file" instead 
> of "directory" when I used -d?
> b) Why is the return code 255? I realize this is documented as '0' if true, 
> but the docs basically say the value is undefined otherwise.
> c) Where is -f?
> d) Why is empty -z instead of -s?  Was it a misunderstanding of the man page?

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-4152) Add a new class for the parameter in INode.collectSubtreeBlocksAndClear(..)

2012-11-07 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4152?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13492252#comment-13492252
 ] 

Hudson commented on HDFS-4152:
--

Integrated in Hadoop-Yarn-trunk #29 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/29/])
HDFS-4152. Add a new class BlocksMapUpdateInfo for the parameter in 
INode.collectSubtreeBlocksAndClear(..). Contributed by Jing Zhao (Revision 
1406326)

 Result = SUCCESS
szetszwo : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1406326
Files : 
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirectory.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INode.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodeDirectory.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodeFile.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodeSymlink.java


> Add a new class for the parameter in INode.collectSubtreeBlocksAndClear(..)
> ---
>
> Key: HDFS-4152
> URL: https://issues.apache.org/jira/browse/HDFS-4152
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: name-node
>Affects Versions: 3.0.0
>Reporter: Tsz Wo (Nicholas), SZE
>Assignee: Jing Zhao
>Priority: Minor
> Fix For: 3.0.0
>
> Attachments: HDFS-4152.001.patch
>
>
> INode.collectSubtreeBlocksAndClear(..) currently uses a list to collect 
> blocks for deletion.  It cannot be extended to support other operations, such 
> as updating the block map.  We propose to add a new class to encapsulate the 
> abstraction. 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-4075) Reduce recommissioning overhead

2012-11-07 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4075?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13492253#comment-13492253
 ] 

Hudson commented on HDFS-4075:
--

Integrated in Hadoop-Yarn-trunk #29 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/29/])
HDFS-4075. Reduce recommissioning overhead (Kihwal Lee via daryn) (Revision 
1406278)

 Result = SUCCESS
daryn : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1406278
Files : 
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeManager.java


> Reduce recommissioning overhead
> ---
>
> Key: HDFS-4075
> URL: https://issues.apache.org/jira/browse/HDFS-4075
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: name-node
>Affects Versions: 0.23.4, 2.0.2-alpha
>Reporter: Kihwal Lee
>Assignee: Kihwal Lee
>Priority: Critical
> Fix For: 3.0.0, 2.0.3-alpha, 0.23.5
>
> Attachments: hdfs-4075.patch, hdfs-4075.patch, hdfs-4075.patch
>
>
> When datanodes are recommissioned, 
> {BlockManager#processOverReplicatedBlocksOnReCommission()} is called for each 
> rejoined node and excess blocks are added to the invalidate list. The problem 
> is that this is done while the namesystem write lock is held.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-4153) Add START_MSG/SHUTDOWN_MSG for JournalNode

2012-11-07 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4153?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13492254#comment-13492254
 ] 

Hudson commented on HDFS-4153:
--

Integrated in Hadoop-Yarn-trunk #29 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/29/])
HDFS-4153. Add START_MSG/SHUTDOWN_MSG for JournalNode. Contributed by liang 
xie. (Revision 1406473)

 Result = SUCCESS
atm : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1406473
Files : 
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/qjournal/server/JournalNode.java


> Add START_MSG/SHUTDOWN_MSG for JournalNode
> --
>
> Key: HDFS-4153
> URL: https://issues.apache.org/jira/browse/HDFS-4153
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: journal-node
>Affects Versions: 3.0.0
>Reporter: liang xie
>Assignee: liang xie
> Fix For: 3.0.0
>
> Attachments: HDFS-4153.txt
>
>
> Currently, there is no startup/shutdown message logged by the JournalNode, 
> which makes it somewhat difficult to diagnose issues; it should behave the 
> same way as the DataNode and NameNode do.
> This tiny patch passed the test cases under 
> -Dtest=org.apache.hadoop.hdfs.qjournal.server.*

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira