[jira] Updated: (HDFS-1033) In secure clusters, NN and SNN should verify that the remote principal during image and edits transfer

2010-03-10 Thread Jakob Homan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-1033?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jakob Homan updated HDFS-1033:
--

Summary: In secure clusters, NN and SNN should verify that the remote 
principal during image and edits transfer  (was: In securre clusters, NN and 
SNN should verify that the remote principal during image and edits transfer)

 In secure clusters, NN and SNN should verify that the remote principal during 
 image and edits transfer
 --

 Key: HDFS-1033
 URL: https://issues.apache.org/jira/browse/HDFS-1033
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: security
Reporter: Jakob Homan
Assignee: Jakob Homan

 Currently anyone can connect and download image/edits from Namenode.  In a 
 secure cluster we can verify the identity of the principal making the 
 request; we should disallow requests from anyone except the NN and SNN 
 principals (and their hosts due to the lousy KerbSSL limitation).
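As a rough sketch of the check being proposed (not the actual patch; the class name, configuration keys and servlet wiring below are illustrative):
{noformat}
import java.io.IOException;
import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import org.apache.hadoop.conf.Configuration;

// Sketch only, not the actual patch.  Config key names are illustrative.
public class ImageTransferGuard {
  /** Reject image/edits requests from anyone but the configured NN/SNN principals. */
  static boolean isAllowed(HttpServletRequest req, HttpServletResponse resp,
                           Configuration conf) throws IOException {
    String remoteUser = req.getUserPrincipal() == null
        ? null : req.getUserPrincipal().getName();
    Set<String> allowed = new HashSet<String>(Arrays.asList(
        conf.get("dfs.namenode.kerberos.principal"),
        conf.get("dfs.secondary.namenode.kerberos.principal")));
    if (remoteUser == null || !allowed.contains(remoteUser)) {
      resp.sendError(HttpServletResponse.SC_FORBIDDEN,
          "Only the NN and SNN principals may transfer the image/edits");
      return false;
    }
    return true;
  }
}
{noformat}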

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (HDFS-1033) In secure clusters, NN and SNN should verify that the remote principal during image and edits transfer

2010-03-10 Thread Jakob Homan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-1033?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jakob Homan updated HDFS-1033:
--

Attachment: HDFS-1033-Y20.patch

Y! 20 patch, not for commit.  Trunk patch soon...

 In secure clusters, NN and SNN should verify that the remote principal during 
 image and edits transfer
 --

 Key: HDFS-1033
 URL: https://issues.apache.org/jira/browse/HDFS-1033
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: security
Reporter: Jakob Homan
Assignee: Jakob Homan
 Attachments: HDFS-1033-Y20.patch


 Currently anyone can connect and download image/edits from Namenode.  In a 
 secure cluster we can verify the identity of the principal making the 
 request; we should disallow requests from anyone except the NN and SNN 
 principals (and their hosts due to the lousy KerbSSL limitation).

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (HDFS-1023) Allow http server to start as regular principal if https principal not defined.

2010-03-08 Thread Jakob Homan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-1023?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jakob Homan updated HDFS-1023:
--

Attachment: HDFS-1023-Y20-Update-2.patch

Update to HDFS-1023 patch.

 Allow http server to start as regular principal if https principal not 
 defined.
 ---

 Key: HDFS-1023
 URL: https://issues.apache.org/jira/browse/HDFS-1023
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Jakob Homan
Assignee: Jakob Homan
 Attachments: HADOOP-1023-Y20-1.patch, HDFS-1023-Y20-Update-2.patch, 
 HDFS-1023-Y20-Update.patch, HDFS-1023-Y20.patch


 Currently limitations in Sun's KerbSSL implementation require the https 
 server to be run as host/[machi...@realm. and another Sun KerbSSL 
 limitation appears to require you to store all principals in the same keytab, 
 meaning fully functional, secured Namenodes require combined keytabs.  
 However, it may be that one wishes to run a namenode without a secondary 
 namenode or other utilities that require https.  In this case, we should 
 allow the http server to start and log a warning that it will not be able to 
 accept https connections.
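A minimal sketch of the fallback described above, assuming illustrative class and configuration key names (not the actual patch):
{noformat}
import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;
import org.apache.hadoop.conf.Configuration;

// Sketch only: the config key name below is illustrative.
public class HttpServerStartup {
  private static final Log LOG = LogFactory.getLog(HttpServerStartup.class);

  /** Decide whether the https listener can be started at all. */
  static boolean shouldStartHttps(Configuration conf) {
    String httpsPrincipal = conf.get("dfs.namenode.kerberos.https.principal");
    if (httpsPrincipal == null || httpsPrincipal.isEmpty()) {
      LOG.warn("No https (host/) principal configured; starting the http server "
          + "as the regular principal.  https connections will not be accepted.");
      return false;
    }
    return true;
  }
}
{noformat}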

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Created: (HDFS-1023) Allow http server to start as regular principal if https principal not defined.

2010-03-04 Thread Jakob Homan (JIRA)
Allow http server to start as regular principal if https principal not defined.
---

 Key: HDFS-1023
 URL: https://issues.apache.org/jira/browse/HDFS-1023
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Jakob Homan
Assignee: Jakob Homan


Currently limitations in Sun's KerbSSL implementation require the https server 
to be run as host/[machi...@realm. and another Sun KerbSSL limitation appears 
to require you to store all principals in the same keytab, meaning fully 
functional, secured Namenodes require combined keytabs.  However, it may be 
that one wishes to run a namenode without a secondary namenode or other 
utilities that require https.  In this case, we should allow the http server to 
start and log a warning that it will not be able to accept https connections.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (HDFS-1023) Allow http server to start as regular principal if https principal not defined.

2010-03-04 Thread Jakob Homan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-1023?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jakob Homan updated HDFS-1023:
--

Attachment: HDFS-1023-Y20.patch

Patch implementing the above, plus some better logging and an extra security 
check, for the Y20 distro.  Trunk patch soon.  Unit tests not applicable... sigh.

 Allow http server to start as regular principal if https principal not 
 defined.
 ---

 Key: HDFS-1023
 URL: https://issues.apache.org/jira/browse/HDFS-1023
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Jakob Homan
Assignee: Jakob Homan
 Attachments: HDFS-1023-Y20.patch


 Currently limitations in Sun's KerbSSL implementation require the https 
 server to be run as host/[machi...@realm. and another Sun KerbSSL 
 limitation appears to require you to store all principals in the same keytab, 
 meaning fully functional, secured Namenodes require combined keytabs.  
 However, it may be that one wishes to run a namenode without a secondary 
 namenode or other utilities that require https.  In this case, we should 
 allow the http server to start and log a warning that it will not be able to 
 accept https connections.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (HDFS-1023) Allow http server to start as regular principal if https principal not defined.

2010-03-04 Thread Jakob Homan (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-1023?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=12841651#action_12841651
 ] 

Jakob Homan commented on HDFS-1023:
---

{quote}It is pretty amazing/disappointing that the normal HTTP/[machine] 
doesn't work. {quote}
I was pretty amazed at this too.  Definitely complicates deploying a secure 
cluster, although only the NN and SNN need to have these combined keytabs, 
since they are the only https servers.
Line 299: 
http://hg.openjdk.java.net/jdk7/tl/jdk/file/893034df4ec2/src/share/classes/sun/security/ssl/krb5/KerberosClientKeyExchangeImpl.java

 Allow http server to start as regular principal if https principal not 
 defined.
 ---

 Key: HDFS-1023
 URL: https://issues.apache.org/jira/browse/HDFS-1023
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Jakob Homan
Assignee: Jakob Homan
 Attachments: HDFS-1023-Y20.patch


 Currently limitations in Sun's KerbSSL implementation require the https 
 server to be run as host/[machi...@realm. and another Sun KerbSSL 
 limitation appears to require you to store all principals in the same keytab, 
 meaning fully functional, secured Namenodes require combined keytabs.  
 However, it may be that one wishes to run a namenode without a secondary 
 namenode or other utilities that require https.  In this case, we should 
 allow the http server to start and log a warning that it will not be able to 
 accept https connections.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (HDFS-1023) Allow http server to start as regular principal if https principal not defined.

2010-03-04 Thread Jakob Homan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-1023?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jakob Homan updated HDFS-1023:
--

Attachment: HADOOP-1023-Y20-1.patch

Small update to avoid Findbugs warning.  

 Allow http server to start as regular principal if https principal not 
 defined.
 ---

 Key: HDFS-1023
 URL: https://issues.apache.org/jira/browse/HDFS-1023
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Jakob Homan
Assignee: Jakob Homan
 Attachments: HADOOP-1023-Y20-1.patch, HDFS-1023-Y20.patch


 Currently limitations in Sun's KerbSSL implementation require the https 
 server to be run as host/[machi...@realm. and another Sun KerbSSL 
 limitation appears to require you to store all principals in the same keytab, 
 meaning fully functional, secured Namenodes require combined keytabs.  
 However, it may be that one wishes to run a namenode without a secondary 
 namenode or other utilities that require https.  In this case, we should 
 allow the http server to start and log a warning that it will not be able to 
 accept https connections.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (HDFS-1019) Incorrect default values for delegation tokens in hdfs-default.xml

2010-03-03 Thread Jakob Homan (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-1019?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=12840906#action_12840906
 ] 

Jakob Homan commented on HDFS-1019:
---

Maybe it would be good to provide units in the description field?

 Incorrect default values for delegation tokens in hdfs-default.xml
 --

 Key: HDFS-1019
 URL: https://issues.apache.org/jira/browse/HDFS-1019
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Jitendra Nath Pandey
Assignee: Jitendra Nath Pandey
 Attachments: HDFS-1019.1.patch


  The default values for delegation token parameters in hdfs-default.xml are 
 incorrect.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (HDFS-1019) Incorrect default values for delegation tokens in hdfs-default.xml

2010-03-03 Thread Jakob Homan (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-1019?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=12840907#action_12840907
 ] 

Jakob Homan commented on HDFS-1019:
---

or even in the key-name itself...

 Incorrect default values for delegation tokens in hdfs-default.xml
 --

 Key: HDFS-1019
 URL: https://issues.apache.org/jira/browse/HDFS-1019
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Jitendra Nath Pandey
Assignee: Jitendra Nath Pandey
 Attachments: HDFS-1019.1.patch


  The default values for delegation token parameters in hdfs-default.xml are 
 incorrect.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (HDFS-1019) Incorrect default values for delegation tokens in hdfs-default.xml

2010-03-03 Thread Jakob Homan (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-1019?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=12841041#action_12841041
 ] 

Jakob Homan commented on HDFS-1019:
---

I really would push for having the units in the key name itself, since the xml 
description isn't available in code...
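A small Java illustration of the point: when the value is read in code, only the key name is visible, so a unit suffix in the key is the only place the unit can live.  The key names and defaults below are illustrative, not the actual ones:
{noformat}
import org.apache.hadoop.conf.Configuration;

// Illustrative key names and defaults only.
public class TokenConfExample {
  public static void main(String[] args) {
    Configuration conf = new Configuration();
    // Without a unit in the key, the reader of this code cannot tell ms from seconds:
    long maxLifetime = conf.getLong("dfs.namenode.delegation.token.max-lifetime",
                                    7 * 24 * 60 * 60 * 1000L);
    // A unit suffix in the key name carries the information the XML description cannot:
    long renewIntervalMs = conf.getLong("dfs.namenode.delegation.token.renew-interval-ms",
                                        24 * 60 * 60 * 1000L);
    System.out.println(maxLifetime + " " + renewIntervalMs);
  }
}
{noformat}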

 Incorrect default values for delegation tokens in hdfs-default.xml
 --

 Key: HDFS-1019
 URL: https://issues.apache.org/jira/browse/HDFS-1019
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Jitendra Nath Pandey
Assignee: Jitendra Nath Pandey
 Attachments: HDFS-1019-y20.1.patch, HDFS-1019.1.patch


  The default values for delegation token parameters in hdfs-default.xml are 
 incorrect.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Created: (HDFS-1017) browsedfs jsp should call JspHelper.getUGI rather than using createRemoteUser()

2010-03-02 Thread Jakob Homan (JIRA)
browsedfs jsp should call JspHelper.getUGI rather than using createRemoteUser()
---

 Key: HDFS-1017
 URL: https://issues.apache.org/jira/browse/HDFS-1017
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: security
Reporter: Jakob Homan
Assignee: Jakob Homan


Currently the JSP for browsing the file system calls getRemoteUser(), which 
doesn't correctly authenticate the user on the web UI, causing failures when 
trying to browse the filesystem.  It should call the utility method 
JspHelper.getUGI instead.
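A before/after sketch of the change being described; the JspHelper package and the exact getUGI signature are assumptions here, not copied from the patch:
{noformat}
import javax.servlet.http.HttpServletRequest;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hdfs.server.common.JspHelper;
import org.apache.hadoop.security.UserGroupInformation;

// Before/after sketch; the JspHelper.getUGI signature is assumed for illustration.
public class BrowseDfsAuthExample {
  // Old approach: trusts the raw servlet user name, skipping token/Kerberos checks.
  static UserGroupInformation oldWay(HttpServletRequest request) {
    return UserGroupInformation.createRemoteUser(request.getRemoteUser());
  }

  // New approach: let the shared helper resolve the caller properly.
  static UserGroupInformation newWay(HttpServletRequest request, Configuration conf)
      throws Exception {
    return JspHelper.getUGI(request, conf);
  }
}
{noformat}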

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (HDFS-1017) browsedfs jsp should call JspHelper.getUGI rather than using createRemoteUser()

2010-03-02 Thread Jakob Homan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-1017?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jakob Homan updated HDFS-1017:
--

Attachment: HDFS-1017-Y20.patch

Patch for Y20 distribution, not to be committed.  Trunk patch to follow soon.  
Before the patch, the servlet threw an exception because it did not recognize 
the user; after the patch, the servlet executes successfully with a delegation 
token.

 browsedfs jsp should call JspHelper.getUGI rather than using 
 createRemoteUser()
 ---

 Key: HDFS-1017
 URL: https://issues.apache.org/jira/browse/HDFS-1017
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: security
Reporter: Jakob Homan
Assignee: Jakob Homan
 Attachments: HDFS-1017-Y20.patch


 Currently the JSP for browsing the file system calls getRemoteUser(), which 
 doesn't correctly authenticate the user on the web UI, causing failures when 
 trying to browse the filesystem.  It should call the utility method 
 JspHelper.getUGI instead.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (HDFS-1017) browsedfs jsp should call JspHelper.getUGI rather than using createRemoteUser()

2010-03-02 Thread Jakob Homan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-1017?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jakob Homan updated HDFS-1017:
--

Attachment: HDFS-1017-Y20-2.patch

I had forgotten to commit the changes for the last patch.  New file attached.

 browsedfs jsp should call JspHelper.getUGI rather than using 
 createRemoteUser()
 ---

 Key: HDFS-1017
 URL: https://issues.apache.org/jira/browse/HDFS-1017
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: security
Reporter: Jakob Homan
Assignee: Jakob Homan
 Attachments: HDFS-1017-Y20-2.patch, HDFS-1017-Y20.patch


 Currently the JSP for browsing the file system calls getRemoteUser(), which 
 doesn't correctly authenticate the user on the web UI, causing failures when 
 trying to browse the filesystem.  It should call the utility method 
 JspHelper.getUGI instead.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (HDFS-1006) getImage/putImage http requests should be https for the case of security enabled.

2010-02-28 Thread Jakob Homan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-1006?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jakob Homan updated HDFS-1006:
--

Attachment: HDFS-1006-Y20.patch

New patch for Y20 branch, not to be committed.

 getImage/putImage http requests should be https for the case of security 
 enabled.
 -

 Key: HDFS-1006
 URL: https://issues.apache.org/jira/browse/HDFS-1006
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Boris Shkolnik
Assignee: Boris Shkolnik
 Attachments: HDFS-1006-BP20.patch, HDFS-1006-Y20.patch


 should use https:// and port 50475
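A client-side sketch of what the switch might look like; the /getimage servlet path and query string are assumptions, and the KerbSSL trust setup is omitted entirely:
{noformat}
import java.io.InputStream;
import java.net.URL;

// Sketch only: servlet path and query string are assumptions; KerbSSL setup omitted.
public class SecureImageFetchExample {
  public static void main(String[] args) throws Exception {
    String nnHttpsAddr = "nn.example.com:50475";   // address and port from the description
    URL url = new URL("https://" + nnHttpsAddr + "/getimage?getimage=1");
    InputStream in = url.openStream();
    try {
      byte[] buf = new byte[4096];
      while (in.read(buf) >= 0) { /* a real client would write this to the local fsimage */ }
    } finally {
      in.close();
    }
  }
}
{noformat}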

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Created: (HDFS-1004) Update NN to support Kerberized SSL from HADOOP-6584

2010-02-26 Thread Jakob Homan (JIRA)
Update NN to support Kerberized SSL from HADOOP-6584


 Key: HDFS-1004
 URL: https://issues.apache.org/jira/browse/HDFS-1004
 Project: Hadoop HDFS
  Issue Type: New Feature
  Components: name-node
Reporter: Jakob Homan
Assignee: Jakob Homan


Namenode needs to be tweaked to use the new Kerberos-backed SSL connector.  

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (HDFS-1004) Update NN to support Kerberized SSL from HADOOP-6584

2010-02-26 Thread Jakob Homan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-1004?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jakob Homan updated HDFS-1004:
--

Attachment: HDFS-1004.patch

Patch adds yet another doAs block and fixes a logic flaw in getUgi() for 
DfsServlet.  Unfortunately, it is not unit-testable.
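For readers unfamiliar with the doAs pattern being referred to, a generic sketch (not the actual DfsServlet change):
{noformat}
import java.security.PrivilegedExceptionAction;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.security.UserGroupInformation;

// Generic doAs sketch, not the DfsServlet code itself.
public class DoAsExample {
  static boolean existsAs(UserGroupInformation ugi, final Configuration conf,
                          final Path path) throws Exception {
    // Everything inside run() executes as 'ugi', not as the server's own login user.
    return ugi.doAs(new PrivilegedExceptionAction<Boolean>() {
      public Boolean run() throws Exception {
        return FileSystem.get(conf).exists(path);
      }
    });
  }
}
{noformat}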

 Update NN to support Kerberized SSL from HADOOP-6584
 

 Key: HDFS-1004
 URL: https://issues.apache.org/jira/browse/HDFS-1004
 Project: Hadoop HDFS
  Issue Type: New Feature
  Components: name-node
Reporter: Jakob Homan
Assignee: Jakob Homan
 Attachments: HDFS-1004.patch


 Namenode needs to be tweaked to use the new Kerberos-backed SSL connector.  

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (HDFS-994) Provide methods for obtaining delegation token from Namenode for hftp and other uses

2010-02-26 Thread Jakob Homan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-994?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jakob Homan updated HDFS-994:
-

Attachment: HDFS-994-3.patch

Updated patch verified in Kerberos environment.

 Provide methods for obtaining delegation token from Namenode for hftp and 
 other uses
 

 Key: HDFS-994
 URL: https://issues.apache.org/jira/browse/HDFS-994
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Jakob Homan
Assignee: Jakob Homan
 Attachments: HDFS-994-2.patch, HDFS-994-3.patch, HDFS-994.patch


 In hftp, destination clusters will require an RPC-version-agnostic means of 
 obtaining delegation tokens from the source cluster. The easiest method is to
 provide a webservice to retrieve a token over http.  This can be encrypted 
 via SSL (backed by Kerberos, done in another JIRA), providing security for 
 cross-cluster hftp operations.
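A client-side sketch of how such a webservice might be consumed; the servlet path and wire format here are assumptions, not the interface defined by this issue:
{noformat}
import java.io.DataInputStream;
import java.net.URL;
import org.apache.hadoop.hdfs.security.token.delegation.DelegationTokenIdentifier;
import org.apache.hadoop.security.token.Token;

// Sketch only: the servlet path and serialization format are assumptions.
public class FetchDelegationToken {
  public static void main(String[] args) throws Exception {
    URL url = new URL("http://nn.example.com:50070/getDelegationToken"); // hypothetical path
    DataInputStream in = new DataInputStream(url.openStream());
    try {
      Token<DelegationTokenIdentifier> token = new Token<DelegationTokenIdentifier>();
      token.readFields(in);   // Token is Writable, so it can be read straight off the wire
      System.out.println("Got token for service " + token.getService());
    } finally {
      in.close();
    }
  }
}
{noformat}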

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (HDFS-994) Provide methods for obtaining delegation token from Namenode for hftp and other uses

2010-02-26 Thread Jakob Homan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-994?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jakob Homan updated HDFS-994:
-

Status: Open  (was: Patch Available)

 Provide methods for obtaining delegation token from Namenode for hftp and 
 other uses
 

 Key: HDFS-994
 URL: https://issues.apache.org/jira/browse/HDFS-994
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Jakob Homan
Assignee: Jakob Homan
 Attachments: HDFS-994-2.patch, HDFS-994-3.patch, HDFS-994.patch


 In hftp, destination clusters will require an RPC-version-agnostic means of 
 obtaining delegation tokens from the source cluster. The easiest method is to
 provide a webservice to retrieve a token over http.  This can be encrypted 
 via SSL (backed by Kerberos, done in another JIRA), providing security for 
 cross-cluster hftp operations.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (HDFS-994) Provide methods for obtaining delegation token from Namenode for hftp and other uses

2010-02-26 Thread Jakob Homan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-994?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jakob Homan updated HDFS-994:
-

Status: Patch Available  (was: Open)

Submitting patch again; hopefully Hudson shows up this time.

 Provide methods for obtaining delegation token from Namenode for hftp and 
 other uses
 

 Key: HDFS-994
 URL: https://issues.apache.org/jira/browse/HDFS-994
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Jakob Homan
Assignee: Jakob Homan
 Attachments: HDFS-994-2.patch, HDFS-994-3.patch, HDFS-994.patch


 In hftp, destination clusters will require an RPC-version-agnostic means of 
 obtaining delegation tokens from the source cluster. The easiest method is to
 provide a webservice to retrieve a token over http.  This can be encrypted 
 via SSL (backed by Kerberos, done in another JIRA), providing security for 
 cross-cluster hftp operations.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (HDFS-1004) Update NN to support Kerberized SSL from HADOOP-6584

2010-02-26 Thread Jakob Homan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-1004?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jakob Homan updated HDFS-1004:
--

Status: Patch Available  (was: Open)

 Update NN to support Kerberized SSL from HADOOP-6584
 

 Key: HDFS-1004
 URL: https://issues.apache.org/jira/browse/HDFS-1004
 Project: Hadoop HDFS
  Issue Type: New Feature
  Components: name-node
Reporter: Jakob Homan
Assignee: Jakob Homan
 Attachments: HDFS-1004.patch


 Namenode needs to be tweaked to use the new Kerberos-backed SSL connector.  

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (HDFS-994) Provide methods for obtaining delegation token from Namenode for hftp and other uses

2010-02-26 Thread Jakob Homan (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-994?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=12838998#action_12838998
 ] 

Jakob Homan commented on HDFS-994:
--

The failed test is the known-bad cactus download issue.  Patch is ready for review.

 Provide methods for obtaining delegation token from Namenode for hftp and 
 other uses
 

 Key: HDFS-994
 URL: https://issues.apache.org/jira/browse/HDFS-994
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Jakob Homan
Assignee: Jakob Homan
 Attachments: HDFS-994-2.patch, HDFS-994-3.patch, HDFS-994.patch


 In hftp, destination clusters will require an RPC-version-agnostic means of 
 obtaining delegation tokens from the source cluster. The easiest method is to
 provide a webservice to retrieve a token over http.  This can be encrypted 
 via SSL (backed by Kerberos, done in another JIRA), providing security for 
 cross-cluster hftp operations.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (HDFS-994) Provide methods for obtaining delegation token from Namenode for hftp and other uses

2010-02-26 Thread Jakob Homan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-994?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jakob Homan updated HDFS-994:
-

Attachment: HDFS-994-4.patch

Thanks for the review Nicholas. Updated patch to include all suggestions.  

 Provide methods for obtaining delegation token from Namenode for hftp and 
 other uses
 

 Key: HDFS-994
 URL: https://issues.apache.org/jira/browse/HDFS-994
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Jakob Homan
Assignee: Jakob Homan
 Attachments: HDFS-994-2.patch, HDFS-994-3.patch, HDFS-994-4.patch, 
 HDFS-994.patch


 In hftp, destination clusters will require an RPC-version-agnostic means of 
 obtaining delegation tokens from the source cluster. The easiest method is to
 provide a webservice to retrieve a token over http.  This can be encrypted 
 via SSL (backed by Kerberos, done in another JIRA), providing security for 
 cross-cluster hftp operations.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (HDFS-994) Provide methods for obtaining delegation token from Namenode for hftp and other uses

2010-02-26 Thread Jakob Homan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-994?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jakob Homan updated HDFS-994:
-

Status: Open  (was: Patch Available)

 Provide methods for obtaining delegation token from Namenode for hftp and 
 other uses
 

 Key: HDFS-994
 URL: https://issues.apache.org/jira/browse/HDFS-994
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Jakob Homan
Assignee: Jakob Homan
 Attachments: HDFS-994-2.patch, HDFS-994-3.patch, HDFS-994-4.patch, 
 HDFS-994.patch


 In hftp, destination clusters will require an RPC-version-agnostic means of 
 obtaining delegation tokens from the source cluster. The easiest method is to
 provide a webservice to retrieve a token over http.  This can be encrypted 
 via SSL (backed by Kerberos, done in another JIRA), providing security for 
 cross-cluster hftp operations.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (HDFS-994) Provide methods for obtaining delegation token from Namenode for hftp and other uses

2010-02-26 Thread Jakob Homan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-994?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jakob Homan updated HDFS-994:
-

Status: Patch Available  (was: Open)

Paging Hudson for new review.

 Provide methods for obtaining delegation token from Namenode for hftp and 
 other uses
 

 Key: HDFS-994
 URL: https://issues.apache.org/jira/browse/HDFS-994
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Jakob Homan
Assignee: Jakob Homan
 Attachments: HDFS-994-2.patch, HDFS-994-3.patch, HDFS-994-4.patch, 
 HDFS-994.patch


 In hftp, destination clusters will require an RPC-version-agnostic means of 
 obtaining delegation tokens from the source cluster. The easiest method is to
 provide a webservice to retrieve a token over http.  This can be encrypted 
 via SSL (backed by Kerberos, done in another JIRA), providing security for 
 cross-cluster hftp operations.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (HDFS-994) Provide methods for obtaining delegation token from Namenode for hftp and other uses

2010-02-26 Thread Jakob Homan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-994?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jakob Homan updated HDFS-994:
-

Status: Patch Available  (was: Open)

submitting patch.

 Provide methods for obtaining delegation token from Namenode for hftp and 
 other uses
 

 Key: HDFS-994
 URL: https://issues.apache.org/jira/browse/HDFS-994
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Jakob Homan
Assignee: Jakob Homan
 Attachments: HDFS-994-2.patch, HDFS-994-3.patch, HDFS-994-4.patch, 
 HDFS-994-5.patch, HDFS-994.patch


 In hftp, destination clusters will require an RPC-version-agnostic means of 
 obtaining delegation tokens from the source cluster. The easiest method is to
 provide a webservice to retrieve a token over http.  This can be encrypted 
 via SSL (backed by Kerberos, done in another JIRA), providing security for 
 cross-cluster hftp operations.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (HDFS-994) Provide methods for obtaining delegation token from Namenode for hftp and other uses

2010-02-26 Thread Jakob Homan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-994?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jakob Homan updated HDFS-994:
-

Attachment: HDFS-994-5.patch

HDFS-991 caused this to go stale. Updated to use a configuration.

 Provide methods for obtaining delegation token from Namenode for hftp and 
 other uses
 

 Key: HDFS-994
 URL: https://issues.apache.org/jira/browse/HDFS-994
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Jakob Homan
Assignee: Jakob Homan
 Attachments: HDFS-994-2.patch, HDFS-994-3.patch, HDFS-994-4.patch, 
 HDFS-994-5.patch, HDFS-994.patch


 In hftp, destination clusters will require an RPC-version-agnostic means of 
 obtaining delegation tokens from the source cluster. The easiest method is to
 provide a webservice to retrieve a token over http.  This can be encrypted 
 via SSL (backed by Kerberos, done in another JIRA), providing security for 
 cross-cluster hftp operations.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (HDFS-994) Provide methods for obtaining delegation token from Namenode for hftp and other uses

2010-02-26 Thread Jakob Homan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-994?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jakob Homan updated HDFS-994:
-

Status: Open  (was: Patch Available)

 Provide methods for obtaining delegation token from Namenode for hftp and 
 other uses
 

 Key: HDFS-994
 URL: https://issues.apache.org/jira/browse/HDFS-994
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Jakob Homan
Assignee: Jakob Homan
 Attachments: HDFS-994-2.patch, HDFS-994-3.patch, HDFS-994-4.patch, 
 HDFS-994-5.patch, HDFS-994.patch


 In hftp, destination clusters will require an RPC-version-agnostic means of 
 obtaining delegation tokens from the source cluster. The easiest method is to
 provide a webservice to retrieve a token over http.  This can be encrypted 
 via SSL (backed by Kerberos, done in another JIRA), providing security for 
 cross-cluster hftp operations.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (HDFS-999) Secondary namenode should login using kerberos if security is configured

2010-02-24 Thread Jakob Homan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-999?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jakob Homan updated HDFS-999:
-

Hadoop Flags: [Reviewed]

+1 on patch. At the moment, we're using the same key for the NN and 2ndNN, but 
this may change at some point in the future. Do we want to save ourselves some 
work in the future and just create a second key/value pair for the 2ndNN now? 
It can certainly point to the same values as the NN now.
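A sketch of what the suggested second key/value pair might look like in use, falling back to the NN's values when the 2NN-specific keys are unset; the key names and the SecurityUtil.login overload are assumptions:
{noformat}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.security.SecurityUtil;

// Sketch only: key names are illustrative and the login overload is assumed.
public class SecondaryLoginExample {
  static void loginSecondary(Configuration conf) throws Exception {
    // Prefer 2NN-specific keys; fall back to the NN's keys when they are not set.
    String keytabKey = conf.get("dfs.secondary.namenode.keytab.file") != null
        ? "dfs.secondary.namenode.keytab.file" : "dfs.namenode.keytab.file";
    String principalKey = conf.get("dfs.secondary.namenode.kerberos.principal") != null
        ? "dfs.secondary.namenode.kerberos.principal" : "dfs.namenode.kerberos.principal";
    SecurityUtil.login(conf, keytabKey, principalKey);
  }
}
{noformat}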

 Secondary namenode should login using kerberos if security is configured
 

 Key: HDFS-999
 URL: https://issues.apache.org/jira/browse/HDFS-999
 Project: Hadoop HDFS
  Issue Type: New Feature
Reporter: Boris Shkolnik
Assignee: Boris Shkolnik
 Attachments: HDFS-999.patch


 Right now, if NameNode is configured to use Kerberos, SecondaryNameNode will 
 fail to start.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (HDFS-994) Provide methods for obtaining delegation token from Namenode for hftp and other uses

2010-02-24 Thread Jakob Homan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-994?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jakob Homan updated HDFS-994:
-

Attachment: HDFS-994-2.patch

Updated patch now with test-passing goodness.  Not sure where Hudson is, so ran 
full unit tests manually. All pass.  Test-patch is good too.
{noformat}[exec] +1 overall.  
[exec] 
[exec] +1 @author.  The patch does not contain any @author tags.
[exec] 
[exec] +1 tests included.  The patch appears to include 2 new or modified 
tests.
[exec] 
[exec] +1 javadoc.  The javadoc tool did not generate any warning messages.
[exec] 
[exec] +1 javac.  The applied patch does not increase the total number of 
javac compiler warnings.
[exec] 
[exec] +1 findbugs.  The patch does not introduce any new Findbugs warnings.
[exec] 
[exec] +1 release audit.  The applied patch does not increase the total 
number of release audit warnings.
{noformat}

 Provide methods for obtaining delegation token from Namenode for hftp and 
 other uses
 

 Key: HDFS-994
 URL: https://issues.apache.org/jira/browse/HDFS-994
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Jakob Homan
Assignee: Jakob Homan
 Attachments: HDFS-994-2.patch, HDFS-994.patch


 In hftp, destination clusters will require an RPC-version-agnostic means of 
 obtaining delegation tokens from the source cluster. The easiest method is to
 provide a webservice to retrieve a token over http.  This can be encrypted 
 via SSL (backed by Kerberos, done in another JIRA), providing security for 
 cross-cluster hftp operations.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (HDFS-994) Provide methods for obtaining delegation token from Namenode for hftp and other uses

2010-02-23 Thread Jakob Homan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-994?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jakob Homan updated HDFS-994:
-

Status: Patch Available  (was: Open)

submitting patch.

 Provide methods for obtaining delegation token from Namenode for hftp and 
 other uses
 

 Key: HDFS-994
 URL: https://issues.apache.org/jira/browse/HDFS-994
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Jakob Homan
Assignee: Jakob Homan
 Attachments: HDFS-994.patch


 In hftp, destination clusters will require an RPC-version-agnostic means of 
 obtaining delegation tokens from the source cluster. The easiest method is to
 provide a webservice to retrieve a token over http.  This can be encrypted 
 via SSL (backed by Kerberos, done in another JIRA), providing security for 
 cross-cluster hftp operations.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (HDFS-994) Provide methods for obtaining delegation token from Namenode for hftp and other uses

2010-02-23 Thread Jakob Homan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-994?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jakob Homan updated HDFS-994:
-

Attachment: HDFS-994.patch

Patch for review.  Manually tested in a secure environment; it works fine 
except for the webservice interface, since we don't yet have 
Kerberos-authenticated web interaction.  However, that path did fail in the 
correct way.

 Provide methods for obtaining delegation token from Namenode for hftp and 
 other uses
 

 Key: HDFS-994
 URL: https://issues.apache.org/jira/browse/HDFS-994
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Jakob Homan
Assignee: Jakob Homan
 Attachments: HDFS-994.patch


 In hftp, destination clusters will require an RPC-version-agnostic means of 
 obtaining delegation tokens from the source cluster. The easiest method is to
 provide a webservice to retrieve a token over http.  This can be encrypted 
 via SSL (backed by Kerberos, done in another JIRA), providing security for 
 cross-cluster hftp operations.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (HDFS-963) TestDelegationToken unit test throws error

2010-02-09 Thread Jakob Homan (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-963?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=12831724#action_12831724
 ] 

Jakob Homan commented on HDFS-963:
--

Closing as duplicate of HDFS-965.

 TestDelegationToken unit test throws error
 --

 Key: HDFS-963
 URL: https://issues.apache.org/jira/browse/HDFS-963
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: gary murry

 TestDelegationToken is throwing the following error on the current build: 
 Error Message
 User: RealUser is not allowed to impersonate proxyUser
 http://hudson.zones.apache.org/hudson/view/Hadoop/job/Hadoop-Hdfs-trunk/223/testReport/
  

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Reopened: (HDFS-830) change build.xml to look at lib's jars before ivy, to allow overwriting ivy's libraries.

2010-02-08 Thread Jakob Homan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-830?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jakob Homan reopened HDFS-830:
--


I'm going to go ahead and re-open this: we've been using the resolvers:internal 
method for a while and, as I had feared, it's a pain to keep straight which 
version is installed and when it is getting called.  Also, as noted above, 
there was no public discussion of this approach before it was added to the 
wiki.

My preference would be a new option, something like -Dadditional.jars=foo.jar, 
which would add those jars to the classpath before the other entries.  This 
would make it easy to automate upstream testing: build a patched common jar and 
then pass it to HDFS to be tested against (and so on for MR).  In any case, 
with so many patches flying around, locally installing temporary jars is not a 
good solution.

 change build.xml to look at lib's jars before ivy, to allow overwriting ivy's 
 libraries.
 

 Key: HDFS-830
 URL: https://issues.apache.org/jira/browse/HDFS-830
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Boris Shkolnik
 Attachments: HDFS-830.patch


 Currently build.xml looks first into ivy's locations ,before picking up jars 
 from lib directory.
 We need to change that to allow overwriting ivy's libs with local ones, by 
 putting them into lib.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (HDFS-938) Replace calls to UGI.getUserName() with UGI.getShortUserName()

2010-02-05 Thread Jakob Homan (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-938?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=12830284#action_12830284
 ] 

Jakob Homan commented on HDFS-938:
--

For the 938 backport, it looks like you got all the references in HDFS.  Since 
this patch is being backported in three pieces rather than the usual one, one 
question: is it correct that 
org/apache/hadoop/security/TestGroupMappingServiceRefresh.java is being patched 
here?

 Replace calls to UGI.getUserName() with UGI.getShortUserName()
 --

 Key: HDFS-938
 URL: https://issues.apache.org/jira/browse/HDFS-938
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: hdfs client, name-node
Reporter: Jakob Homan
Assignee: Jakob Homan
 Fix For: 0.22.0

 Attachments: contrib.ivy.jackson.patch, contrib.ivy.jackson.patch-1, 
 contrib.ivy.jackson.patch-1, contrib.ivy.jackson.patch-3, 
 HDFS-938-BP20-1.patch, HDFS-938.patch


 HADOOP-6526 details why UGI.getUserName() will not work to identify users. 
 Until the proposed UGI.getLocalName() is implemented, calls to getUserName() 
 should be replaced with the short name. 
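A minimal sketch of the replacement being described, using the standard UserGroupInformation accessors:
{noformat}
import org.apache.hadoop.security.UserGroupInformation;

// Minimal sketch of the getUserName() -> getShortUserName() replacement.
public class ShortNameExample {
  public static void main(String[] args) throws Exception {
    UserGroupInformation ugi = UserGroupInformation.getCurrentUser();
    // Full name, e.g. "jhoman/host.example.com@EXAMPLE.COM": unsuitable as an HDFS owner.
    String fullName = ugi.getUserName();
    // Short name, e.g. "jhoman": what ownership and permission checks expect.
    String shortName = ugi.getShortUserName();
    System.out.println(fullName + " -> " + shortName);
  }
}
{noformat}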

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Created: (HDFS-954) There are two security packages in hdfs, should be one

2010-02-05 Thread Jakob Homan (JIRA)
There are two security packages in hdfs, should be one
--

 Key: HDFS-954
 URL: https://issues.apache.org/jira/browse/HDFS-954
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Jakob Homan


Currently the test source tree has both
src/test/hdfs/org/apache/hadoop/hdfs/security with:
SecurityTestUtil.java
TestAccessToken.java
TestClientProtocolWithDelegationToken.java

and 
src/test/hdfs/org/apache/hadoop/security with:
TestDelegationToken.java
TestGroupMappingServiceRefresh.java
TestPermission.java

These should be combined into one package and possibly some things moved to 
common.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (HDFS-938) Replace calls to UGI.getUserName() with UGI.getShortUserName()

2010-02-05 Thread Jakob Homan (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-938?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=12830350#action_12830350
 ] 

Jakob Homan commented on HDFS-938:
--

+1

 Replace calls to UGI.getUserName() with UGI.getShortUserName()
 --

 Key: HDFS-938
 URL: https://issues.apache.org/jira/browse/HDFS-938
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: hdfs client, name-node
Reporter: Jakob Homan
Assignee: Jakob Homan
 Fix For: 0.22.0

 Attachments: contrib.ivy.jackson.patch, contrib.ivy.jackson.patch-1, 
 contrib.ivy.jackson.patch-1, contrib.ivy.jackson.patch-3, 
 HDFS-938-BP20-1.patch, HDFS-938-BP20-2.patch, HDFS-938.patch


 HADOOP-6526 details why UGI.getUserName() will not work to identify users. 
 Until the proposed UGI.getLocalName() is implemented, calls to getUserName() 
 should be replaced with the short name. 

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (HDFS-950) Add concat to FsShell

2010-02-04 Thread Jakob Homan (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-950?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=12829843#action_12829843
 ] 

Jakob Homan commented on HDFS-950:
--

I share Owen's concern. The bigger question is how to expose specific 
capabilities of the implementing filesystems to the command line.  In this
case, no other file system supports this operation as it is defined and 
implemented.  It may not be a good idea to provide this ability on the general 
interface to the dfs.
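For context, a sketch of how the operation is reached today, assuming the concat(Path, Path[]) signature from HDFS-222; it is only available by down-casting to DistributedFileSystem, which is why exposing it through the generic shell is debatable:
{noformat}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hdfs.DistributedFileSystem;

// Sketch, assuming the HDFS-222 concat(Path, Path[]) signature.
public class ConcatExample {
  public static void main(String[] args) throws Exception {
    FileSystem fs = FileSystem.get(new Configuration());
    if (fs instanceof DistributedFileSystem) {   // concat is HDFS-only
      DistributedFileSystem dfs = (DistributedFileSystem) fs;
      dfs.concat(new Path("/data/target"),
                 new Path[] { new Path("/data/part-0"), new Path("/data/part-1") });
    }
  }
}
{noformat}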

 Add concat to FsShell
 -

 Key: HDFS-950
 URL: https://issues.apache.org/jira/browse/HDFS-950
 Project: Hadoop HDFS
  Issue Type: New Feature
  Components: tools
Reporter: Eli Collins

 Would be nice if concat (HDFS-222) was exposed up to FsShell so users don't 
 have to use hadoop jar.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (HDFS-938) Replace calls to UGI.getUserName() with UGI.getShortUserName()

2010-02-04 Thread Jakob Homan (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-938?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=12829954#action_12829954
 ] 

Jakob Homan commented on HDFS-938:
--

For the contrib patch, the eclipse classpath hasn't been updated, leading to 
test-patch -1s.

 Replace calls to UGI.getUserName() with UGI.getShortUserName()
 --

 Key: HDFS-938
 URL: https://issues.apache.org/jira/browse/HDFS-938
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: hdfs client, name-node
Reporter: Jakob Homan
Assignee: Jakob Homan
 Fix For: 0.22.0

 Attachments: contrib.ivy.jackson.patch, HDFS-938-BP20-1.patch, 
 HDFS-938.patch


 HADOOP-6526 details why UGI.getUserName() will not work to identify users. 
 Until the proposed UGI.getLocalName() is implemented, calls to getUserName() 
 should be replaced with the short name. 

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (HDFS-938) Replace calls to UGI.getUserName() with UGI.getShortUserName()

2010-02-02 Thread Jakob Homan (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-938?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=12828740#action_12828740
 ] 

Jakob Homan commented on HDFS-938:
--

I can't reproduce the test failures:
{noformat}Testsuite: org.apache.hadoop.hdfs.server.datanode.TestDataNodeMetrics
Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 44.085 sec
---
Testsuite: org.apache.hadoop.hdfs.server.namenode.TestBackupNode
Tests run: 2, Failures: 0, Errors: 0, Time elapsed: 17.772 sec{noformat}
Contrib test failure is known bad cactus issue.
Going to commit.

 Replace calls to UGI.getUserName() with UGI.getShortUserName()
 --

 Key: HDFS-938
 URL: https://issues.apache.org/jira/browse/HDFS-938
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: hdfs client, name-node
Reporter: Jakob Homan
Assignee: Jakob Homan
 Attachments: HDFS-938.patch


 HADOOP-6526 details why UGI.getUserName() will not work to identify users. 
 Until the proposed UGI.getLocalName() is implemented, calls to getUserName() 
 should be replaced with the short name. 

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (HDFS-938) Replace calls to UGI.getUserName() with UGI.getShortUserName()

2010-02-01 Thread Jakob Homan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-938?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jakob Homan updated HDFS-938:
-

Status: Patch Available  (was: Open)

submitting patch.

 Replace calls to UGI.getUserName() with UGI.getShortUserName()
 --

 Key: HDFS-938
 URL: https://issues.apache.org/jira/browse/HDFS-938
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: hdfs client, name-node
Reporter: Jakob Homan
Assignee: Jakob Homan
 Attachments: HDFS-938.patch


 HADOOP-6526 details why UGI.getUserName() will not work to identify users. 
 Until the proposed UGI.getLocalName() is implemented, calls to getUserName() 
 should be replaced with the short name. 

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Assigned: (HDFS-936) TestProxyUtil failing test-patch builds

2010-01-29 Thread Jakob Homan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-936?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jakob Homan reassigned HDFS-936:


Assignee: Jakob Homan

 TestProxyUtil failing test-patch builds
 ---

 Key: HDFS-936
 URL: https://issues.apache.org/jira/browse/HDFS-936
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: contrib/hdfsproxy, test
Affects Versions: 0.22.0
Reporter: Todd Lipcon
Assignee: Jakob Homan

 TestProxy has been consistently failing HDFS test-patch builds the last day 
 or two:
 junit.framework.AssertionFailedError: null
   at 
 org.apache.hadoop.hdfsproxy.TestProxyUtil.testSendCommand(TestProxyUtil.java:43)

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (HDFS-922) Remove extra semicolon from HDFS-877 that really annoys Eclipse

2010-01-26 Thread Jakob Homan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-922?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jakob Homan updated HDFS-922:
-

Resolution: Fixed
Status: Resolved  (was: Patch Available)

I've committed this.

 Remove extra semicolon from HDFS-877 that really annoys Eclipse
 ---

 Key: HDFS-922
 URL: https://issues.apache.org/jira/browse/HDFS-922
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: test
Reporter: Jakob Homan
Assignee: Jakob Homan
Priority: Minor
 Attachments: HDFS-922.patch


 HDFS-877 introduced an extra semicolon on an empty line that Eclipse treats 
 as a syntax error and hence messes up its compilation.  

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (HDFS-905) Make changes to HDFS for the new UserGroupInformation APIs (HADOOP-6299)

2010-01-26 Thread Jakob Homan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-905?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jakob Homan updated HDFS-905:
-

Status: Open  (was: Patch Available)

 Make changes to HDFS for the new UserGroupInformation APIs (HADOOP-6299)
 

 Key: HDFS-905
 URL: https://issues.apache.org/jira/browse/HDFS-905
 Project: Hadoop HDFS
  Issue Type: New Feature
Reporter: Devaraj Das
Assignee: Jakob Homan
 Fix For: 0.22.0

 Attachments: HDFS-905-mark3.patch, HDFS-905.patch, HDFS-905.patch


 This is about moving the HDFS code to use the new UserGroupInformation API as 
 described in HADOOP-6299.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (HDFS-905) Make changes to HDFS for the new UserGroupInformation APIs (HADOOP-6299)

2010-01-26 Thread Jakob Homan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-905?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jakob Homan updated HDFS-905:
-

Attachment: HDFS-905-mark3.patch

Attaching final patch.  Passes all tests.  
Modified test-patch to use new common jar:
{noformat} [exec] +1 overall.  
 [exec] 
 [exec] +1 @author.  The patch does not contain any @author tags.
 [exec] 
 [exec] +1 tests included.  The patch appears to include 76 new or 
modified tests.
 [exec] 
 [exec] +1 javadoc.  The javadoc tool did not generate any warning 
messages.
 [exec] 
 [exec] +1 javac.  The applied patch does not increase the total number 
of javac compiler warnings.
 [exec] 
 [exec] +1 findbugs.  The patch does not introduce any new Findbugs 
warnings.
 [exec] 
 [exec] +1 release audit.  The applied patch does not increase the 
total number of release audit warnings.{noformat}

 Make changes to HDFS for the new UserGroupInformation APIs (HADOOP-6299)
 

 Key: HDFS-905
 URL: https://issues.apache.org/jira/browse/HDFS-905
 Project: Hadoop HDFS
  Issue Type: New Feature
Reporter: Devaraj Das
Assignee: Jakob Homan
 Fix For: 0.22.0

 Attachments: HDFS-905-mark3.patch, HDFS-905.patch, HDFS-905.patch


 This is about moving the HDFS code to use the new UserGroupInformation API as 
 described in HADOOP-6299.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Created: (HDFS-921) Convert TestDFSClientRetries::testNotYetReplicatedErrors to Mockito

2010-01-25 Thread Jakob Homan (JIRA)
Convert TestDFSClientRetries::testNotYetReplicatedErrors to Mockito
---

 Key: HDFS-921
 URL: https://issues.apache.org/jira/browse/HDFS-921
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: test
Reporter: Jakob Homan
Assignee: Jakob Homan


When TestDFSClientRetries::testNotYetReplicatedErrors was written, Mockito was 
not available and the NameNode was mocked by manually extending ClientProtocol 
and implementing all the methods, most with empty bodies.  Now that we have 
Mockito, this code can be removed and replaced with an actual mock.
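A minimal sketch of the Mockito-based replacement; the addBlock(String, String) signature below is assumed purely for illustration:
{noformat}
import static org.mockito.Mockito.anyString;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.when;

import org.apache.hadoop.hdfs.protocol.ClientProtocol;
import org.apache.hadoop.ipc.RemoteException;

// Sketch only: the addBlock signature is assumed for illustration.
public class MockNameNodeExample {
  static ClientProtocol mockNameNode() throws Exception {
    ClientProtocol namenode = mock(ClientProtocol.class);
    // Only the calls the test cares about need behavior; everything else is a no-op stub.
    when(namenode.addBlock(anyString(), anyString())).thenThrow(
        new RemoteException(
            "org.apache.hadoop.hdfs.server.namenode.NotReplicatedYetException",
            "File is not yet replicated"));
    return namenode;
  }
}
{noformat}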

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (HDFS-921) Convert TestDFSClientRetries::testNotYetReplicatedErrors to Mockito

2010-01-25 Thread Jakob Homan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-921?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jakob Homan updated HDFS-921:
-

Status: Patch Available  (was: Open)

submitting patch.

 Convert TestDFSClientRetries::testNotYetReplicatedErrors to Mockito
 ---

 Key: HDFS-921
 URL: https://issues.apache.org/jira/browse/HDFS-921
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: test
Reporter: Jakob Homan
Assignee: Jakob Homan
 Attachments: HDFS-921.patch


 When TestDFSClientRetries::testNotYetReplicatedErrors was written, Mockito 
 was not available and the NameNode was mocked by manually extending 
 ClientProtocol and implementing all the methods, most with empty bodies.  Now 
 that we have Mockito, this code can be removed and replaced with an actual 
 mock.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (HDFS-921) Convert TestDFSClientRetries::testNotYetReplicatedErrors to Mockito

2010-01-25 Thread Jakob Homan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-921?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jakob Homan updated HDFS-921:
-

Attachment: HDFS-921.patch

Attaching patch that removes the old, manually mocked Namenode and provides 
equivalent functionality via Mockito.

 Convert TestDFSClientRetries::testNotYetReplicatedErrors to Mockito
 ---

 Key: HDFS-921
 URL: https://issues.apache.org/jira/browse/HDFS-921
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: test
Reporter: Jakob Homan
Assignee: Jakob Homan
 Attachments: HDFS-921.patch


 When TestDFSClientRetries::testNotYetReplicatedErrors was written, Mockito 
 was not available and the NameNode was mocked by manually extending 
 ClientProtocol and implementing all the methods, most with empty bodies.  Now 
 that we have Mockito, this code can be removed and replaced with an actual 
 mock.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (HDFS-922) Remove extra semicolon from HDFS-877 that really annoys Eclipse

2010-01-25 Thread Jakob Homan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-922?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jakob Homan updated HDFS-922:
-

Attachment: HDFS-922.patch

Trivial patch.

 Remove extra semicolon from HDFS-877 that really annoys Eclipse
 ---

 Key: HDFS-922
 URL: https://issues.apache.org/jira/browse/HDFS-922
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: hdfs client
Reporter: Jakob Homan
Priority: Minor
 Attachments: HDFS-922.patch


 HDFS-877 introduced an extra semicolon on an empty line that Eclipse treats 
 as a syntax error and hence messes up its compilation.  

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Created: (HDFS-922) Remove extra semicolon from HDFS-877 that really annoys Eclipse

2010-01-25 Thread Jakob Homan (JIRA)
Remove extra semicolon from HDFS-877 that really annoys Eclipse
---

 Key: HDFS-922
 URL: https://issues.apache.org/jira/browse/HDFS-922
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: hdfs client
Reporter: Jakob Homan
Priority: Minor
 Attachments: HDFS-922.patch

HDFS-877 introduced an extra semicolon on an empty line that Eclipse treats as 
a syntax error and hence messes up its compilation.  

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (HDFS-922) Remove extra semicolon from HDFS-877 that really annoys Eclipse

2010-01-25 Thread Jakob Homan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-922?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jakob Homan updated HDFS-922:
-

Status: Patch Available  (was: Open)

submit patch.

 Remove extra semicolon from HDFS-877 that really annoys Eclipse
 ---

 Key: HDFS-922
 URL: https://issues.apache.org/jira/browse/HDFS-922
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: hdfs client
Reporter: Jakob Homan
Priority: Minor
 Attachments: HDFS-922.patch


 HDFS-877 introduced an extra semicolon on an empty line that Eclipse treats 
 as a syntax error and hence messes up its compilation.  

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Created: (HDFS-911) test-cactus fails with timeout error on trunk

2010-01-22 Thread Jakob Homan (JIRA)
test-cactus fails with timeout error on trunk
-

 Key: HDFS-911
 URL: https://issues.apache.org/jira/browse/HDFS-911
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: contrib/hdfsproxy
Reporter: Jakob Homan


test-cactus target is failing on trunk:
{noformat}
test-cactus:
 [echo]  Free Ports: startup-17053 / http-17054 / https-17055
 [echo] Please take a deep breath while Cargo gets the Tomcat for running 
the servlet tests...
 [copy] Copying 1 file to 
/private/tmp/zok/hadoop-hdfs/build/contrib/hdfsproxy/target/tomcat-config/conf
   [cactus] -
   [cactus] Running tests against Tomcat 5.x @ http://localhost:17054
   [cactus] -
   [cactus] Deploying 
[/private/tmp/zok/hadoop-hdfs/build/contrib/hdfsproxy/target/test.war] to 
[/private/tmp/zok/hadoop-hdfs/build/contrib/hdfsproxy/target/tomcat-config/
webapps]...
   [cactus] Tomcat 5.x starting...
   [cactus] Tomcat 5.x started on port [17054]

BUILD FAILED
/private/tmp/zok/hadoop-hdfs/src/contrib/hdfsproxy/build.xml:292: Failed to 
start the container after more than [18] ms. Trying to connect to the 
[http://localhost:170
54/test/ServletRedirector?Cactus_Service=RUN_TEST] test URL yielded a [-1] 
error code. Please run in debug mode for more details about the error.{noformat}

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (HDFS-905) Make changes to HDFS for the new UserGroupInformation APIs (HADOOP-6299)

2010-01-22 Thread Jakob Homan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-905?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jakob Homan updated HDFS-905:
-

Attachment: HDFS-905.patch

Updated patch: addresses Owen's comments, adds datanode/namenode logging in 
during startup, has raid removed, general cleanup.

 Make changes to HDFS for the new UserGroupInformation APIs (HADOOP-6299)
 

 Key: HDFS-905
 URL: https://issues.apache.org/jira/browse/HDFS-905
 Project: Hadoop HDFS
  Issue Type: New Feature
Reporter: Devaraj Das
Assignee: Jakob Homan
 Fix For: 0.22.0

 Attachments: HDFS-905.patch, HDFS-905.patch


 This is about moving the HDFS code to use the new UserGroupInformation API as 
 described in HADOOP-6299.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (HDFS-905) Make changes to HDFS for the new UserGroupInformation APIs (HADOOP-6299)

2010-01-22 Thread Jakob Homan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-905?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jakob Homan updated HDFS-905:
-

Status: Open  (was: Patch Available)

 Make changes to HDFS for the new UserGroupInformation APIs (HADOOP-6299)
 

 Key: HDFS-905
 URL: https://issues.apache.org/jira/browse/HDFS-905
 Project: Hadoop HDFS
  Issue Type: New Feature
Reporter: Devaraj Das
Assignee: Jakob Homan
 Fix For: 0.22.0

 Attachments: HDFS-905.patch, HDFS-905.patch


 This is about moving the HDFS code to use the new UserGroupInformation API as 
 described in HADOOP-6299.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (HDFS-905) Make changes to HDFS for the new UserGroupInformation APIs (HADOOP-6299)

2010-01-21 Thread Jakob Homan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-905?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jakob Homan updated HDFS-905:
-

Status: Patch Available  (was: Open)

submitting patch, but won't compile without updated common jar...

 Make changes to HDFS for the new UserGroupInformation APIs (HADOOP-6299)
 

 Key: HDFS-905
 URL: https://issues.apache.org/jira/browse/HDFS-905
 Project: Hadoop HDFS
  Issue Type: New Feature
Reporter: Devaraj Das
Assignee: Jakob Homan
 Fix For: 0.22.0

 Attachments: HDFS-905.patch


 This is about moving the HDFS code to use the new UserGroupInformation API as 
 described in HADOOP-6299.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Created: (HDFS-906) Remove MapReduce jars from raid lib directory

2010-01-18 Thread Jakob Homan (JIRA)
Remove MapReduce jars from raid lib directory
-

 Key: HDFS-906
 URL: https://issues.apache.org/jira/browse/HDFS-906
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: contrib/raid
Reporter: Jakob Homan


Currently copies of snapshots of the mapred and mapred-test jars are stored in 
the raid/lib directory, which creates a nasty circular dependency between the 
projects. These are needed for one unit test. Either the unit test should be 
refactored to not need mapreduce, or they should be pulled via ivy.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Resolved: (HDFS-906) Remove MapReduce jars from raid lib directory

2010-01-18 Thread Jakob Homan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-906?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jakob Homan resolved HDFS-906.
--

Resolution: Duplicate

Dhruba- Yep, different symptoms, but same cause.  Closing as duplicate.

 Remove MapReduce jars from raid lib directory
 -

 Key: HDFS-906
 URL: https://issues.apache.org/jira/browse/HDFS-906
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: contrib/raid
Reporter: Jakob Homan

 Currently copies of snapshots of the mapred and mapred-test jars are stored 
 in the raid/lib directory, which creates a nasty circular dependency between 
 the projects. These are needed for one unit test. Either the unit test should 
 be refactored to not need mapreduce, or they should be pulled via ivy.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (HDFS-526) TestBackupNode is currently flaky and shouldn't be in commit test

2010-01-04 Thread Jakob Homan (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-526?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=12796422#action_12796422
 ] 

Jakob Homan commented on HDFS-526:
--

Yep.

 TestBackupNode is currently flaky and shouldn't be in commit test
 -

 Key: HDFS-526
 URL: https://issues.apache.org/jira/browse/HDFS-526
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: test
Reporter: Jakob Homan

 As documented in HDFS-192 TestBackupNode is currently failing regularly and 
 is impacting our continuous integration tests.  Although it has good code 
 coverage value, perhaps it should be removed from the suite until its 
 reliability can be improved?

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Resolved: (HDFS-526) TestBackupNode is currently flaky and shouldn't be in commit test

2010-01-04 Thread Jakob Homan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-526?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jakob Homan resolved HDFS-526.
--

Resolution: Fixed

Closing since TestBackupNode is no longer flaky.  Thanks Konstantin.

 TestBackupNode is currently flaky and shouldn't be in commit test
 -

 Key: HDFS-526
 URL: https://issues.apache.org/jira/browse/HDFS-526
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: test
Reporter: Jakob Homan

 As documented in HDFS-192 TestBackupNode is currently failing regularly and 
 is impacting our continuous integration tests.  Although it has good code 
 coverage value, perhaps it should be removed from the suite until its 
 reliability can be improved?

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (HDFS-830) change build.xml to look at lib's jars before ivy, to allow overwriting ivy's libraries.

2009-12-16 Thread Jakob Homan (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-830?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=12791589#action_12791589
 ] 

Jakob Homan commented on HDFS-830:
--

I'm not wild about checking in jars just for testing, even if it is to your 
local mvn repository. That adds an extra step of cleaning them up.  Instead, 
how about if we have a property for Ant such as -Dusejar= and provide a list of 
jars to be included? I can see this making it very easy to write a script to 
test up-the-line changes from common and hdfs to hdfs and mapreduce. This would 
also save the time of having to copy the jars into the lib directory: build the 
common jar, point hdfs to it when running tests, with no extra checking in or 
copying muss or fuss.

 change build.xml to look at lib's jars before ivy, to allow overwriting ivy's 
 libraries.
 

 Key: HDFS-830
 URL: https://issues.apache.org/jira/browse/HDFS-830
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Boris Shkolnik
 Attachments: HDFS-830.patch


 Currently build.xml looks first into ivy's locations, before picking up jars 
 from the lib directory.
 We need to change that to allow overwriting ivy's libs with local ones, by 
 putting them into lib.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (HDFS-830) change build.xml to look at lib's jars before ivy, to allow overwriting ivy's libraries.

2009-12-16 Thread Jakob Homan (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-830?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=12791797#action_12791797
 ] 

Jakob Homan commented on HDFS-830:
--

It looks like the twiki has been changed to mandate the use of the internal 
maven repository approach even though we've not yet reached a consensus on the 
correct solution.  This is too bad; I still have concerns, as detailed above.

 change build.xml to look at lib's jars before ivy, to allow overwriting ivy's 
 libraries.
 

 Key: HDFS-830
 URL: https://issues.apache.org/jira/browse/HDFS-830
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Boris Shkolnik
 Attachments: HDFS-830.patch


 Currently build.xml looks first into ivy's locations, before picking up jars 
 from the lib directory.
 We need to change that to allow overwriting ivy's libs with local ones, by 
 putting them into lib.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (HDFS-595) FsPermission tests need to be updated for new octal configuration parameter from HADOOP-6234

2009-12-14 Thread Jakob Homan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-595?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jakob Homan updated HDFS-595:
-

Attachment: HDFS-595-Y20.patch

Attaching patch for Y!'s 20 branch.

 FsPermission tests need to be updated for new octal configuration parameter 
 from HADOOP-6234
 

 Key: HDFS-595
 URL: https://issues.apache.org/jira/browse/HDFS-595
 Project: Hadoop HDFS
  Issue Type: New Feature
  Components: hdfs client
Reporter: Jakob Homan
Assignee: Jakob Homan
 Fix For: 0.21.0

 Attachments: HDFS-595-Y20.patch, HDFS-595.patch, HDFS-595.patch


 HADOOP-6234 changed the format of the configuration umask value.  Tests that 
 use this value need to be updated.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (HDFS-823) In Checkpointer the getImage servlet is added to public rather than internal servlet list

2009-12-10 Thread Jakob Homan (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-823?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=12789009#action_12789009
 ] 

Jakob Homan commented on HDFS-823:
--

Looks like Hudson is falling down on the job again.  Manual test-patch:
{noformat}[exec] -1 overall.  
[exec] 
[exec] +1 @author.  The patch does not contain any @author tags.
[exec] 
[exec] -1 tests included.  The patch doesn't appear to include any new or 
modified tests.
[exec] Please justify why no new tests are needed for 
this patch.
[exec] Also please list what manual steps were 
performed to verify this patch.
[exec] 
[exec] +1 javadoc.  The javadoc tool did not generate any warning messages.
[exec] 
[exec] +1 javac.  The applied patch does not increase the total number of 
javac compiler warnings.
[exec] 
[exec] +1 findbugs.  The patch does not introduce any new Findbugs warnings.
[exec] 
[exec] +1 release audit.  The applied patch does not increase the total 
number of release audit warnings.{noformat}
No tests as explained above. Tests all pass locally except known-bad 
TestHDFSTrash.  Will commit to trunk and 21.


 In Checkpointer the getImage servlet is added to public rather than internal 
 servlet list
 -

 Key: HDFS-823
 URL: https://issues.apache.org/jira/browse/HDFS-823
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Jakob Homan
Assignee: Jakob Homan
 Attachments: HDFS-823.patch


 Checkpointer.java:99
 {code}
 httpServer.addServlet("getimage", "/getimage", GetImageServlet.class);{code}
 This should be addInternalServlet, as it is for Namenode to ensure this 
 servlet does not get filtered.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (HDFS-823) In Checkpointer the getImage servlet is added to public rather than internal servlet list

2009-12-10 Thread Jakob Homan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-823?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jakob Homan updated HDFS-823:
-

  Component/s: name-node
Affects Version/s: 0.22.0
   0.21.0
Fix Version/s: 0.22.0
   0.21.0

 In Checkpointer the getImage servlet is added to public rather than internal 
 servlet list
 -

 Key: HDFS-823
 URL: https://issues.apache.org/jira/browse/HDFS-823
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: name-node
Affects Versions: 0.21.0, 0.22.0
Reporter: Jakob Homan
Assignee: Jakob Homan
 Fix For: 0.21.0, 0.22.0

 Attachments: HDFS-823.patch


 Checkpointer.java:99
 {code}
 httpServer.addServlet("getimage", "/getimage", GetImageServlet.class);{code}
 This should be addInternalServlet, as it is for Namenode to ensure this 
 servlet does not get filtered.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Created: (HDFS-823) In Checkpointer the getImage servlet is added to public rather than internal servlet list

2009-12-09 Thread Jakob Homan (JIRA)
In Checkpointer the getImage servlet is added to public rather than internal 
servlet list
-

 Key: HDFS-823
 URL: https://issues.apache.org/jira/browse/HDFS-823
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Jakob Homan


Checkpointer.java:99
{code}
httpServer.addServlet("getimage", "/getimage", GetImageServlet.class);{code}
This should be addInternalServlet, as it is for Namenode to ensure this servlet 
does not get filtered.
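
For illustration only (this sketches the direction of the fix, not the actual patch), and assuming addInternalServlet takes the same (name, path spec, servlet class) arguments as addServlet:
{code}
// Current: the servlet goes onto the public servlet list and can be filtered.
httpServer.addServlet("getimage", "/getimage", GetImageServlet.class);

// Proposed: register it on the internal servlet list, as the Namenode does,
// so it is never subject to filtering.
httpServer.addInternalServlet("getimage", "/getimage", GetImageServlet.class);
{code}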

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (HDFS-823) In Checkpointer the getImage servlet is added to public rather than internal servlet list

2009-12-09 Thread Jakob Homan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-823?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jakob Homan updated HDFS-823:
-

Attachment: HDFS-823.patch

Trivial patch.  No tests, as this involves filtering, which we aren't using right now.

 In Checkpointer the getImage servlet is added to public rather than internal 
 servlet list
 -

 Key: HDFS-823
 URL: https://issues.apache.org/jira/browse/HDFS-823
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Jakob Homan
 Attachments: HDFS-823.patch


 Checkpointer.java:99
 {code}
 httpServer.addServlet("getimage", "/getimage", GetImageServlet.class);{code}
 This should be addInternalServlet, as it is for Namenode to ensure this 
 servlet does not get filtered.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (HDFS-823) In Checkpointer the getImage servlet is added to public rather than internal servlet list

2009-12-09 Thread Jakob Homan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-823?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jakob Homan updated HDFS-823:
-


submitting patch.

 In Checkpointer the getImage servlet is added to public rather than internal 
 servlet list
 -

 Key: HDFS-823
 URL: https://issues.apache.org/jira/browse/HDFS-823
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Jakob Homan
 Attachments: HDFS-823.patch


 Checkpointer.java:99
 {code}
 httpServer.addServlet("getimage", "/getimage", GetImageServlet.class);{code}
 This should be addInternalServlet, as it is for Namenode to ensure this 
 servlet does not get filtered.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (HDFS-606) ConcurrentModificationException in invalidateCorruptReplicas()

2009-12-08 Thread Jakob Homan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-606?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jakob Homan updated HDFS-606:
-

  Resolution: Fixed
Hadoop Flags: [Reviewed]
  Status: Resolved  (was: Patch Available)

This was committed to trunk Fri Sep 11 21:54:24 UTC 2009.  Resolving as fixed.

 ConcurrentModificationException in invalidateCorruptReplicas()
 --

 Key: HDFS-606
 URL: https://issues.apache.org/jira/browse/HDFS-606
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: name-node
Affects Versions: 0.21.0
Reporter: Konstantin Shvachko
Assignee: Konstantin Shvachko
 Fix For: 0.21.0

 Attachments: CMEinCorruptReplicas.patch


 {{BlockManager.invalidateCorruptReplicas()}} iterates over 
 DatanodeDescriptor-s while removing corrupt replicas from the descriptors. 
 This causes {{ConcurrentModificationException}} if there is more than one 
 replicas of the block. I ran into this exception debugging different 
 scenarios in append, but it should be fixed in the trunk too.
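
As a generic illustration of this failure mode (plain Java, not the HDFS code): removing elements from an ArrayList while iterating over it with a for-each loop fails, while removing through the iterator is safe.
{code}
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;

public class CmeExample {
  public static void main(String[] args) {
    List<String> replicas = new ArrayList<String>();
    replicas.add("replica-1");
    replicas.add("replica-2");
    replicas.add("replica-3");

    // Throws ConcurrentModificationException on the next iteration:
    // for (String r : replicas) {
    //   replicas.remove(r);
    // }

    // Safe: remove through the iterator instead.
    for (Iterator<String> it = replicas.iterator(); it.hasNext(); ) {
      it.next();
      it.remove();
    }
    System.out.println(replicas.isEmpty()); // prints true
  }
}
{code}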

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (HDFS-781) Metrics PendingDeletionBlocks is not decremented

2009-12-04 Thread Jakob Homan (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-781?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=12786096#action_12786096
 ] 

Jakob Homan commented on HDFS-781:
--

+1 for both new patches.

 Metrics PendingDeletionBlocks is not decremented
 

 Key: HDFS-781
 URL: https://issues.apache.org/jira/browse/HDFS-781
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: name-node
Affects Versions: 0.20.2, 0.21.0, 0.22.0
Reporter: Suresh Srinivas
Assignee: Suresh Srinivas
Priority: Blocker
 Attachments: hdfs-781.1.patch, hdfs-781.2.patch, hdfs-781.3.patch, 
 hdfs-781.4.patch, hdfs-781.patch, hdfs-781.rel20.patch


 PendingDeletionBlocks is not decremented when blocks pending 
 deletion in {{BlockManager.recentInvalidateSets}} are sent to datanode for 
 deletion. This results in invalid value in the metrics.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (HDFS-781) Metrics PendingDeletionBlocks is not decremented

2009-12-03 Thread Jakob Homan (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-781?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=12785458#action_12785458
 ] 

Jakob Homan commented on HDFS-781:
--

Patch looks good except that, if we're (correctly) improving the test directory 
name, we should also fix its writing directly to /tmp rather than using the 
build property ({code}System.getProperty("test.build.data", "/tmp"){code}) 
(HADOOP-5916).
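
For illustration, a minimal sketch of the test.build.data idiom mentioned above (the directory name here is hypothetical):
{code}
import java.io.File;

public class TestDataDirExample {
  public static void main(String[] args) {
    // Fall back to /tmp only when the build property is not set.
    File base = new File(System.getProperty("test.build.data", "/tmp"));
    File testDir = new File(base, "testPendingDeletionBlocks"); // hypothetical name
    System.out.println("test data goes under " + testDir);
  }
}
{code}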


 Metrics PendingDeletionBlocks is not decremented
 

 Key: HDFS-781
 URL: https://issues.apache.org/jira/browse/HDFS-781
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: name-node
Affects Versions: 0.20.2, 0.21.0, 0.22.0
Reporter: Suresh Srinivas
Assignee: Suresh Srinivas
 Attachments: hdfs-781.1.patch, hdfs-781.2.patch, hdfs-781.patch


 PendingDeletionBlocks is not decremented when blocks pending 
 deletion in {{BlockManager.recentInvalidateSets}} are sent to datanode for 
 deletion. This results in invalid value in the metrics.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (HDFS-85) FSEditLog.open should stop going on if cannot open any directory

2009-12-03 Thread Jakob Homan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-85?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jakob Homan updated HDFS-85:


Status: Open  (was: Patch Available)

Cancelling patch.  It appears that the patch has not passed tests yet (and 
probably is stale at this point).  Wang, please don't delete previous patches, 
as it makes it hard to track the progress of the patch.  Just number the 
patches as you submit them.  

 FSEditLog.open should stop going on if cannot open any directory
 

 Key: HDFS-85
 URL: https://issues.apache.org/jira/browse/HDFS-85
 Project: Hadoop HDFS
  Issue Type: Bug
 Environment: CentOS 5.2, jdk 1.6, hadoop 0.19.1
Reporter: Wang Xu
Assignee: Wang Xu
 Attachments: fseditlog-open.patch

   Original Estimate: 1h
  Remaining Estimate: 1h

 FSEditLog.open will be invoked when SecondaryNameNode runs doCheckPoint.
 If no dir is opened successfully, it only prints some WARN messages in the log
 and goes on running. 
 However, this causes editStreams to become empty so it cannot be synced.
 And if editStreams is decreased to 0 when exceptions occur during
 logsync, the NameNode prints a FATAL log message and halts itself. Hence,
 we think it should also stop itself at that time.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (HDFS-707) Remove unused method INodeFile.toINodeFileUnderConstruction()

2009-12-03 Thread Jakob Homan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-707?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jakob Homan updated HDFS-707:
-

Resolution: Duplicate
Status: Resolved  (was: Patch Available)

This code was implemented in HDFS-736, which has been committed.  Closing as 
(ex post facto) duplicate.

 Remove unused method INodeFile.toINodeFileUnderConstruction()
 -

 Key: HDFS-707
 URL: https://issues.apache.org/jira/browse/HDFS-707
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: name-node
Affects Versions: 0.21.0
Reporter: Konstantin Shvachko
Assignee: Konstantin Shvachko
 Fix For: 0.21.0

 Attachments: notUsed.patch


 {{INodeFile.toINodeFileUnderConstruction()}} is currently not called anywhere 
 and therefore should be removed.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (HDFS-781) Metrics PendingDeletionBlocks is not decremented

2009-12-03 Thread Jakob Homan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-781?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jakob Homan updated HDFS-781:
-

Hadoop Flags: [Reviewed]

 Metrics PendingDeletionBlocks is not decremented
 

 Key: HDFS-781
 URL: https://issues.apache.org/jira/browse/HDFS-781
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: name-node
Affects Versions: 0.20.2, 0.21.0, 0.22.0
Reporter: Suresh Srinivas
Assignee: Suresh Srinivas
 Attachments: hdfs-781.1.patch, hdfs-781.2.patch, hdfs-781.3.patch, 
 hdfs-781.patch


 PendingDeletionBlocks is not decremented when blocks pending 
 deletion in {{BlockManager.recentInvalidateSets}} are sent to datanode for 
 deletion. This results in invalid value in the metrics.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (HDFS-802) Update Eclipse configuration to match changes to Ivy configuration

2009-12-02 Thread Jakob Homan (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-802?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=12784984#action_12784984
 ] 

Jakob Homan commented on HDFS-802:
--

We need to have this be an automatic process.  This happens pretty much 
every time somebody updates the libraries.  Is there any way to pull the values 
from the ivy config file?

 Update Eclipse configuration to match changes to Ivy configuration
 --

 Key: HDFS-802
 URL: https://issues.apache.org/jira/browse/HDFS-802
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: build
Affects Versions: 0.22.0
Reporter: Edwin Chan
 Attachments: hdfsClasspath.patch


 The .eclipse_templates/.classpath file doesn't match the Ivy configuration, 
 so I've updated it to match.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (HDFS-794) many of 10 minutes verification tests are failing without an error code

2009-11-30 Thread Jakob Homan (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-794?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=12783884#action_12783884
 ] 

Jakob Homan commented on HDFS-794:
--

The ten-minute-tests are a subset of the larger, complete test set, and are run 
in ant via the same process.  So, it shouldn't just be the 10-min tests that 
are failing.  If they are, the tests should be failing in both runs.  Is this 
the case?

 many of 10 minutes verification tests are failing without an error code
 ---

 Key: HDFS-794
 URL: https://issues.apache.org/jira/browse/HDFS-794
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: build, test
Affects Versions: 0.21.0, 0.22.0
Reporter: Konstantin Boudnik
Priority: Blocker

 Many tests from 10 minutes verification are failing silently. Hudson reports 
 them as PASSED.
 Because of this problem patches aren't properly verified since at least 
 November 11th, 2009.
 This problem can be observed both in 0.21 and 0.22.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (HDFS-785) Missing license header in java source files.

2009-11-24 Thread Jakob Homan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-785?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jakob Homan updated HDFS-785:
-

  Component/s: documentation
 Priority: Minor  (was: Major)
Fix Version/s: (was: 0.20.2)
   (was: 0.20.1)
   (was: 0.21.0)
 Hadoop Flags: [Reviewed]

+1. Patch is fine.

 Missing license header in java source files. 
 -

 Key: HDFS-785
 URL: https://issues.apache.org/jira/browse/HDFS-785
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: documentation
Affects Versions: 0.20.1, 0.20.2, 0.21.0, 0.22.0
Reporter: Ravi Phulari
Assignee: Ravi Phulari
Priority: Minor
 Fix For: 0.22.0

 Attachments: HDFS-785.patch


 Following java source files are missing license header. 
 {noformat}
 src/test/hdfs/org/apache/hadoop/hdfs/server/namenode/TestComputeInvalidateWork.java
 src/test/hdfs/org/apache/hadoop/hdfs/server/namenode/TestHeartbeatHandling.java
 src/test/hdfs/org/apache/hadoop/hdfs/server/namenode/TestNodeCount.java
 src/test/hdfs/org/apache/hadoop/hdfs/server/namenode/TestStartup.java
 src/test/hdfs/org/apache/hadoop/hdfs/server/namenode/TestUnderReplicatedBlocks.java
 {noformat}

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (HDFS-760) fs -put fails if dfs.umask is set to 63

2009-11-17 Thread Jakob Homan (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-760?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=12779270#action_12779270
 ] 

Jakob Homan commented on HDFS-760:
--

Some info: This is being caused by a problem in the configuration.  The stack 
trace: 
{noformat}
java.lang.IllegalArgumentException: 63
at 
org.apache.hadoop.fs.permission.PermissionParser.<init>(PermissionParser.java:54)
at 
org.apache.hadoop.fs.permission.UmaskParser.<init>(UmaskParser.java:37)
at 
org.apache.hadoop.fs.permission.FsPermission.getUMask(FsPermission.java:204)
at org.apache.hadoop.hdfs.DFSClient.create(DFSClient.java:569)
at org.apache.hadoop.hdfs.DFSClient.create(DFSClient.java:537)
at 
org.apache.hadoop.hdfs.DistributedFileSystem.create(DistributedFileSystem.java:213)
at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:547)
at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:528)
at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:435)
at org.apache.hadoop.fs.FileUtil.copy(FileUtil.java:219)
at org.apache.hadoop.fs.FileUtil.copy(FileUtil.java:192)
at org.apache.hadoop.fs.FileUtil.copy(FileUtil.java:156)
at 
org.apache.hadoop.fs.FileSystem.copyFromLocalFile(FileSystem.java:1533)
at org.apache.hadoop.fs.FsShell.copyFromLocal(FsShell.java:129)
at org.apache.hadoop.fs.FsShell.run(FsShell.java:1837)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:65)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:79)
at org.apache.hadoop.fs.FsShell.main(FsShell.java:1974)
{noformat}
is misleading.
dfs.umask is the old-style key for setting the umask and is correctly read in 
by the Configuration as the old-style. However, the section of the code that 
determines whether or not to treat it as an old-style, decimal value (that 
should be converted to octal before being passed to the PermissionParser) is 
being given the wrong answer by the configuration:
{code}
if(conf != null) {
  String confUmask = conf.get(UMASK_LABEL);
  if(confUmask != null) {  // UMASK_LABEL is set
    if(conf.deprecatedKeyWasSet(DEPRECATED_UMASK_LABEL))  // <--- this is returning false but should be true
      umask = Integer.parseInt(confUmask); // Evaluate as decimal value
    else
      umask = new UmaskParser(confUmask).getUMask();  // <--- therefore this tries to parse the decimal as padded octal and fails
  }
}
{code}
The code in Configuration that checks whether or not a deprecated value was set 
is returning false, though it shouldn't be (specifically 
deprecatedKeyMap.get(oldKey).accessed is still set to false and should be 
true).  I'll look more tomorrow.

Regardless of the reason, we should probably have better exception handling in 
the FsShell.  The exception thrown from PermissionParser should be more 
descriptive so that when it hits the user there is a better sense of what went 
wrong.
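
For reference, a small self-contained check of the decimal/octal relationship described above: the old-style value is read as a plain decimal integer, and 63 decimal is 077 octal.
{code}
public class UmaskValueCheck {
  public static void main(String[] args) {
    // Old-style dfs.umask values are plain decimal integers.
    int oldStyle = Integer.parseInt("63");
    // 63 decimal == 077 octal, the umask that strips all group and other permissions.
    System.out.println("decimal " + oldStyle + " == octal 0" + Integer.toOctalString(oldStyle));
  }
}
{code}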

 fs -put fails if dfs.umask is set to 63
 -

 Key: HDFS-760
 URL: https://issues.apache.org/jira/browse/HDFS-760
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 0.21.0
Reporter: Tsz Wo (Nicholas), SZE
Priority: Blocker
 Fix For: 0.21.0, 0.22.0


 Add the following to hdfs-site.conf
 {noformat}
   <property>
     <name>dfs.umask</name>
     <value>63</value>
   </property>
 {noformat}
 Then run hadoop fs -put
 {noformat}
 -bash-3.1$ ./bin/hadoop fs -put README.txt r.txt
 09/11/09 23:09:07 WARN conf.Configuration: mapred.task.id is deprecated. 
 Instead, use mapreduce.task.attempt.id
 put: 63
 Usage: java FsShell [-put <localsrc> ... <dst>]
 -bash-3.1$
 {noformat}
 Observed the above behavior in 0.21.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (HDFS-669) Add unit tests

2009-11-12 Thread Jakob Homan (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-669?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=12777241#action_12777241
 ] 

Jakob Homan commented on HDFS-669:
--

OK, latest patch looks fine once the script has been run (or the unit directory 
manually created).  +1

Of note, I tried adding a unit test into the commit-test.txt and running ant 
run-commit-test, but the unit test was not executed.  This should be fixed so 
that unit tests included in the file are run, since hopefully that will be the 
source of most of them.

Also, another JIRA should be opened to change the directory from hdfs to 
functional, so that we can start separating out the tests and not have a 
confusing directory name.

As part of test-patch it may be worth enforcing that all tests included in 
/unit don't run more than a few seconds.  This will provide a bit of a stick 
for enforcing that we end up with only real unit tests.
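
One possible per-test way to enforce such a limit (just a sketch of the idea, not necessarily how test-patch would do it) is JUnit 4's timeout attribute:
{code}
import static org.junit.Assert.assertEquals;
import org.junit.Test;

public class ExampleUnitTest {
  // Fails the test if it takes longer than two seconds to run.
  @Test(timeout = 2000)
  public void testRunsQuickly() {
    assertEquals(2, 1 + 1);
  }
}
{code}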

 Add unit tests 
 ---

 Key: HDFS-669
 URL: https://issues.apache.org/jira/browse/HDFS-669
 Project: Hadoop HDFS
  Issue Type: Test
  Components: test
Affects Versions: 0.21.0, 0.22.0
Reporter: Eli Collins
Assignee: Konstantin Boudnik
 Attachments: HDFS-669.patch, HDFS-669.patch, HDFS-669.patch, 
 HDFS-669.patch, HDFS-669.patch, HDFS-669.patch, HDFS-669.patch, HDFS-669.sh, 
 HDFS669.patch


 Most HDFS tests are functional tests that test a feature end to end by 
 running a mini cluster. We should add more tests like TestReplication that 
 attempt to stress individual classes in isolation, ie by stubbing out 
 dependencies without running a mini cluster. This allows for more fine-grain 
 testing and making tests run much more quickly because they avoid the cost of 
 cluster setup and teardown. If it makes sense to use another framework 
 besides junit we should standardize with MAPREDUCE-1050. 

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (HDFS-669) Add unit tests

2009-11-12 Thread Jakob Homan (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-669?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=12777263#action_12777263
 ] 

Jakob Homan commented on HDFS-669:
--

I take back the Hudson comment.  Hudson will bomb without the new unit 
directory.  The current, stripped-down version of the patch doesn't include any 
Java, so Hudson won't be helpful.  +1 on going without it. 

 Add unit tests 
 ---

 Key: HDFS-669
 URL: https://issues.apache.org/jira/browse/HDFS-669
 Project: Hadoop HDFS
  Issue Type: Test
  Components: test
Affects Versions: 0.21.0, 0.22.0
Reporter: Eli Collins
Assignee: Konstantin Boudnik
 Attachments: HDFS-669.patch, HDFS-669.patch, HDFS-669.patch, 
 HDFS-669.patch, HDFS-669.patch, HDFS-669.patch, HDFS-669.patch, HDFS-669.sh, 
 HDFS669.patch


 Most HDFS tests are functional tests that test a feature end to end by 
 running a mini cluster. We should add more tests like TestReplication that 
 attempt to stress individual classes in isolation, ie by stubbing out 
 dependencies without running a mini cluster. This allows for more fine-grain 
 testing and making tests run much more quickly because they avoid the cost of 
 cluster setup and teardown. If it makes sense to use another framework 
 besides junit we should standardize with MAPREDUCE-1050. 

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (HDFS-669) Add unit tests

2009-11-11 Thread Jakob Homan (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-669?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=12776681#action_12776681
 ] 

Jakob Homan commented on HDFS-669:
--

* I ran with the patch using -Dtestcase=TestFSNamesystem and the test was 
executed twice.  
* The testDirNQuota() test concerns me as it is not readily apparent what role 
the spied-on instance is playing and thus may not be a good example of how to 
use Mockito.  At the very least, an explanation of how the spy is playing a 
role would be good.  This is in comparison to the other test, where it is clear 
that the isInSameMode() call is intercepted and re-defined (see the generic 
Mockito sketch below).
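
For readers unfamiliar with the spy/stub distinction being discussed, a generic Mockito sketch (unrelated to the actual patch) of intercepting and re-defining a single method on an otherwise real object:
{code}
import static org.mockito.Mockito.doReturn;
import static org.mockito.Mockito.spy;

import java.util.ArrayList;
import java.util.List;

public class SpyExample {
  public static void main(String[] args) {
    List<String> spied = spy(new ArrayList<String>());

    // Only size() is intercepted and re-defined; every other call
    // still goes to the real ArrayList underneath.
    doReturn(42).when(spied).size();

    spied.add("x");                    // real behavior
    System.out.println(spied.size());  // prints 42 (stubbed)
    System.out.println(spied.get(0));  // prints "x" (real)
  }
}
{code}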

 Add unit tests 
 ---

 Key: HDFS-669
 URL: https://issues.apache.org/jira/browse/HDFS-669
 Project: Hadoop HDFS
  Issue Type: Test
  Components: test
Reporter: Eli Collins
Assignee: Konstantin Boudnik
 Attachments: HDFS-669.patch, HDFS-669.patch, HDFS-669.patch, 
 HDFS-669.patch, HDFS669.patch


 Most HDFS tests are functional tests that test a feature end to end by 
 running a mini cluster. We should add more tests like TestReplication that 
 attempt to stress individual classes in isolation, ie by stubbing out 
 dependencies without running a mini cluster. This allows for more fine-grain 
 testing and making tests run much more quickly because they avoid the cost of 
 cluster setup and teardown. If it makes sense to use another framework 
 besides junit we should standardize with MAPREDUCE-1050. 

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (HDFS-669) Add unit tests

2009-11-11 Thread Jakob Homan (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-669?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=12776820#action_12776820
 ] 

Jakob Homan commented on HDFS-669:
--

Cool, Cos.  I'll look at it first thing in the morning.

 Add unit tests 
 ---

 Key: HDFS-669
 URL: https://issues.apache.org/jira/browse/HDFS-669
 Project: Hadoop HDFS
  Issue Type: Test
  Components: test
Affects Versions: 0.21.0, 0.22.0
Reporter: Eli Collins
Assignee: Konstantin Boudnik
 Attachments: HDFS-669.patch, HDFS-669.patch, HDFS-669.patch, 
 HDFS-669.patch, HDFS-669.patch, HDFS-669.patch, HDFS669.patch


 Most HDFS tests are functional tests that test a feature end to end by 
 running a mini cluster. We should add more tests like TestReplication that 
 attempt to stress individual classes in isolation, ie by stubbing out 
 dependencies without running a mini cluster. This allows for more fine-grain 
 testing and making tests run much more quickly because they avoid the cost of 
 cluster setup and teardown. If it makes sense to use another framework 
 besides junit we should standardize with MAPREDUCE-1050. 

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (HDFS-361) Convert FSImage.removedStorageDirs into a map.

2009-11-05 Thread Jakob Homan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-361?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jakob Homan updated HDFS-361:
-

Status: Open  (was: Patch Available)

Canceling patch to address Konstantin's comments.

 Convert FSImage.removedStorageDirs into a map.
 --

 Key: HDFS-361
 URL: https://issues.apache.org/jira/browse/HDFS-361
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Konstantin Shvachko
Assignee: Boris Shkolnik
 Attachments: HADOOP-5619.patch


 {{FSImage.removedStorageDirs}} is declared as an {{ArrayList}}. In order to 
 avoid adding the same directory twice into {{removedStorageDirs}} we should 
 convert it into a map.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (HDFS-712) Move libhdfs from mr to hdfs

2009-11-03 Thread Jakob Homan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-712?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jakob Homan updated HDFS-712:
-

Status: Open  (was: Patch Available)

Canceling patch for Eli to upload the new one without auto-generated content.

 Move libhdfs from mr to hdfs 
 -

 Key: HDFS-712
 URL: https://issues.apache.org/jira/browse/HDFS-712
 Project: Hadoop HDFS
  Issue Type: Task
Reporter: Eli Collins
Assignee: Eli Collins
 Attachments: addlibhdfs.patch


 Here's an hdfs jira for MAPREDUCE-665. During the project split libhdfs was 
 put in the mapreduce repo instead of hdfs, lets move it to hdfs.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (HDFS-747) Add JSure annotations to HDFS code

2009-11-02 Thread Jakob Homan (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-747?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=12772734#action_12772734
 ] 

Jakob Homan commented on HDFS-747:
--

It may be good to provide some background on what these annotations provide.

 Add JSure annotations to HDFS code
 --

 Key: HDFS-747
 URL: https://issues.apache.org/jira/browse/HDFS-747
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Konstantin Boudnik

 Some initial annotations were developed for a number of HDFS classes. They 
 need to be committed to the source code.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (HDFS-740) rm and rmr fail to correctly move the user's files to the trash prior to deleting when they are over quota.

2009-10-28 Thread Jakob Homan (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-740?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=12771222#action_12771222
 ] 

Jakob Homan commented on HDFS-740:
--

It looks like Gary did his test slightly differently: in his test the Trash 
directory doesn't exist beforehand and there isn't quota available to even 
create it.  At which point we hit:
{code}for (int i = 0; i < 2; i++) {
  try {
    if (!fs.mkdirs(baseTrashPath, PERMISSION)) {  // create current
      LOG.warn("Can't create trash directory: " + baseTrashPath);
      return false;
    }
  } catch (IOException e) {
    LOG.warn("Can't create trash directory: " + baseTrashPath);
    return false;
  }
{code}
This false gets percolated up to FsShell:delete which interprets the failure as 
instruction to go ahead with the hard delete:
{code}  if (trashTmp.moveToTrash(src)) {
    System.out.println("Moved to trash: " + src);
    return;
  }
}

if (srcFs.delete(src, true)) {{code}  It would probably be better to throw an 
exception rather than return false, so that the deletion doesn't go ahead.
This is not new behavior, it's been around since at least January (the farthest 
back into the repo I went).  Looks like it just hasn't been tested.
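
A minimal sketch of the suggested direction (using the names from the snippets above; this is not the actual patch): surface the failure as an exception so FsShell cannot mistake it for permission to hard-delete.
{code}
if (!fs.mkdirs(baseTrashPath, PERMISSION)) {
  // Rather than LOG.warn(...) and return false, fail loudly so the
  // caller does not fall through to srcFs.delete(src, true).
  throw new IOException("Failed to create trash directory: " + baseTrashPath);
}
{code}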

 rm and rmr fail to correctly move the user's files to the trash prior to 
 deleting when they are over quota.  
 -

 Key: HDFS-740
 URL: https://issues.apache.org/jira/browse/HDFS-740
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 0.20.1
Reporter: gary murry

 With trash turned on, if a user is over his quota and does a rm (or rmr), the 
 file is deleted without a copy being placed in the trash.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (HDFS-740) rm and rmr fail to correctly move the user's files to the trash prior to deleting when they are over quota.

2009-10-28 Thread Jakob Homan (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-740?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=12771252#action_12771252
 ] 

Jakob Homan commented on HDFS-740:
--

A quick instrumentation of the code and Gary was able to prove mkdir is throwing an 
exception, but it's an odd one:
{noformat}
an IOException from mkdirs: 
org.apache.hadoop.hdfs.protocol.DSQuotaExceededException: The DiskSpace quota 
of /user/hadoopqa is exceeded: quota=1 diskspace consumed=19.6g
{noformat}
I didn't think that creating a directory would count against your DSQuota.  
Still poking around.
At the very least this code should be fixed not to silently catch the exception. 
 After Boris' fix (HADOOP-6203), the log message at least indicates there was 
an exception rather than a false return value, but it still swallows the exception 
and doesn't log what it actually was.
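
For illustration, the kind of change being suggested for the catch block shown earlier (a sketch only): hand the caught exception to the logger so its type and stack trace are recorded instead of being swallowed.
{code}
} catch (IOException e) {
  // Log the exception itself, not just a generic warning message.
  LOG.warn("Can't create trash directory: " + baseTrashPath, e);
  return false;
}
{code}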

 rm and rmr fail to correctly move the user's files to the trash prior to 
 deleting when they are over quota.  
 -

 Key: HDFS-740
 URL: https://issues.apache.org/jira/browse/HDFS-740
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 0.20.1
Reporter: gary murry

 With trash turned on, if a user is over his quota and does a rm (or rmr), the 
 file is deleted without a copy being placed in the trash.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (HDFS-726) Eclipse .classpath template has outdated jar files and is missing some new ones.

2009-10-22 Thread Jakob Homan (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-726?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=12768858#action_12768858
 ] 

Jakob Homan commented on HDFS-726:
--

In the patch the .classpath file still includes src/test/hdfs-with-mr.  This 
should be removed.  Also, I always have to compile the contrib module as well to 
get a happy Eclipse regarding the classpath, but that should probably be fixed 
in a separate jira.

 Eclipse .classpath template has outdated jar files and is missing some new 
 ones.
 

 Key: HDFS-726
 URL: https://issues.apache.org/jira/browse/HDFS-726
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 0.22.0
Reporter: Konstantin Boudnik
Assignee: Konstantin Boudnik
 Fix For: 0.22.0

 Attachments: HDFS-726.patch


 Eclipse environment is broken in trunk: it still uses *.21*.jar files and 
 includes some libraries which aren't in use any more.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (HDFS-726) Eclipse .classpath template has outdated jar files and is missing some new ones.

2009-10-22 Thread Jakob Homan (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-726?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=12768875#action_12768875
 ] 

Jakob Homan commented on HDFS-726:
--

looks good. +1

 Eclipse .classpath template has outdated jar files and is missing some new 
 ones.
 

 Key: HDFS-726
 URL: https://issues.apache.org/jira/browse/HDFS-726
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 0.22.0
Reporter: Konstantin Boudnik
Assignee: Konstantin Boudnik
 Fix For: 0.22.0

 Attachments: HDFS-726.patch, HDFS-726.patch


 Eclipse environment is broken in trunk: it still uses *.21*.jar files and 
 includes some libraries which aren't in use any more.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (HDFS-726) Eclipse .classpath template has outdated jar files and is missing some new ones.

2009-10-22 Thread Jakob Homan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-726?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jakob Homan updated HDFS-726:
-

Hadoop Flags: [Reviewed]

 Eclipse .classpath template has outdated jar files and is missing some new 
 ones.
 

 Key: HDFS-726
 URL: https://issues.apache.org/jira/browse/HDFS-726
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 0.22.0
Reporter: Konstantin Boudnik
Assignee: Konstantin Boudnik
 Fix For: 0.22.0

 Attachments: HDFS-726.patch, HDFS-726.patch


 Eclipse environment is broken in trunk: it still uses *.21*.jar files and 
 includes some libraries which aren't in use any more.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (HDFS-606) ConcurrentModificationException in invalidateCorruptReplicas()

2009-09-09 Thread Jakob Homan (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-606?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=12753374#action_12753374
 ] 

Jakob Homan commented on HDFS-606:
--

+1. Looks good.

 ConcurrentModificationException in invalidateCorruptReplicas()
 --

 Key: HDFS-606
 URL: https://issues.apache.org/jira/browse/HDFS-606
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: name-node
Affects Versions: 0.21.0
Reporter: Konstantin Shvachko
Assignee: Konstantin Shvachko
 Fix For: 0.21.0

 Attachments: CMEinCorruptReplicas.patch


 {{BlockManager.invalidateCorruptReplicas()}} iterates over 
 DatanodeDescriptor-s while removing corrupt replicas from the descriptors. 
 This causes {{ConcurrentModificationException}} if there is more than one 
 replicas of the block. I ran into this exception debugging different 
 scenarios in append, but it should be fixed in the trunk too.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (HDFS-595) FsPermission tests need to be updated for new octal configuration parameter from HADOOP-6234

2009-09-04 Thread Jakob Homan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-595?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jakob Homan updated HDFS-595:
-

Attachment: HDFS-595.patch

updated patch to remove unneeded import. Otherwise unchanged.

 FsPermission tests need to be updated for new octal configuration parameter 
 from HADOOP-6234
 

 Key: HDFS-595
 URL: https://issues.apache.org/jira/browse/HDFS-595
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: hdfs client
Reporter: Jakob Homan
Assignee: Jakob Homan
 Attachments: HDFS-595.patch, HDFS-595.patch


 HADOOP-6234 changed the format of the configuration umask value.  Tests that 
 use this value need to be updated.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (HDFS-235) Add support for byte-ranges to hftp

2009-09-04 Thread Jakob Homan (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-235?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=12751602#action_12751602
 ] 

Jakob Homan commented on HDFS-235:
--

Review patch:

* Class URLOpener may be better as a nested class within ByteRangeInputStream 
and needs JavaDoc
* ByteRangeInputStream::seekToNewSource still has an unresolved question as to 
return value. I would recommend throwing NotSupportedException since the 
behavior is non-deterministic and unreliable.
* Does HftpFileSystem::getNameNode(File)URL need to be public? It's better to 
make them package private until we have a need to support them as part of the 
API.
* Rather than casting the URISyntaxException in getNameNodeURL, you can wrap it 
in an IOException (see the sketch after this list)
* There is quite a bit of commented out code in open. This needs to be removed.
* TestStreamFile::StrToRanges should start with a lower case s
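
A sketch of the IOException-wrapping point above (generic Java with a hypothetical method name, not the patch itself):
{code}
import java.io.IOException;
import java.net.URI;
import java.net.URISyntaxException;
import java.net.URL;

public class WrapExample {
  // Wrap the checked URISyntaxException in an IOException instead of casting it.
  static URL toUrl(String spec) throws IOException {
    try {
      return new URI(spec).toURL();
    } catch (URISyntaxException e) {
      throw new IOException("Malformed URI: " + spec, e);
    }
  }
}
{code}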


 Add support for byte-ranges to hftp
 ---

 Key: HDFS-235
 URL: https://issues.apache.org/jira/browse/HDFS-235
 Project: Hadoop HDFS
  Issue Type: New Feature
Affects Versions: 0.21.0
Reporter: Venkatesh S
Assignee: Bill Zeller
 Fix For: 0.21.0

 Attachments: hdfs-235-1.patch, hdfs-235-2.patch


 Support should be similar to http byte-serving.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (HDFS-578) Support for using server default values for blockSize and replication when creating a file

2009-09-04 Thread Jakob Homan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-578?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jakob Homan updated HDFS-578:
-

Hadoop Flags: [Incompatible change, Reviewed]

+1 Changed to incompatible change since it increments the client protocol 
version.

 Support for using server default values for blockSize and replication when 
 creating a file
 --

 Key: HDFS-578
 URL: https://issues.apache.org/jira/browse/HDFS-578
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: hdfs client, name-node
Reporter: Kan Zhang
Assignee: Kan Zhang
 Attachments: h578-13.patch, h578-14.patch


 This is a sub-task of HADOOP-4952. This improvement makes it possible for a 
 client to specify that it wants to use the server default values for 
 blockSize and replication params when creating a file.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (HDFS-235) Add support for byte-ranges to hftp

2009-09-04 Thread Jakob Homan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-235?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jakob Homan updated HDFS-235:
-

Hadoop Flags: [Reviewed]

Thanks for the changes. Looks good. +1.

 Add support for byte-ranges to hftp
 ---

 Key: HDFS-235
 URL: https://issues.apache.org/jira/browse/HDFS-235
 Project: Hadoop HDFS
  Issue Type: New Feature
Affects Versions: 0.21.0
Reporter: Venkatesh S
Assignee: Bill Zeller
 Fix For: 0.21.0

 Attachments: hdfs-235-1.patch, hdfs-235-2.patch, hdfs-235-3.patch


 Support should be similar to http byte-serving.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Created: (HDFS-595) FsPermission tests need to be updated for new octal configuration parameter from HADOOP-6234

2009-09-03 Thread Jakob Homan (JIRA)
FsPermission tests need to be updated for new octal configuration parameter 
from HADOOP-6234


 Key: HDFS-595
 URL: https://issues.apache.org/jira/browse/HDFS-595
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: hdfs client
Reporter: Jakob Homan
Assignee: Jakob Homan


HADOOP-6234 changed the format of the configuration umask value.  Tests that 
use this value need to be updated.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (HDFS-595) FsPermission tests need to be updated for new octal configuration parameter from HADOOP-6234

2009-09-03 Thread Jakob Homan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-595?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jakob Homan updated HDFS-595:
-

Attachment: HDFS-595.patch

Attaching patch to update tests that set umask.  Note that updated jars from 
HADOOP-6234 are needed.  As such, can't submit patch to Hudson.  Manually 
tested.

All unit tests pass.  

With updated jars for test-patch:
{noformat}
 [exec] +1 overall.  
 [exec] 
 [exec] +1 @author.  The patch does not contain any @author tags.
 [exec] 
 [exec] +1 tests included.  The patch appears to include 6 new or 
modified tests.
 [exec] 
 [exec] +1 javadoc.  The javadoc tool did not generate any warning 
messages.
 [exec] 
 [exec] +1 javac.  The applied patch does not increase the total number 
of javac compiler warnings.
 [exec] 
 [exec] +1 findbugs.  The patch does not introduce any new Findbugs 
warnings.
 [exec] 
 [exec] +1 release audit.  The applied patch does not increase the 
total number of release audit warnings.
{noformat}

 FsPermission tests need to be updated for new octal configuration parameter 
 from HADOOP-6234
 

 Key: HDFS-595
 URL: https://issues.apache.org/jira/browse/HDFS-595
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: hdfs client
Reporter: Jakob Homan
Assignee: Jakob Homan
 Attachments: HDFS-595.patch


 HADOOP-6234 changed the format of the configuration umask value.  Tests that 
 use this value need to be updated.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (HDFS-568) TestServiceLevelAuthorization fails on latest build in Hudson

2009-08-27 Thread Jakob Homan (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-568?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=12748585#action_12748585
 ] 

Jakob Homan commented on HDFS-568:
--

I looked into this and the problem seems to be emanating from 
CompletedJobStatusStore::readCounters(), which is returning a null value and 
crashing the test.  I don't believe it's related to HDFS-538 and it's definitely 
not related to MAPREDUCE-874.  Since the mapred jars don't get updated with 
every commit back into hdfs, it's not immediately obvious what may have caused 
the regression. I've spoken with Arun about it offline and he's looking into a 
possible cause.

 TestServiceLevelAuthorization fails on latest build in Hudson
 -

 Key: HDFS-568
 URL: https://issues.apache.org/jira/browse/HDFS-568
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: test
Affects Versions: 0.21.0
Reporter: gary murry
Assignee: Tsz Wo (Nicholas), SZE
 Fix For: 0.21.0


 The latest build in Hudson of Hadoop-hdfs fails 
 org.apache.hadoop.security.authorize.TestServiceLevelAuthorization.testServiceLevelAuthorization.
   
 (http://hudson.zones.apache.org/hudson/view/Hadoop/job/Hadoop-Hdfs-trunk/61/testReport/)
  

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (HDFS-568) TestServiceLevelAuthorization fails on latest build in Hudson

2009-08-26 Thread Jakob Homan (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-568?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=12748074#action_12748074
 ] 

Jakob Homan commented on HDFS-568:
--

I was seeing the same behavior.  

 TestServiceLevelAuthorization fails on latest build in Hudson
 -

 Key: HDFS-568
 URL: https://issues.apache.org/jira/browse/HDFS-568
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: test
Affects Versions: 0.21.0
Reporter: gary murry
Assignee: Tsz Wo (Nicholas), SZE
 Fix For: 0.21.0


 The latest build in Hudson of Hadoop-hdfs fails 
 org.apache.hadoop.security.authorize.TestServiceLevelAuthorization.testServiceLevelAuthorization.
   
 (http://hudson.zones.apache.org/hudson/view/Hadoop/job/Hadoop-Hdfs-trunk/61/testReport/)
  

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (HDFS-538) DistributedFileSystem::listStatus incorrectly returns null for empty result sets

2009-08-21 Thread Jakob Homan (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-538?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=12746267#action_12746267
 ] 

Jakob Homan commented on HDFS-538:
--

Also, unit tests were run locally and all pass, once an updated MapReduce jar 
is provided for run-hdfs-tests-with-mapreduce.

 DistributedFileSystem::listStatus incorrectly returns null for empty result 
 sets
 

 Key: HDFS-538
 URL: https://issues.apache.org/jira/browse/HDFS-538
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Jakob Homan
Assignee: Jakob Homan
 Attachments: HDFS-538.patch


 Currently the listStatus method returns null if no files match the request.  
 This differs from the Checksum/LocalFileSystem implementation, which returns 
 an empty array, and the not-very-explicit prescription of the FileSystem 
 interface: {{@return the statuses of the files/directories in the given 
 patch}}  It's better to return an empty collection than have to add extra 
 null checks.  The method should return an empty array.
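
A generic sketch of the convention being argued for (plain Java, not the DistributedFileSystem code): return an empty array so callers can iterate without a null check.
{code}
import java.util.ArrayList;
import java.util.List;

public class EmptyResultExample {
  // Returns an empty array, never null, when nothing matches.
  static String[] listMatching(String[] entries, String prefix) {
    List<String> matches = new ArrayList<String>();
    for (String e : entries) {
      if (e.startsWith(prefix)) {
        matches.add(e);
      }
    }
    return matches.toArray(new String[0]);
  }

  public static void main(String[] args) {
    for (String s : listMatching(new String[] {"alpha", "beta"}, "z")) {
      System.out.println(s); // loop body simply never runs; no null check needed
    }
  }
}
{code}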

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Created: (HDFS-556) Provide info on failed volumes in the web ui

2009-08-20 Thread Jakob Homan (JIRA)
Provide info on failed volumes in the web ui


 Key: HDFS-556
 URL: https://issues.apache.org/jira/browse/HDFS-556
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: name-node
Reporter: Jakob Homan


HDFS-457 provided better handling of failed volumes but did not provide a 
corresponding view of this functionality on the web ui, such as a view of which 
datanodes have failed volumes.  This would be a good feature to have.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Assigned: (HDFS-513) NameNode.getBlockLocations throws NPE when offset > filesize and file is not empty

2009-08-18 Thread Jakob Homan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-513?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jakob Homan reassigned HDFS-513:


Assignee: Todd Lipcon

 NameNode.getBlockLocations throws NPE when offset > filesize and file is not 
 empty
 --

 Key: HDFS-513
 URL: https://issues.apache.org/jira/browse/HDFS-513
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: name-node
Reporter: Todd Lipcon
Assignee: Todd Lipcon
 Attachments: hdfs-513-trunk.txt, hdfs-513.txt


 in BlockManager.getBlockLocations, if the offset is past the end of a 
 non-empty file, it returns null. In FSNamesystem.getBlockLocationsInternal, 
 this null is passed through to inode.createLocatedBlocks, so it ends up with 
 a LocatedBlocks instance whose .blocks is null. This is then iterated over in 
 FSNamesystem.getBlockLocations, and throws an NPE.
 Instead, I think BlockManager.getBlockLocations should return 
 Collections.emptyList in the past-EOF case. This would result in an empty 
 list response from NN.getBlockLocations which matches the behavior of an 
 empty file. If this sounds like the appropriate fix I'll attach the patch.
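
A generic sketch of the past-EOF convention described above (illustrative names only, not the BlockManager code):
{code}
import java.util.Collections;
import java.util.List;

public class PastEofExample {
  // Return an empty list, rather than null, when the requested offset is
  // at or beyond the end of the file.
  static List<String> blocksForRange(long offset, long fileLength, List<String> blocks) {
    if (offset >= fileLength) {
      return Collections.emptyList();
    }
    return blocks;
  }
}
{code}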

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.


