[jira] [Updated] (HDFS-10463) TestRollingFileSystemSinkWithHdfs needs some cleanup

2016-05-26 Thread Karthik Kambatla (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10463?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karthik Kambatla updated HDFS-10463:

   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 2.9.0
   Status: Resolved  (was: Patch Available)

Just committed this to trunk and branch-2. 

Thanks [~templedf] for fixing the test, and [~atm] for the review. 



> TestRollingFileSystemSinkWithHdfs needs some cleanup
> 
>
> Key: HDFS-10463
> URL: https://issues.apache.org/jira/browse/HDFS-10463
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.9.0
>Reporter: Daniel Templeton
>Assignee: Daniel Templeton
>Priority: Critical
> Fix For: 2.9.0
>
> Attachments: HDFS-10463.001.patch, HDFS-10463.branch-2.001.patch
>
>
> There are three primary issues.  The most significant is that the 
> {{testFlushThread()}} method doesn't clean up after itself, which can cause 
> other tests to fail.  The other big issue is that the {{testSilentAppend()}} 
> method is testing the wrong thing.  An additional minor issue is that none of 
> the tests are careful about making sure the metrics system gets shut down in 
> all cases.
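The "shut down in all cases" concern above is about guaranteeing teardown even when a test body throws. A minimal sketch of the pattern, not Hadoop's actual test code (MiniMetrics is a stand-in stub, and all names here are illustrative):

```java
// Sketch of the "shut down in all cases" cleanup pattern the description
// asks for. MiniMetrics is a stand-in stub, not Hadoop's MetricsSystem.
public class ShutdownSketch {

    public static class MiniMetrics {
        public boolean running = true;
        public void shutdown() { running = false; }
    }

    // Simulates one test run: even when the test body throws, the finally
    // block guarantees the metrics system is shut down, so later tests
    // don't inherit a live singleton.
    public static MiniMetrics runOnce(boolean testBodyFails) {
        MiniMetrics metrics = new MiniMetrics();
        try {
            if (testBodyFails) {
                throw new RuntimeException("simulated assertion failure");
            }
        } catch (RuntimeException expected) {
            // A real JUnit test would let this propagate; it is swallowed
            // here so the sketch can show the post-condition.
        } finally {
            metrics.shutdown(); // runs on both the pass and the fail path
        }
        return metrics;
    }

    public static void main(String[] args) {
        System.out.println(runOnce(true).running); // false: shut down despite the failure
    }
}
```

In JUnit the same guarantee is usually expressed with an @After method instead of an explicit finally block.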



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10463) TestRollingFileSystemSinkWithHdfs needs some cleanup

2016-05-26 Thread Karthik Kambatla (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10463?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15302960#comment-15302960
 ] 

Karthik Kambatla commented on HDFS-10463:
-

+1. Checking this in. 

> TestRollingFileSystemSinkWithHdfs needs some cleanup
> 
>
> Key: HDFS-10463
> URL: https://issues.apache.org/jira/browse/HDFS-10463
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.9.0
>Reporter: Daniel Templeton
>Assignee: Daniel Templeton
>Priority: Critical
> Attachments: HDFS-10463.001.patch, HDFS-10463.branch-2.001.patch
>
>
> There are three primary issues.  The most significant is that the 
> {{testFlushThread()}} method doesn't clean up after itself, which can cause 
> other tests to fail.  The other big issue is that the {{testSilentAppend()}} 
> method is testing the wrong thing.  An additional minor issue is that none of 
> the tests are careful about making sure the metrics system gets shut down in 
> all cases.






[jira] [Commented] (HDFS-9782) RollingFileSystemSink should have configurable roll interval

2016-05-24 Thread Karthik Kambatla (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9782?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15299165#comment-15299165
 ] 

Karthik Kambatla commented on HDFS-9782:


Oh, and thanks [~andrew.wang] and [~rkanter] for your reviews. 

> RollingFileSystemSink should have configurable roll interval
> 
>
> Key: HDFS-9782
> URL: https://issues.apache.org/jira/browse/HDFS-9782
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Daniel Templeton
>Assignee: Daniel Templeton
> Fix For: 2.9.0
>
> Attachments: HDFS-9782.001.patch, HDFS-9782.002.patch, 
> HDFS-9782.003.patch, HDFS-9782.004.patch, HDFS-9782.005.patch, 
> HDFS-9782.006.patch, HDFS-9782.007.patch, HDFS-9782.008.patch, 
> HDFS-9782.009.patch
>
>
> Right now it defaults to rolling at the top of every hour.  Instead that 
> interval should be configurable.  The interval should also allow for some 
> play so that all hosts don't try to flush their files simultaneously.
> I'm filing this in HDFS because I suspect it will involve touching the HDFS 
> tests.  If it turns out not to, I'll move it into common instead.
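The configurability requested above can be pictured as a metrics2 sink configuration. This fragment is purely illustrative: the sink prefix and the property names (roll-interval, roll-offset-interval-millis) are assumptions about what the patch might expose, not confirmed settings.

```properties
# Hypothetical hadoop-metrics2.properties fragment (property names assumed).
namenode.sink.roll.class=org.apache.hadoop.metrics2.sink.RollingFileSystemSink
namenode.sink.roll.basepath=hdfs:///tmp/metrics

# Roll every 10 minutes instead of at the top of every hour.
namenode.sink.roll.roll-interval=10m

# Allow up to 30 seconds of random skew ("play") so that all hosts
# don't try to flush their files simultaneously.
namenode.sink.roll.roll-offset-interval-millis=30000
```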






[jira] [Updated] (HDFS-9782) RollingFileSystemSink should have configurable roll interval

2016-05-24 Thread Karthik Kambatla (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9782?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karthik Kambatla updated HDFS-9782:
---
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 2.9.0
   Status: Resolved  (was: Patch Available)

Thanks for the contribution, [~templedf]. Just committed this to trunk and 
branch-2. 

> RollingFileSystemSink should have configurable roll interval
> 
>
> Key: HDFS-9782
> URL: https://issues.apache.org/jira/browse/HDFS-9782
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Daniel Templeton
>Assignee: Daniel Templeton
> Fix For: 2.9.0
>
> Attachments: HDFS-9782.001.patch, HDFS-9782.002.patch, 
> HDFS-9782.003.patch, HDFS-9782.004.patch, HDFS-9782.005.patch, 
> HDFS-9782.006.patch, HDFS-9782.007.patch, HDFS-9782.008.patch, 
> HDFS-9782.009.patch
>
>
> Right now it defaults to rolling at the top of every hour.  Instead that 
> interval should be configurable.  The interval should also allow for some 
> play so that all hosts don't try to flush their files simultaneously.
> I'm filing this in HDFS because I suspect it will involve touching the HDFS 
> tests.  If it turns out not to, I'll move it into common instead.






[jira] [Commented] (HDFS-9782) RollingFileSystemSink should have configurable roll interval

2016-05-24 Thread Karthik Kambatla (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9782?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15299147#comment-15299147
 ] 

Karthik Kambatla commented on HDFS-9782:


Okay, will check this into branch-2 then. 

> RollingFileSystemSink should have configurable roll interval
> 
>
> Key: HDFS-9782
> URL: https://issues.apache.org/jira/browse/HDFS-9782
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Daniel Templeton
>Assignee: Daniel Templeton
> Attachments: HDFS-9782.001.patch, HDFS-9782.002.patch, 
> HDFS-9782.003.patch, HDFS-9782.004.patch, HDFS-9782.005.patch, 
> HDFS-9782.006.patch, HDFS-9782.007.patch, HDFS-9782.008.patch, 
> HDFS-9782.009.patch
>
>
> Right now it defaults to rolling at the top of every hour.  Instead that 
> interval should be configurable.  The interval should also allow for some 
> play so that all hosts don't try to flush their files simultaneously.
> I'm filing this in HDFS because I suspect it will involve touching the HDFS 
> tests.  If it turns out not to, I'll move it into common instead.






[jira] [Commented] (HDFS-9782) RollingFileSystemSink should have configurable roll interval

2016-05-24 Thread Karthik Kambatla (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9782?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15298602#comment-15298602
 ] 

Karthik Kambatla commented on HDFS-9782:


Committed to trunk, but TestRollingFileSystemSinkWithHdfs#testFailedClose fails 
on branch-2. [~templedf] - can you look into the test failure? 

> RollingFileSystemSink should have configurable roll interval
> 
>
> Key: HDFS-9782
> URL: https://issues.apache.org/jira/browse/HDFS-9782
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Daniel Templeton
>Assignee: Daniel Templeton
> Attachments: HDFS-9782.001.patch, HDFS-9782.002.patch, 
> HDFS-9782.003.patch, HDFS-9782.004.patch, HDFS-9782.005.patch, 
> HDFS-9782.006.patch, HDFS-9782.007.patch, HDFS-9782.008.patch, 
> HDFS-9782.009.patch
>
>
> Right now it defaults to rolling at the top of every hour.  Instead that 
> interval should be configurable.  The interval should also allow for some 
> play so that all hosts don't try to flush their files simultaneously.
> I'm filing this in HDFS because I suspect it will involve touching the HDFS 
> tests.  If it turns out not to, I'll move it into common instead.






[jira] [Commented] (HDFS-9782) RollingFileSystemSink should have configurable roll interval

2016-05-24 Thread Karthik Kambatla (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9782?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15298504#comment-15298504
 ] 

Karthik Kambatla commented on HDFS-9782:


Thanks for the updates, [~templedf]. +1, checking this in. 

> RollingFileSystemSink should have configurable roll interval
> 
>
> Key: HDFS-9782
> URL: https://issues.apache.org/jira/browse/HDFS-9782
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Daniel Templeton
>Assignee: Daniel Templeton
> Attachments: HDFS-9782.001.patch, HDFS-9782.002.patch, 
> HDFS-9782.003.patch, HDFS-9782.004.patch, HDFS-9782.005.patch, 
> HDFS-9782.006.patch, HDFS-9782.007.patch, HDFS-9782.008.patch, 
> HDFS-9782.009.patch
>
>
> Right now it defaults to rolling at the top of every hour.  Instead that 
> interval should be configurable.  The interval should also allow for some 
> play so that all hosts don't try to flush their files simultaneously.
> I'm filing this in HDFS because I suspect it will involve touching the HDFS 
> tests.  If it turns out not to, I'll move it into common instead.






[jira] [Updated] (HDFS-9782) RollingFileSystemSink should have configurable roll interval

2016-05-19 Thread Karthik Kambatla (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9782?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karthik Kambatla updated HDFS-9782:
---
Status: Open  (was: Patch Available)

The v8 patch looks good to me, except for a checkstyle nit: a couple of lines 
are longer than 80 characters. 

+1 otherwise. Will commit this soon after that is fixed. 

> RollingFileSystemSink should have configurable roll interval
> 
>
> Key: HDFS-9782
> URL: https://issues.apache.org/jira/browse/HDFS-9782
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Daniel Templeton
>Assignee: Daniel Templeton
> Attachments: HDFS-9782.001.patch, HDFS-9782.002.patch, 
> HDFS-9782.003.patch, HDFS-9782.004.patch, HDFS-9782.005.patch, 
> HDFS-9782.006.patch, HDFS-9782.007.patch, HDFS-9782.008.patch
>
>
> Right now it defaults to rolling at the top of every hour.  Instead that 
> interval should be configurable.  The interval should also allow for some 
> play so that all hosts don't try to flush their files simultaneously.
> I'm filing this in HDFS because I suspect it will involve touching the HDFS 
> tests.  If it turns out not to, I'll move it into common instead.






[jira] [Updated] (HDFS-9782) RollingFileSystemSink should have configurable roll interval

2016-05-17 Thread Karthik Kambatla (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9782?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karthik Kambatla updated HDFS-9782:
---
Status: Open  (was: Patch Available)

Canceling patch to address comments. 

> RollingFileSystemSink should have configurable roll interval
> 
>
> Key: HDFS-9782
> URL: https://issues.apache.org/jira/browse/HDFS-9782
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Daniel Templeton
>Assignee: Daniel Templeton
> Attachments: HDFS-9782.001.patch, HDFS-9782.002.patch, 
> HDFS-9782.003.patch, HDFS-9782.004.patch, HDFS-9782.005.patch, 
> HDFS-9782.006.patch, HDFS-9782.007.patch
>
>
> Right now it defaults to rolling at the top of every hour.  Instead that 
> interval should be configurable.  The interval should also allow for some 
> play so that all hosts don't try to flush their files simultaneously.
> I'm filing this in HDFS because I suspect it will involve touching the HDFS 
> tests.  If it turns out not to, I'll move it into common instead.






[jira] [Commented] (HDFS-9782) RollingFileSystemSink should have configurable roll interval

2016-05-17 Thread Karthik Kambatla (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9782?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15287586#comment-15287586
 ] 

Karthik Kambatla commented on HDFS-9782:


Sorry for the delay in getting to this. Looks mostly good. Some comments:
# Is the empty constructor there so reflection works?
# The Javadoc for stringifySecurityProperty, findCurrentDirectory, 
createOrAppendLogFile, and doTestGetRollInterval is broken. Mind fixing it?
# Nit: should checkForProperty be renamed to checkIfPropertyExists for more 
clarity? 
# RollingFileSystemSink#setInitialFlushTime is quite confusing to me. Can we 
clarify all the funkiness going on there? Maybe add more comments, and use a 
more meaningful variable name than millis? 
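The logic in question amounts to rounding up to the next roll boundary and adding a random offset so hosts don't flush in lockstep. A hedged sketch of that idea, with illustrative method and parameter names (this is not Hadoop's actual setInitialFlushTime code):

```java
import java.util.concurrent.ThreadLocalRandom;

// Sketch only: NOT Hadoop's RollingFileSystemSink implementation. The
// names nextFlushMillis, rollIntervalMillis, and maxOffsetMillis are
// invented for this example.
public class RollScheduleSketch {

    // Returns the first interval boundary strictly after 'now', plus a
    // random offset in [0, maxOffsetMillis) so that a fleet of hosts
    // doesn't hit the shared filesystem at the same instant.
    public static long nextFlushMillis(long nowMillis, long rollIntervalMillis,
                                       long maxOffsetMillis) {
        long nextBoundary = (nowMillis / rollIntervalMillis + 1) * rollIntervalMillis;
        long offset = maxOffsetMillis > 0
                ? ThreadLocalRandom.current().nextLong(maxOffsetMillis)
                : 0L;
        return nextBoundary + offset;
    }

    public static void main(String[] args) {
        // With a 60-second interval and no offset, a clock reading of
        // 00:00:10 schedules the next flush for 00:01:00.
        System.out.println(nextFlushMillis(10_000L, 60_000L, 0L)); // 60000
    }
}
```

Naming the boundary and offset separately, as above, is the kind of clarification the review comment is asking for.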

> RollingFileSystemSink should have configurable roll interval
> 
>
> Key: HDFS-9782
> URL: https://issues.apache.org/jira/browse/HDFS-9782
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Daniel Templeton
>Assignee: Daniel Templeton
> Attachments: HDFS-9782.001.patch, HDFS-9782.002.patch, 
> HDFS-9782.003.patch, HDFS-9782.004.patch, HDFS-9782.005.patch, 
> HDFS-9782.006.patch, HDFS-9782.007.patch
>
>
> Right now it defaults to rolling at the top of every hour.  Instead that 
> interval should be configurable.  The interval should also allow for some 
> play so that all hosts don't try to flush their files simultaneously.
> I'm filing this in HDFS because I suspect it will involve touching the HDFS 
> tests.  If it turns out not to, I'll move it into common instead.






[jira] [Commented] (HDFS-9782) RollingFileSystemSink should have configurable roll interval

2016-04-01 Thread Karthik Kambatla (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9782?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15221989#comment-15221989
 ] 

Karthik Kambatla commented on HDFS-9782:


I see your point, but given how unlikely it is that someone would distinguish 
between 30 seconds and 1 minute, I'll keep it simple and drop seconds 
altogether. 

> RollingFileSystemSink should have configurable roll interval
> 
>
> Key: HDFS-9782
> URL: https://issues.apache.org/jira/browse/HDFS-9782
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Daniel Templeton
>Assignee: Daniel Templeton
> Attachments: HDFS-9782.001.patch, HDFS-9782.002.patch, 
> HDFS-9782.003.patch, HDFS-9782.004.patch, HDFS-9782.005.patch, 
> HDFS-9782.006.patch
>
>
> Right now it defaults to rolling at the top of every hour.  Instead that 
> interval should be configurable.  The interval should also allow for some 
> play so that all hosts don't try to flush their files simultaneously.
> I'm filing this in HDFS because I suspect it will involve touching the HDFS 
> tests.  If it turns out not to, I'll move it into common instead.





[jira] [Updated] (HDFS-9782) RollingFileSystemSink should have configurable roll interval

2016-04-01 Thread Karthik Kambatla (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9782?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karthik Kambatla updated HDFS-9782:
---
Status: Open  (was: Patch Available)

> RollingFileSystemSink should have configurable roll interval
> 
>
> Key: HDFS-9782
> URL: https://issues.apache.org/jira/browse/HDFS-9782
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Daniel Templeton
>Assignee: Daniel Templeton
> Attachments: HDFS-9782.001.patch, HDFS-9782.002.patch, 
> HDFS-9782.003.patch, HDFS-9782.004.patch, HDFS-9782.005.patch, 
> HDFS-9782.006.patch
>
>
> Right now it defaults to rolling at the top of every hour.  Instead that 
> interval should be configurable.  The interval should also allow for some 
> play so that all hosts don't try to flush their files simultaneously.
> I'm filing this in HDFS because I suspect it will involve touching the HDFS 
> tests.  If it turns out not to, I'll move it into common instead.





[jira] [Commented] (HDFS-9782) RollingFileSystemSink should have configurable roll interval

2016-04-01 Thread Karthik Kambatla (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9782?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15221914#comment-15221914
 ] 

Karthik Kambatla commented on HDFS-9782:


Quickly skimmed through the patch. One major comment: if we want to support 
only minute granularity for this, why allow users to specify seconds? The 
millisecond offset sounds okay, because its purpose is different. 

> RollingFileSystemSink should have configurable roll interval
> 
>
> Key: HDFS-9782
> URL: https://issues.apache.org/jira/browse/HDFS-9782
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Daniel Templeton
>Assignee: Daniel Templeton
> Attachments: HDFS-9782.001.patch, HDFS-9782.002.patch, 
> HDFS-9782.003.patch, HDFS-9782.004.patch, HDFS-9782.005.patch, 
> HDFS-9782.006.patch
>
>
> Right now it defaults to rolling at the top of every hour.  Instead that 
> interval should be configurable.  The interval should also allow for some 
> play so that all hosts don't try to flush their files simultaneously.
> I'm filing this in HDFS because I suspect it will involve touching the HDFS 
> tests.  If it turns out not to, I'll move it into common instead.





[jira] [Updated] (HDFS-9780) RollingFileSystemSink doesn't work on secure clusters

2016-03-16 Thread Karthik Kambatla (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9780?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karthik Kambatla updated HDFS-9780:
---
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 2.9.0
   Status: Resolved  (was: Patch Available)

> RollingFileSystemSink doesn't work on secure clusters
> -
>
> Key: HDFS-9780
> URL: https://issues.apache.org/jira/browse/HDFS-9780
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.9.0
>Reporter: Daniel Templeton
>Assignee: Daniel Templeton
>Priority: Critical
> Fix For: 2.9.0
>
> Attachments: HADOOP-12775.001.patch, HADOOP-12775.002.patch, 
> HADOOP-12775.003.patch, HDFS-9780.004.patch, HDFS-9780.005.patch, 
> HDFS-9780.006.patch, HDFS-9780.006.patch, HDFS-9780.007.patch, 
> HDFS-9780.008.patch
>
>
> If HDFS has kerberos enabled, the sink cannot write its logs.





[jira] [Updated] (HDFS-9858) RollingFileSystemSink can throw an NPE on non-secure clusters

2016-02-25 Thread Karthik Kambatla (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9858?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karthik Kambatla updated HDFS-9858:
---
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 2.9.0
   Status: Resolved  (was: Patch Available)

> RollingFileSystemSink can throw an NPE on non-secure clusters
> -
>
> Key: HDFS-9858
> URL: https://issues.apache.org/jira/browse/HDFS-9858
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.9.0
>Reporter: Daniel Templeton
>Assignee: Daniel Templeton
> Fix For: 2.9.0
>
> Attachments: HADOOP-12835.001.patch, HDFS-9858.002.patch
>
>
> If the sink init fails (such as because the HDFS cluster isn't running) on a 
> non-secure cluster, the init will throw an NPE because of missing properties.





[jira] [Commented] (HDFS-9858) RollingFileSystemSink can throw an NPE on non-secure clusters

2016-02-25 Thread Karthik Kambatla (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9858?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15168187#comment-15168187
 ] 

Karthik Kambatla commented on HDFS-9858:


Thanks for the fix, Daniel. 

Committed to trunk and branch-2. Had to resolve conflicts on branch-2 manually. 

> RollingFileSystemSink can throw an NPE on non-secure clusters
> -
>
> Key: HDFS-9858
> URL: https://issues.apache.org/jira/browse/HDFS-9858
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.9.0
>Reporter: Daniel Templeton
>Assignee: Daniel Templeton
> Attachments: HADOOP-12835.001.patch, HDFS-9858.002.patch
>
>
> If the sink init fails (such as because the HDFS cluster isn't running) on a 
> non-secure cluster, the init will throw an NPE because of missing properties.





[jira] [Commented] (HDFS-9858) RollingFileSystemSink can throw an NPE on non-secure clusters

2016-02-25 Thread Karthik Kambatla (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9858?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15168169#comment-15168169
 ] 

Karthik Kambatla commented on HDFS-9858:


+1. Checking this in. 

> RollingFileSystemSink can throw an NPE on non-secure clusters
> -
>
> Key: HDFS-9858
> URL: https://issues.apache.org/jira/browse/HDFS-9858
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.9.0
>Reporter: Daniel Templeton
>Assignee: Daniel Templeton
> Attachments: HADOOP-12835.001.patch, HDFS-9858.002.patch
>
>
> If the sink init fails (such as because the HDFS cluster isn't running) on a 
> non-secure cluster, the init will throw an NPE because of missing properties.





[jira] [Updated] (HDFS-9637) Tests for RollingFileSystemSink

2016-02-11 Thread Karthik Kambatla (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9637?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karthik Kambatla updated HDFS-9637:
---
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 2.9.0
   Status: Resolved  (was: Patch Available)

+1 on the branch-2 patch, just committed. 

Thanks Daniel for the contribution. 

> Tests for RollingFileSystemSink
> ---
>
> Key: HDFS-9637
> URL: https://issues.apache.org/jira/browse/HDFS-9637
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: test
>Affects Versions: 2.7.1
>Reporter: Daniel Templeton
>Assignee: Daniel Templeton
> Fix For: 2.9.0
>
> Attachments: HDFS-9637.001.patch, HDFS-9637.002.patch, 
> HDFS-9637.003.patch, HDFS-9637.004.patch, HDFS-9637.005.patch, 
> HDFS-9637.006.patch, HDFS-9637.branch2.006.patch
>
>
> Per discussion on the dev list, the tests for the new FileSystemSink class 
> should be added to the HDFS project to avoid creating a dependency for the 
> common project on the HDFS project.





[jira] [Commented] (HDFS-9780) RollingFileSystemSink doesn't work on secure clusters

2016-02-11 Thread Karthik Kambatla (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9780?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15143678#comment-15143678
 ] 

Karthik Kambatla commented on HDFS-9780:


+1 pending Jenkins. 

> RollingFileSystemSink doesn't work on secure clusters
> -
>
> Key: HDFS-9780
> URL: https://issues.apache.org/jira/browse/HDFS-9780
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.9.0
>Reporter: Daniel Templeton
>Assignee: Daniel Templeton
>Priority: Critical
> Attachments: HADOOP-12775.001.patch, HADOOP-12775.002.patch, 
> HADOOP-12775.003.patch, HDFS-9780.004.patch, HDFS-9780.005.patch, 
> HDFS-9780.006.patch, HDFS-9780.006.patch, HDFS-9780.007.patch, 
> HDFS-9780.008.patch
>
>
> If HDFS has kerberos enabled, the sink cannot write its logs.





[jira] [Commented] (HDFS-9780) RollingFileSystemSink doesn't work on secure clusters

2016-02-11 Thread Karthik Kambatla (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9780?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15144019#comment-15144019
 ] 

Karthik Kambatla commented on HDFS-9780:


Checking this in. 

> RollingFileSystemSink doesn't work on secure clusters
> -
>
> Key: HDFS-9780
> URL: https://issues.apache.org/jira/browse/HDFS-9780
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.9.0
>Reporter: Daniel Templeton
>Assignee: Daniel Templeton
>Priority: Critical
> Attachments: HADOOP-12775.001.patch, HADOOP-12775.002.patch, 
> HADOOP-12775.003.patch, HDFS-9780.004.patch, HDFS-9780.005.patch, 
> HDFS-9780.006.patch, HDFS-9780.006.patch, HDFS-9780.007.patch, 
> HDFS-9780.008.patch
>
>
> If HDFS has kerberos enabled, the sink cannot write its logs.





[jira] [Commented] (HDFS-9637) Tests for RollingFileSystemSink

2016-02-10 Thread Karthik Kambatla (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9637?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15141384#comment-15141384
 ] 

Karthik Kambatla commented on HDFS-9637:


Committed this to trunk. 

On branch-2, one of the newly added tests fails:
{noformat}
Running org.apache.hadoop.metrics2.sink.TestRollingFileSystemSinkWithHdfs
Tests run: 10, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 39.088 sec <<< 
FAILURE! - in org.apache.hadoop.metrics2.sink.TestRollingFileSystemSinkWithHdfs
testFailedClose(org.apache.hadoop.metrics2.sink.TestRollingFileSystemSinkWithHdfs)
  Time elapsed: 1.385 sec  <<< ERROR!
org.apache.hadoop.metrics2.MetricsException: 
org.apache.hadoop.metrics2.MetricsException: Unable to close log file: 
hdfs://localhost:51275/tmp/2016021018/testsrc-saya.local.log
at 
org.apache.hadoop.metrics2.sink.RollingFileSystemSink.checkForErrors(RollingFileSystemSink.java:512)
at 
org.apache.hadoop.metrics2.sink.RollingFileSystemSink.close(RollingFileSystemSink.java:488)
at 
org.apache.hadoop.metrics2.sink.RollingFileSystemSinkTestBase$ErrorSink.close(RollingFileSystemSinkTestBase.java:498)
at org.apache.hadoop.io.IOUtils.cleanup(IOUtils.java:245)
at 
org.apache.hadoop.metrics2.impl.MetricsSinkAdapter.stop(MetricsSinkAdapter.java:213)
at 
org.apache.hadoop.metrics2.impl.MetricsSystemImpl.stopSinks(MetricsSystemImpl.java:471)
at 
org.apache.hadoop.metrics2.impl.MetricsSystemImpl.stop(MetricsSystemImpl.java:214)
at 
org.apache.hadoop.metrics2.sink.TestRollingFileSystemSinkWithHdfs.testFailedClose(TestRollingFileSystemSinkWithHdfs.java:183)
{noformat}

> Tests for RollingFileSystemSink
> ---
>
> Key: HDFS-9637
> URL: https://issues.apache.org/jira/browse/HDFS-9637
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: test
>Affects Versions: 2.7.1
>Reporter: Daniel Templeton
>Assignee: Daniel Templeton
> Attachments: HDFS-9637.001.patch, HDFS-9637.002.patch, 
> HDFS-9637.003.patch, HDFS-9637.004.patch, HDFS-9637.005.patch, 
> HDFS-9637.006.patch
>
>
> Per discussion on the dev list, the tests for the new FileSystemSink class 
> should be added to the HDFS project to avoid creating a dependency for the 
> common project on the HDFS project.





[jira] [Updated] (HDFS-9637) Tests for RollingFileSystemSink

2016-02-10 Thread Karthik Kambatla (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9637?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karthik Kambatla updated HDFS-9637:
---
Summary: Tests for RollingFileSystemSink  (was: Add test for HADOOP-12702 
and HADOOP-12759)

> Tests for RollingFileSystemSink
> ---
>
> Key: HDFS-9637
> URL: https://issues.apache.org/jira/browse/HDFS-9637
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: test
>Affects Versions: 2.7.1
>Reporter: Daniel Templeton
>Assignee: Daniel Templeton
> Attachments: HDFS-9637.001.patch, HDFS-9637.002.patch, 
> HDFS-9637.003.patch, HDFS-9637.004.patch, HDFS-9637.005.patch, 
> HDFS-9637.006.patch
>
>
> Per discussion on the dev list, the tests for the new FileSystemSink class 
> should be added to the HDFS project to avoid creating a dependency for the 
> common project on the HDFS project.





[jira] [Commented] (HDFS-9637) Tests for RollingFileSystemSink

2016-02-10 Thread Karthik Kambatla (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9637?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15141364#comment-15141364
 ] 

Karthik Kambatla commented on HDFS-9637:


+1, checking the latest patch in. 

> Tests for RollingFileSystemSink
> ---
>
> Key: HDFS-9637
> URL: https://issues.apache.org/jira/browse/HDFS-9637
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: test
>Affects Versions: 2.7.1
>Reporter: Daniel Templeton
>Assignee: Daniel Templeton
> Attachments: HDFS-9637.001.patch, HDFS-9637.002.patch, 
> HDFS-9637.003.patch, HDFS-9637.004.patch, HDFS-9637.005.patch, 
> HDFS-9637.006.patch
>
>
> Per discussion on the dev list, the tests for the new FileSystemSink class 
> should be added to the HDFS project to avoid creating a dependency for the 
> common project on the HDFS project.





[jira] [Commented] (HDFS-9780) RollingFileSystemSink doesn't work on secure clusters

2016-02-10 Thread Karthik Kambatla (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9780?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15142053#comment-15142053
 ] 

Karthik Kambatla commented on HDFS-9780:


Patch is looking good, and thanks for updating the tests to not use the secure 
config by default. 

That said, the check in the setup method that runs it only for tests without 
"Secure" in their name seems a little brittle: it is easy to miss and add a 
secure test without "Secure" in its name. Maybe we could move the secure tests 
to another class that inherits from the same base? 
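The restructuring suggested above, inheritance instead of name-sniffing in setup, can be sketched like this (all class and method names here are hypothetical, not the actual test classes):

```java
// Sketch of the suggested test layout: a shared base holds the common
// fixture, and only the secure subclass overrides it with the secure
// configuration. Class and method names are invented for illustration.
public class TestLayoutSketch {

    public static class RollingSinkTestBase {
        // Shared fixture: the insecure default every test gets.
        public String setupConfig() { return "simple-auth"; }
    }

    public static class SecureRollingSinkTest extends RollingSinkTestBase {
        // Only the secure subclass pays the cost of secure setup, so no
        // test-name check is needed in the base class.
        @Override
        public String setupConfig() { return "kerberos"; }
    }

    public static void main(String[] args) {
        System.out.println(new SecureRollingSinkTest().setupConfig()); // kerberos
    }
}
```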

> RollingFileSystemSink doesn't work on secure clusters
> -
>
> Key: HDFS-9780
> URL: https://issues.apache.org/jira/browse/HDFS-9780
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.9.0
>Reporter: Daniel Templeton
>Assignee: Daniel Templeton
>Priority: Critical
> Attachments: HADOOP-12775.001.patch, HADOOP-12775.002.patch, 
> HADOOP-12775.003.patch, HDFS-9780.004.patch, HDFS-9780.005.patch, 
> HDFS-9780.006.patch, HDFS-9780.006.patch
>
>
> If HDFS has kerberos enabled, the sink cannot write its logs.





[jira] [Commented] (HDFS-9637) Add test for HADOOP-12702 and HADOOP-12759

2016-02-09 Thread Karthik Kambatla (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9637?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15140132#comment-15140132
 ] 

Karthik Kambatla commented on HDFS-9637:


Reviewing the tests here, following up from the feature added in Common. 

The patch looks pretty good. Nice tests, and very happy to see the detailed 
documentation. My only comment would be: use @Before and @After methods for 
the setup and cleanup, so that we don't need try-finally blocks in each test. 
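As a rough illustration of the suggestion (with a hand-rolled stand-in for the JUnit runner, so this sketch carries no test-framework dependency): the lifecycle methods guarantee cleanup around every test, so individual test bodies stay free of try-finally.

```java
import java.util.ArrayList;
import java.util.List;

// Stand-in for the JUnit lifecycle: with real JUnit, before() and after()
// would be annotated @Before and @After, and the runner itself would
// supply the try/finally shown in run().
class LifecycleDemo {
    final List<String> log = new ArrayList<>();

    void before() { log.add("start-cluster"); }  // e.g. start a mini cluster
    void after()  { log.add("stop-cluster"); }   // cleanup runs even on failure

    void testWrite() { log.add("run-test"); }    // test body is try/finally-free

    void run() {
        before();
        try {
            testWrite();
        } finally {
            after();
        }
    }
}
```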

> Add test for HADOOP-12702 and HADOOP-12759
> --
>
> Key: HDFS-9637
> URL: https://issues.apache.org/jira/browse/HDFS-9637
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: test
>Affects Versions: 2.7.1
>Reporter: Daniel Templeton
>Assignee: Daniel Templeton
> Attachments: HDFS-9637.001.patch, HDFS-9637.002.patch, 
> HDFS-9637.003.patch, HDFS-9637.004.patch
>
>
> Per discussion on the dev list, the tests for the new FileSystemSink class 
> should be added to the HDFS project to avoid creating a dependency for the 
> common project on the HDFS project.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9780) RollingFileSystemSink doesn't work on secure clusters

2016-02-09 Thread Karthik Kambatla (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9780?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karthik Kambatla updated HDFS-9780:
---
Status: Open  (was: Patch Available)

Canceling patch until HDFS-9637 gets committed. 

> RollingFileSystemSink doesn't work on secure clusters
> -
>
> Key: HDFS-9780
> URL: https://issues.apache.org/jira/browse/HDFS-9780
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.9.0
>Reporter: Daniel Templeton
>Assignee: Daniel Templeton
>Priority: Critical
> Attachments: HADOOP-12775.001.patch, HADOOP-12775.002.patch, 
> HADOOP-12775.003.patch, HDFS-9780.004.patch
>
>
> If HDFS has kerberos enabled, the sink cannot write its logs.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9780) RollingFileSystemSink doesn't work on secure clusters

2016-02-09 Thread Karthik Kambatla (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9780?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15140277#comment-15140277
 ] 

Karthik Kambatla commented on HDFS-9780:


Quickly skimmed through the patch. 

Do we want to checkForProperty even if security is not enabled? 
{code}
// Validate config so that we don't get an NPE
checkForProperty(conf, KEYTAB_PROPERTY_KEY);
checkForProperty(conf, USERNAME_PROPERTY_KEY);
{code}
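For context, the alternative being asked about might look roughly like this (the property keys, the validate method, and the securityEnabled flag are hypothetical, not the actual RollingFileSystemSink code): validate the kerberos-related properties only when security is enabled, so insecure setups need not define a keytab or principal at all.

```java
import java.util.Map;

// Illustrative sketch only: skip the kerberos-specific config validation
// entirely when security is disabled, instead of always requiring the keys.
class SinkConfigCheck {
    static final String KEYTAB_PROPERTY_KEY = "sink.keytab";
    static final String USERNAME_PROPERTY_KEY = "sink.principal";

    static void checkForProperty(Map<String, String> conf, String key) {
        if (conf.get(key) == null) {
            throw new IllegalStateException("Missing required property: " + key);
        }
    }

    static void validate(Map<String, String> conf, boolean securityEnabled) {
        if (!securityEnabled) {
            return; // nothing kerberos-specific to validate
        }
        checkForProperty(conf, KEYTAB_PROPERTY_KEY);
        checkForProperty(conf, USERNAME_PROPERTY_KEY);
    }
}
```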

Once the patch is updated based on HDFS-9637, will take a closer look. 

> RollingFileSystemSink doesn't work on secure clusters
> -
>
> Key: HDFS-9780
> URL: https://issues.apache.org/jira/browse/HDFS-9780
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.9.0
>Reporter: Daniel Templeton
>Assignee: Daniel Templeton
>Priority: Critical
> Attachments: HADOOP-12775.001.patch, HADOOP-12775.002.patch, 
> HADOOP-12775.003.patch, HDFS-9780.004.patch
>
>
> If HDFS has kerberos enabled, the sink cannot write its logs.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9637) Add test for HADOOP-12702 and HADOOP-12759

2016-02-09 Thread Karthik Kambatla (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9637?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15140252#comment-15140252
 ] 

Karthik Kambatla commented on HDFS-9637:


Looks good. +1. Will check this in tomorrow morning if I don't hear any 
objections. 

> Add test for HADOOP-12702 and HADOOP-12759
> --
>
> Key: HDFS-9637
> URL: https://issues.apache.org/jira/browse/HDFS-9637
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: test
>Affects Versions: 2.7.1
>Reporter: Daniel Templeton
>Assignee: Daniel Templeton
> Attachments: HDFS-9637.001.patch, HDFS-9637.002.patch, 
> HDFS-9637.003.patch, HDFS-9637.004.patch, HDFS-9637.005.patch
>
>
> Per discussion on the dev list, the tests for the new FileSystemSink class 
> should be added to the HDFS project to avoid creating a dependency for the 
> common project on the HDFS project.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-2261) AOP unit tests are not getting compiled or run

2015-11-06 Thread Karthik Kambatla (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-2261?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14994720#comment-14994720
 ] 

Karthik Kambatla commented on HDFS-2261:


+1, pending Jenkins. Thanks for taking this up, [~wheat9]. 

> AOP unit tests are not getting compiled or run 
> ---
>
> Key: HDFS-2261
> URL: https://issues.apache.org/jira/browse/HDFS-2261
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Affects Versions: 2.0.0-alpha, 2.0.4-alpha
> Environment: 
> https://builds.apache.org/job/Hadoop-Hdfs-trunk-Commit/834/console
> -compile-fault-inject ant target 
>Reporter: Giridharan Kesavan
>Priority: Minor
> Attachments: HDFS-2261.000.patch, hdfs-2261.patch
>
>
> The tests in src/test/aop are not getting compiled or run.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-2261) AOP unit tests are not getting compiled or run

2015-11-02 Thread Karthik Kambatla (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-2261?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14986391#comment-14986391
 ] 

Karthik Kambatla commented on HDFS-2261:


Just circling back; it has been two years since the last discussion.

Do we still think we should preserve these tests in trunk and branch-2? 

> AOP unit tests are not getting compiled or run 
> ---
>
> Key: HDFS-2261
> URL: https://issues.apache.org/jira/browse/HDFS-2261
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Affects Versions: 2.0.0-alpha, 2.0.4-alpha
> Environment: 
> https://builds.apache.org/job/Hadoop-Hdfs-trunk-Commit/834/console
> -compile-fault-inject ant target 
>Reporter: Giridharan Kesavan
>Priority: Minor
> Attachments: hdfs-2261.patch
>
>
> The tests in src/test/aop are not getting compiled or run.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7005) DFS input streams do not timeout

2015-04-21 Thread Karthik Kambatla (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7005?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14505739#comment-14505739
 ] 

Karthik Kambatla commented on HDFS-7005:


Thanks for the ping, [~cnauroth]. 

[~ndimiduk] - there are no active plans for 2.5.3. If HDFS committers think 
this issue is serious enough to warrant a point release, I don't mind creating 
the RC and putting it through a vote. 

 DFS input streams do not timeout
 

 Key: HDFS-7005
 URL: https://issues.apache.org/jira/browse/HDFS-7005
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: hdfs-client
Affects Versions: 3.0.0, 2.5.0
Reporter: Daryn Sharp
Assignee: Daryn Sharp
Priority: Critical
 Fix For: 2.6.0

 Attachments: HDFS-7005.patch


 Input streams lost their timeout.  The problem appears to be that 
 {{DFSClient#newConnectedPeer}} does not set the read timeout.  During a 
 temporary network interruption the server will close the socket, unbeknownst 
 to the client host, which then blocks on a read forever.
 The results are dire.  Services such as the RM, JHS, NMs, oozie servers, etc. 
 all need to be restarted to recover - unless you want to wait many hours for 
 the tcp stack keepalive to detect the broken socket.
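The failure mode described above can be reproduced with plain java.net sockets (this is generic JDK behavior, not the actual DFSClient code): without SO_TIMEOUT, a read on a half-dead connection blocks indefinitely, while an explicit read timeout makes it fail fast.

```java
import java.io.IOException;
import java.net.ServerSocket;
import java.net.Socket;
import java.net.SocketTimeoutException;

// Generic JDK illustration of the bug's mechanics (not DFSClient code):
// a read with no SO_TIMEOUT blocks until TCP keepalive notices the dead
// peer, whereas setSoTimeout() makes the read fail fast.
class ReadTimeoutDemo {
    static String readWithTimeout(int timeoutMillis) {
        try (ServerSocket server = new ServerSocket(0);
             Socket client = new Socket("127.0.0.1", server.getLocalPort())) {
            server.accept();                    // peer connects but never writes
            client.setSoTimeout(timeoutMillis); // the step HDFS-7005 adds
            client.getInputStream().read();     // blocks forever without SO_TIMEOUT
            return "read-returned";
        } catch (SocketTimeoutException e) {
            return "timed-out";                 // fails fast instead of hanging
        } catch (IOException e) {
            return "io-error";
        }
    }
}
```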



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7858) Improve HA Namenode Failover detection on the client

2015-03-13 Thread Karthik Kambatla (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7858?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14361536#comment-14361536
 ] 

Karthik Kambatla commented on HDFS-7858:


If possible, it would be nice to make the solution here accessible to YARN as 
well. 

Simultaneously connecting to all the masters (NNs in HDFS and RMs in YARN) 
might work most of the time. How do we plan to handle a split-brain? In YARN, 
we don't use an explicit fencing mechanism. IIRC, one is not required to 
configure a fencing mechanism when using QJM? 


 Improve HA Namenode Failover detection on the client
 

 Key: HDFS-7858
 URL: https://issues.apache.org/jira/browse/HDFS-7858
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Arun Suresh
Assignee: Arun Suresh
 Attachments: HDFS-7858.1.patch, HDFS-7858.2.patch, HDFS-7858.2.patch, 
 HDFS-7858.3.patch


 In an HA deployment, clients are configured with the hostnames of both the 
 Active and Standby Namenodes. Clients will first try one of the NNs 
 (non-deterministically); if it is a standby NN, it will respond to the 
 client to retry the request on the other Namenode.
 If the client happens to talk to the Standby first, and the standby is 
 undergoing some GC / is busy, then those clients might not get a response 
 soon enough to try the other NN.
 Proposed approach to solve this:
 1) Since Zookeeper is already used as the failover controller, the clients 
 could talk to ZK and find out which is the active namenode before contacting 
 it.
 2) Long-lived DFSClients would have a ZK watch configured which fires when 
 there is a failover, so they do not have to query ZK every time to find out 
 the active NN.
 3) Clients can also cache the last active NN in the user's home directory 
 (~/.lastNN) so that short-lived clients can try that Namenode first before 
 querying ZK.
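The caching idea for short-lived clients can be sketched as follows (the file location, format, and class name are illustrative; the real proposal also involves the ZK lookup described in the other points):

```java
import java.io.IOException;
import java.io.UncheckedIOException;
import java.nio.file.Files;
import java.nio.file.Path;

// Sketch of the last-active-NN cache only: a short-lived client persists
// the last known active NN and tries it first on the next run, falling
// back to the configured default when no cache exists.
class LastActiveNameNodeCache {
    private final Path cacheFile; // e.g. ~/.lastNN in the proposal

    LastActiveNameNodeCache(Path cacheFile) {
        this.cacheFile = cacheFile;
    }

    void record(String activeNn) {
        try {
            Files.writeString(cacheFile, activeNn);
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    // returns the cached NN address, or the fallback when there is no cache
    String firstCandidate(String fallback) {
        try {
            return Files.exists(cacheFile) ? Files.readString(cacheFile).trim() : fallback;
        } catch (IOException e) {
            return fallback; // treat an unreadable cache file as a miss
        }
    }
}
```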



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7391) Renable SSLv2Hello in HttpFS

2014-11-13 Thread Karthik Kambatla (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7391?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karthik Kambatla updated HDFS-7391:
---
Affects Version/s: (was: 2.5.2)
   (was: 2.6.0)
   2.5.1
Fix Version/s: 2.5.2

Thanks Robert, Wei and Arun for your contributions, review and commit.

Just pulled this into branch-2.5 and branch-2.5.2 as well.

 Renable SSLv2Hello in HttpFS
 

 Key: HDFS-7391
 URL: https://issues.apache.org/jira/browse/HDFS-7391
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: webhdfs
Affects Versions: 2.5.1
Reporter: Robert Kanter
Assignee: Robert Kanter
Priority: Blocker
 Fix For: 2.6.0, 2.5.2

 Attachments: HDFS-7391-branch-2.5.patch, HDFS-7391.patch


 We should re-enable SSLv2Hello, which is required by older clients (e.g. 
 Java 6 with openssl 0.9.8x); they cannot connect without it. Just to be 
 clear, this does not mean SSLv2, which is insecure.
 I couldn't simply do an addendum patch on HDFS-7274 because it's already been 
 closed.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7391) Renable SSLv2Hello in HttpFS

2014-11-12 Thread Karthik Kambatla (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7391?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14209143#comment-14209143
 ] 

Karthik Kambatla commented on HDFS-7391:


[~acmurthy] - Missed your comment here, and committed the addendum for 
HADOOP-11217. Will let you commit this, thanks. 

 Renable SSLv2Hello in HttpFS
 

 Key: HDFS-7391
 URL: https://issues.apache.org/jira/browse/HDFS-7391
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: webhdfs
Affects Versions: 2.6.0, 2.5.2
Reporter: Robert Kanter
Assignee: Robert Kanter
Priority: Blocker
 Attachments: HDFS-7391-branch-2.5.patch, HDFS-7391.patch


 We should re-enable SSLv2Hello, which is required by older clients (e.g. 
 Java 6 with openssl 0.9.8x); they cannot connect without it. Just to be 
 clear, this does not mean SSLv2, which is insecure.
 I couldn't simply do an addendum patch on HDFS-7274 because it's already been 
 closed.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7274) Disable SSLv3 in HttpFS

2014-11-10 Thread Karthik Kambatla (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7274?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karthik Kambatla updated HDFS-7274:
---
Target Version/s:   (was: 2.6.0)
   Fix Version/s: (was: 2.6.0)
  2.5.2

Included this in 2.5.2 as well. 

 Disable SSLv3 in HttpFS
 ---

 Key: HDFS-7274
 URL: https://issues.apache.org/jira/browse/HDFS-7274
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: webhdfs
Affects Versions: 2.6.0
Reporter: Robert Kanter
Assignee: Robert Kanter
Priority: Blocker
 Fix For: 2.5.2

 Attachments: HDFS-7274.patch, HDFS-7274.patch


 We should disable SSLv3 in HttpFS to protect against the POODLEbleed 
 vulnerability.
 See 
 [CVE-2014-3566|http://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2014-3566]
 We have {{sslProtocol=TLS}} set to only allow TLS in ssl-server.xml, but 
 when I checked, I could still connect with SSLv3.  The documentation is 
 somewhat unclear in the tomcat configs about the difference between 
 {{sslProtocol}}, {{sslProtocols}}, and {{sslEnabledProtocols}} and what each 
 value does exactly.  From what I can gather, {{sslProtocol=TLS}} actually 
 includes SSLv3, and the only way to fix this is to explicitly list which TLS 
 versions we support.
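The "explicitly list the versions" fix might take roughly this shape in a Tomcat JSSE connector (the attribute name is from the Tomcat connector documentation; the actual HttpFS server.xml, port, and keystore settings are assumptions and may differ):

```xml
<!-- Hedged sketch only: list the allowed protocol versions explicitly
     instead of relying on sslProtocol="TLS", which still admits SSLv3. -->
<Connector port="14000" scheme="https" secure="true" SSLEnabled="true"
           sslEnabledProtocols="TLSv1,TLSv1.1,TLSv1.2"
           keystoreFile="${httpfs.keystore.file}"
           keystorePass="${httpfs.keystore.password}"/>
```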



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7274) Disable SSLv3 (POODLEbleed vulnerability) in HttpFS

2014-10-28 Thread Karthik Kambatla (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7274?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14187650#comment-14187650
 ] 

Karthik Kambatla commented on HDFS-7274:


This too looks good to me. 

 Disable SSLv3 (POODLEbleed vulnerability) in HttpFS
 ---

 Key: HDFS-7274
 URL: https://issues.apache.org/jira/browse/HDFS-7274
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: webhdfs
Affects Versions: 2.6.0
Reporter: Robert Kanter
Assignee: Robert Kanter
Priority: Blocker
 Attachments: HDFS-7274.patch, HDFS-7274.patch


 We should disable SSLv3 in HttpFS to protect against the POODLEbleed 
 vulnerability.
 See 
 [CVE-2014-3566|http://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2014-3566]
 We have {{sslProtocol=TLS}} set to only allow TLS in ssl-server.xml, but 
 when I checked, I could still connect with SSLv3.  The documentation is 
 somewhat unclear in the tomcat configs about the difference between 
 {{sslProtocol}}, {{sslProtocols}}, and {{sslEnabledProtocols}} and what each 
 value does exactly.  From what I can gather, {{sslProtocol=TLS}} actually 
 includes SSLv3, and the only way to fix this is to explicitly list which TLS 
 versions we support.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7274) Disable SSLv3 in HttpFS

2014-10-28 Thread Karthik Kambatla (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7274?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karthik Kambatla updated HDFS-7274:
---
Summary: Disable SSLv3 in HttpFS  (was: Disable SSLv3 (POODLEbleed 
vulnerability) in HttpFS)

 Disable SSLv3 in HttpFS
 ---

 Key: HDFS-7274
 URL: https://issues.apache.org/jira/browse/HDFS-7274
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: webhdfs
Affects Versions: 2.6.0
Reporter: Robert Kanter
Assignee: Robert Kanter
Priority: Blocker
 Attachments: HDFS-7274.patch, HDFS-7274.patch


 We should disable SSLv3 in HttpFS to protect against the POODLEbleed 
 vulnerability.
 See 
 [CVE-2014-3566|http://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2014-3566]
 We have {{sslProtocol=TLS}} set to only allow TLS in ssl-server.xml, but 
 when I checked, I could still connect with SSLv3.  The documentation is 
 somewhat unclear in the tomcat configs about the difference between 
 {{sslProtocol}}, {{sslProtocols}}, and {{sslEnabledProtocols}} and what each 
 value does exactly.  From what I can gather, {{sslProtocol=TLS}} actually 
 includes SSLv3, and the only way to fix this is to explicitly list which TLS 
 versions we support.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7274) Disable SSLv3 in HttpFS

2014-10-28 Thread Karthik Kambatla (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7274?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14187773#comment-14187773
 ] 

Karthik Kambatla commented on HDFS-7274:


Thanks ATM, committing this as well. 

 Disable SSLv3 in HttpFS
 ---

 Key: HDFS-7274
 URL: https://issues.apache.org/jira/browse/HDFS-7274
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: webhdfs
Affects Versions: 2.6.0
Reporter: Robert Kanter
Assignee: Robert Kanter
Priority: Blocker
 Attachments: HDFS-7274.patch, HDFS-7274.patch


 We should disable SSLv3 in HttpFS to protect against the POODLEbleed 
 vulnerability.
 See 
 [CVE-2014-3566|http://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2014-3566]
 We have {{sslProtocol=TLS}} set to only allow TLS in ssl-server.xml, but 
 when I checked, I could still connect with SSLv3.  The documentation is 
 somewhat unclear in the tomcat configs about the difference between 
 {{sslProtocol}}, {{sslProtocols}}, and {{sslEnabledProtocols}} and what each 
 value does exactly.  From what I can gather, {{sslProtocol=TLS}} actually 
 includes SSLv3, and the only way to fix this is to explicitly list which TLS 
 versions we support.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7274) Disable SSLv3 in HttpFS

2014-10-28 Thread Karthik Kambatla (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7274?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karthik Kambatla updated HDFS-7274:
---
   Resolution: Fixed
Fix Version/s: 2.6.0
 Hadoop Flags: Reviewed
   Status: Resolved  (was: Patch Available)

Thanks Robert for the patch, and ATM for the review. Just committed this to 
trunk, branch-2 and branch-2.6.

 Disable SSLv3 in HttpFS
 ---

 Key: HDFS-7274
 URL: https://issues.apache.org/jira/browse/HDFS-7274
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: webhdfs
Affects Versions: 2.6.0
Reporter: Robert Kanter
Assignee: Robert Kanter
Priority: Blocker
 Fix For: 2.6.0

 Attachments: HDFS-7274.patch, HDFS-7274.patch


 We should disable SSLv3 in HttpFS to protect against the POODLEbleed 
 vulnerability.
 See 
 [CVE-2014-3566|http://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2014-3566]
 We have {{sslProtocol=TLS}} set to only allow TLS in ssl-server.xml, but 
 when I checked, I could still connect with SSLv3.  The documentation is 
 somewhat unclear in the tomcat configs about the difference between 
 {{sslProtocol}}, {{sslProtocols}}, and {{sslEnabledProtocols}} and what each 
 value does exactly.  From what I can gather, {{sslProtocol=TLS}} actually 
 includes SSLv3, and the only way to fix this is to explicitly list which TLS 
 versions we support.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (HDFS-7275) Add TLSv1.1,TLSv1.2 to HttpFS

2014-10-28 Thread Karthik Kambatla (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7275?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karthik Kambatla resolved HDFS-7275.

Resolution: Duplicate

 Add TLSv1.1,TLSv1.2 to HttpFS
 -

 Key: HDFS-7275
 URL: https://issues.apache.org/jira/browse/HDFS-7275
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: webhdfs
Affects Versions: 2.7.0
Reporter: Robert Kanter

 HDFS-7274 required us to specifically list the versions of TLS that HttpFS 
 supports. With Hadoop 2.7 dropping support for Java 6 and Java 7 supporting 
 TLSv1.1 and TLSv1.2, we should add them to the list.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7172) Test data files may be checked out of git with incorrect line endings, causing test failures in TestHDFSCLI.

2014-10-01 Thread Karthik Kambatla (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7172?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14155675#comment-14155675
 ] 

Karthik Kambatla commented on HDFS-7172:


Can we add a rat exclude for the newly added .gitattributes file? 

 Test data files may be checked out of git with incorrect line endings, 
 causing test failures in TestHDFSCLI.
 

 Key: HDFS-7172
 URL: https://issues.apache.org/jira/browse/HDFS-7172
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: test
Reporter: Chris Nauroth
Assignee: Chris Nauroth
Priority: Trivial
 Fix For: 2.6.0

 Attachments: HDFS-7172.1.patch


 {{TestHDFSCLI}} uses several files at src/test/resource/data* as test input 
 files.  Some of the tests expect a specific length for these files.  If they 
 get checked out of git with CRLF line endings by mistake, then the test 
 assertions will fail.
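A .gitattributes guard against this might look roughly like the following (the paths are illustrative; the file actually committed for this issue may differ): mark the byte-sensitive test inputs so git never applies CRLF translation on checkout.

```text
# Hypothetical .gitattributes entries in the spirit of this fix:
# disable line-ending conversion for the length-sensitive test data.
src/test/resources/data* -text
```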



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7172) Test data files may be checked out of git with incorrect line endings, causing test failures in TestHDFSCLI.

2014-10-01 Thread Karthik Kambatla (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7172?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14155746#comment-14155746
 ] 

Karthik Kambatla commented on HDFS-7172:


Yep, the addendum looks good to me. 

 Test data files may be checked out of git with incorrect line endings, 
 causing test failures in TestHDFSCLI.
 

 Key: HDFS-7172
 URL: https://issues.apache.org/jira/browse/HDFS-7172
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: test
Reporter: Chris Nauroth
Assignee: Chris Nauroth
Priority: Trivial
 Fix For: 2.6.0

 Attachments: HDFS-7172.1.patch, HDFS-7172.rat.patch


 {{TestHDFSCLI}} uses several files at src/test/resource/data* as test input 
 files.  Some of the tests expect a specific length for these files.  If they 
 get checked out of git with CRLF line endings by mistake, then the test 
 assertions will fail.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-6983) TestBalancer#testExitZeroOnSuccess fails intermittently

2014-09-03 Thread Karthik Kambatla (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6983?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karthik Kambatla updated HDFS-6983:
---
Target Version/s: 2.6.0  (was: 2.5.1)

Targeting this to 2.6, since this is only a test failure. 

 TestBalancer#testExitZeroOnSuccess fails intermittently
 ---

 Key: HDFS-6983
 URL: https://issues.apache.org/jira/browse/HDFS-6983
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.5.1
Reporter: Mit Desai

 TestBalancer#testExitZeroOnSuccess fails intermittently on branch-2. And 
 probably fails on trunk too.
 The test fails 1 in 20 times when I ran it in a loop. Here is how it 
 fails.
 {noformat}
 org.apache.hadoop.hdfs.server.balancer.TestBalancer
 testExitZeroOnSuccess(org.apache.hadoop.hdfs.server.balancer.TestBalancer)  
 Time elapsed: 53.965 sec   ERROR!
 java.util.concurrent.TimeoutException: Rebalancing expected avg utilization 
 to become 0.2, but on datanode 127.0.0.1:35502 it remains at 0.08 after more 
 than 4 msec.
   at 
 org.apache.hadoop.hdfs.server.balancer.TestBalancer.waitForBalancer(TestBalancer.java:321)
   at 
 org.apache.hadoop.hdfs.server.balancer.TestBalancer.runBalancerCli(TestBalancer.java:632)
   at 
 org.apache.hadoop.hdfs.server.balancer.TestBalancer.doTest(TestBalancer.java:549)
   at 
 org.apache.hadoop.hdfs.server.balancer.TestBalancer.doTest(TestBalancer.java:437)
   at 
 org.apache.hadoop.hdfs.server.balancer.TestBalancer.oneNodeTest(TestBalancer.java:645)
   at 
 org.apache.hadoop.hdfs.server.balancer.TestBalancer.testExitZeroOnSuccess(TestBalancer.java:845)
 Results :
 Tests in error: 
   
 TestBalancer.testExitZeroOnSuccess:845-oneNodeTest:645-doTest:437-doTest:549-runBalancerCli:632-waitForBalancer:321
  Timeout
 {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-6914) Resolve huge memory consumption Issue with OIV processing PB-based fsimages

2014-09-03 Thread Karthik Kambatla (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6914?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karthik Kambatla updated HDFS-6914:
---
Target Version/s: 2.6.0  (was: 2.5.1)
   Fix Version/s: (was: 2.5.1)

Targeting 2.6.0. Also, the fixVersion is to be set at commit time by the person 
committing.

 Resolve huge memory consumption Issue with OIV processing PB-based fsimages
 ---

 Key: HDFS-6914
 URL: https://issues.apache.org/jira/browse/HDFS-6914
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.4.1
Reporter: Hao Chen
  Labels: hdfs
 Attachments: HDFS-6914.patch


 To better manage and support a lot of large hadoop clusters in production, 
 we internally need to automatically export the fsimage to delimited text 
 files in LSR style and then analyse them with hive or pig, or build system 
 metrics for real-time analysis. 
 However, due to the internal layout changes introduced by the protobuf-based 
 fsimage, the OIV processing program consumes an excessive amount of memory. 
 For example, exporting an fsimage of 8GB took about 85GB of memory, which is 
 not reasonable and badly impacted the performance of other services on the 
 same server.
 To resolve the above problem, I am submitting this patch, which reduces the 
 memory consumption of OIV LSR processing by 50%.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-6914) Resolve huge memory consumption Issue with OIV processing PB-based fsimages

2014-08-28 Thread Karthik Kambatla (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6914?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14113415#comment-14113415
 ] 

Karthik Kambatla commented on HDFS-6914:


Do we really want to target this to 2.5.1? If not, I would suggest targeting 
2.6. 

 Resolve huge memory consumption Issue with OIV processing PB-based fsimages
 ---

 Key: HDFS-6914
 URL: https://issues.apache.org/jira/browse/HDFS-6914
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.4.1
Reporter: Hao Chen
  Labels: hdfs
 Fix For: 2.5.1

 Attachments: HDFS-6914.patch


 To better manage and support a lot of large hadoop clusters in production, 
 we internally need to automatically export the fsimage to delimited text 
 files in LSR style and then analyse them with hive or pig, or build system 
 metrics for real-time analysis. 
 However, due to the internal layout changes introduced by the protobuf-based 
 fsimage, the OIV processing program consumes an excessive amount of memory. 
 For example, exporting an fsimage of 8GB took about 85GB of memory, which is 
 not reasonable and badly impacted the performance of other services on the 
 same server.
 To resolve the above problem, I am submitting this patch, which reduces the 
 memory consumption of OIV LSR processing by 50%.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HDFS-3744) Decommissioned nodes are included in cluster after switch which is not expected

2014-08-14 Thread Karthik Kambatla (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-3744?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karthik Kambatla updated HDFS-3744:
---

Fix Version/s: (was: 2.5.0)

 Decommissioned nodes are included in cluster after switch which is not 
 expected
 ---

 Key: HDFS-3744
 URL: https://issues.apache.org/jira/browse/HDFS-3744
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: ha
Affects Versions: 2.0.0-alpha, 3.0.0, 2.0.1-alpha, 2.1.0-beta
Reporter: Brahma Reddy Battula
Assignee: Vinayakumar B
 Attachments: HDFS-3744.patch


 Scenario:
 =
 Start ANN and SNN with three DN's.
 Exclude DN1 from the cluster by using the decommission feature 
 (./hdfs dfsadmin -fs hdfs://ANNIP:8020 -refreshNodes).
 After the decommission succeeds, do a switch so that the SNN becomes Active.
 Now the excluded node (DN1) is included in the cluster again, and we are able 
 to write files to it since it is no longer excluded.
 The SNN (Active before the switch) UI shows decommissioned=1, while the ANN 
 UI shows decommissioned=0.
 One more observation:
 
 All dfsadmin commands create a proxy only on nn1, irrespective of which NN is 
 Active or standby. I think we need to re-look at this as well: why don't the 
 dfsadmin commands support HA? Please correct me if I am wrong.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HDFS-5503) Datanode#checkDiskError also should check for ClosedChannelException

2014-08-14 Thread Karthik Kambatla (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5503?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karthik Kambatla updated HDFS-5503:
---

Fix Version/s: (was: 2.5.0)

 Datanode#checkDiskError also should check for ClosedChannelException
 

 Key: HDFS-5503
 URL: https://issues.apache.org/jira/browse/HDFS-5503
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Vinayakumar B
Assignee: Vinayakumar B
 Attachments: HDFS-5503.patch


 out file
 ==
 {noformat}
 Exception in thread PacketResponder: 
 BP-52063768-x-1383447451718:blk_1073755206_1099511661730, 
 type=LAST_IN_PIPELINE, downstreams=0:[] java.lang.NullPointerException
 at 
 org.apache.hadoop.hdfs.server.datanode.DataNode.checkDiskError(DataNode.java:1363)
 at 
 org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1233)
 at java.lang.Thread.run(Thread.java:662){noformat}
 log file
 ===
 {noformat}2013-11-08 21:23:36,622 WARN 
 org.apache.hadoop.hdfs.server.datanode.DataNode: IOException in 
 BlockReceiver.run():
 java.nio.channels.ClosedChannelException
 at 
 sun.nio.ch.SocketChannelImpl.ensureWriteOpen(SocketChannelImpl.java:133)
 at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:324)
 at 
 org.apache.hadoop.net.SocketOutputStream$Writer.performIO(SocketOutputStream.java:63)
 at 
 org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:142)
 at 
 org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:159)
 at 
 org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:117)
 at 
 java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:65)
 at java.io.BufferedOutputStream.flush(BufferedOutputStream.java:123)
 at java.io.DataOutputStream.flush(DataOutputStream.java:106)
 at 
 org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1212)
 at java.lang.Thread.run(Thread.java:662)
 2013-11-08 21:23:36,622 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: 
 checkDiskError: exception:
 java.nio.channels.ClosedChannelException
 at 
 sun.nio.ch.SocketChannelImpl.ensureWriteOpen(SocketChannelImpl.java:133)
 at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:324)
 at 
 org.apache.hadoop.net.SocketOutputStream$Writer.performIO(SocketOutputStream.java:63)
 at 
 org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:142)
 at 
 org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:159)
 at 
 org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:117)
 at 
 java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:65)
 at java.io.BufferedOutputStream.flush(BufferedOutputStream.java:123)
 at java.io.DataOutputStream.flush(DataOutputStream.java:106)
 at 
 org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1212)
 at java.lang.Thread.run(Thread.java:662){noformat}
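The check being asked for can be sketched as follows (class and method names are illustrative, not the actual DataNode code): a closed channel or socket error indicates the remote side went away, which is a network condition, so it should not be treated as a disk fault.

```java
import java.io.IOException;
import java.net.SocketException;
import java.nio.channels.ClosedChannelException;

// Illustrative filter in the spirit of checkDiskError: ignore exceptions
// that reflect connection state rather than a genuinely bad disk.
class DiskErrorFilter {
    static boolean indicatesDiskError(IOException e) {
        if (e instanceof ClosedChannelException) {
            return false; // connection closed; the disk is fine
        }
        if (e instanceof SocketException) {
            return false; // network problem, also not a disk fault
        }
        return true; // anything else may genuinely be a bad disk
    }
}
```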



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HDFS-3907) Allow multiple users for local block readers

2014-08-14 Thread Karthik Kambatla (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-3907?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karthik Kambatla updated HDFS-3907:
---

Fix Version/s: (was: 2.5.0)
   2.6.0

 Allow multiple users for local block readers
 

 Key: HDFS-3907
 URL: https://issues.apache.org/jira/browse/HDFS-3907
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: datanode
Affects Versions: 1.0.0, 2.0.0-alpha
Reporter: Eli Collins
Assignee: Eli Collins
 Fix For: 2.6.0

 Attachments: hdfs-3907.txt


 The {{dfs.block.local-path-access.user}} config added in HDFS-2246 only 
 supports a single user. However, as long as blocks are group-readable by more 
 than one user, the feature could be used by multiple users; to support this we 
 just need to allow multiple users to be configured. In practice this also lets 
 us support HBase, where the client (RS) runs as the hbase system user and the 
 DN runs as the hdfs system user. I think this should work in secure mode as 
 well, since we're not using impersonation in the HBase case.
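 As a sketch of what the proposed change implies for configuration, the 
 single-user key could instead take a comma-separated list (the property name is 
 from this issue; the multi-user value format shown is an assumption, and the 
 exact syntax depends on the final patch):
 {code}
 <property>
   <name>dfs.block.local-path-access.user</name>
   <!-- hypothetical: both the hbase and hdfs system users allowed -->
   <value>hbase,hdfs</value>
 </property>
 {code}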



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HDFS-5098) Enhance FileSystem.Statistics to have locality information

2014-08-14 Thread Karthik Kambatla (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5098?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karthik Kambatla updated HDFS-5098:
---

Fix Version/s: (was: 2.5.0)
   2.6.0

 Enhance FileSystem.Statistics to have locality information
 --

 Key: HDFS-5098
 URL: https://issues.apache.org/jira/browse/HDFS-5098
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Bikas Saha
Assignee: Suresh Srinivas
 Fix For: 2.6.0


 Currently in MR/Tez we don't have a good and accurate means of detecting how 
 much of the IO was actually done locally. Getting this information from the 
 source of truth would be much better.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HDFS-3592) libhdfs should expose ClientProtocol::mkdirs2

2014-08-14 Thread Karthik Kambatla (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-3592?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karthik Kambatla updated HDFS-3592:
---

Fix Version/s: (was: 2.5.0)
   2.6.0

 libhdfs should expose ClientProtocol::mkdirs2
 -

 Key: HDFS-3592
 URL: https://issues.apache.org/jira/browse/HDFS-3592
 Project: Hadoop HDFS
  Issue Type: Improvement
Affects Versions: 2.0.0-alpha, 3.0.0
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe
Priority: Minor
 Fix For: 3.0.0, 2.6.0


 It would be nice if libhdfs exposed mkdirs2.  This version of mkdirs is much 
 more verbose about any errors that occur -- it throws AccessControlException, 
 FileAlreadyExistsException, FileNotFoundException, ParentNotDirectoryException, 
 etc.  The original mkdirs just throws IOException if anything goes wrong.
 For something like fuse_dfs, it is very important to return the correct errno 
 code when an error has occurred.  mkdirs2 would allow us to do that.
 I'm not sure if we should just change hdfsMkdirs to use mkdirs2, or add an 
 hdfsMkdirs2.  Probably the latter, but the former course would maintain bug 
 compatibility with ancient releases-- if that is important.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HDFS-5687) Problem in accessing NN JSP page

2014-08-14 Thread Karthik Kambatla (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5687?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karthik Kambatla updated HDFS-5687:
---

Fix Version/s: (was: 2.5.0)
   2.6.0

 Problem in accessing NN JSP page
 

 Key: HDFS-5687
 URL: https://issues.apache.org/jira/browse/HDFS-5687
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: hdfs-client
Affects Versions: 2.3.0
Reporter: sathish
Assignee: sathish
Priority: Minor
 Fix For: 2.6.0

 Attachments: HDFS-5687-0001.patch


 In the NN UI, after clicking the Browse File System link and then the GO BACK 
 TO DFS HOME icon on that page, the dfshealth.jsp page is not accessible.
 The NN http URL comes out as http://nnaddr///nninfoaddr/dfshealth.jsp, which I 
 think is why that page does not load.
 It should be like http://nninfoaddr/dfshealth.jsp/ instead.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HDFS-4066) Fix compile warnings for libwebhdfs

2014-08-14 Thread Karthik Kambatla (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4066?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karthik Kambatla updated HDFS-4066:
---

Fix Version/s: (was: 2.5.0)
   2.6.0

 Fix compile warnings for libwebhdfs
 ---

 Key: HDFS-4066
 URL: https://issues.apache.org/jira/browse/HDFS-4066
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: webhdfs
Affects Versions: 3.0.0
Reporter: Jing Zhao
Assignee: Jing Zhao
 Fix For: 2.6.0


 The compile of libwebhdfs still generates a bunch of warnings, which need to 
 be fixed.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HDFS-5160) MiniDFSCluster webui does not work

2014-08-14 Thread Karthik Kambatla (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5160?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karthik Kambatla updated HDFS-5160:
---

Fix Version/s: (was: 2.5.0)
   2.6.0

 MiniDFSCluster webui does not work
 --

 Key: HDFS-5160
 URL: https://issues.apache.org/jira/browse/HDFS-5160
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: test
Affects Versions: 2.1.0-beta
Reporter: Alejandro Abdelnur
 Fix For: 2.6.0


 The webui does not work; when going to http://localhost:50070 you get:
 {code}
 Directory: /
 webapps/  102 bytes   Sep 4, 2013 9:32:55 AM
 {code}



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HDFS-6300) Allows to run multiple balancer simultaneously

2014-08-14 Thread Karthik Kambatla (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6300?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karthik Kambatla updated HDFS-6300:
---

Fix Version/s: (was: 2.5.0)
   2.6.0

 Allows to run multiple balancer simultaneously
 --

 Key: HDFS-6300
 URL: https://issues.apache.org/jira/browse/HDFS-6300
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: balancer
Reporter: Rakesh R
Assignee: Rakesh R
 Fix For: 2.6.0

 Attachments: HDFS-6300.patch


 The Javadoc of Balancer.java says it will not allow a second balancer to run 
 if the first one is in progress. But I've noticed that multiple balancers can 
 run together, and the balancer.id implementation is not safeguarding against it.
 {code}
  * <li>Another balancer is running. Exiting...
 {code}



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HDFS-3761) Uses of AuthenticatedURL should set the configured timeouts

2014-08-14 Thread Karthik Kambatla (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-3761?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karthik Kambatla updated HDFS-3761:
---

Fix Version/s: (was: 2.5.0)
   2.6.0

 Uses of AuthenticatedURL should set the configured timeouts
 ---

 Key: HDFS-3761
 URL: https://issues.apache.org/jira/browse/HDFS-3761
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.0.0-alpha, 2.0.2-alpha
Reporter: Alejandro Abdelnur
Priority: Critical
 Fix For: 2.6.0


 Similar to {{URLUtils}}, {{AuthenticatedURL}} should set the configured 
 timeouts on URL connection instances. HADOOP-8644 is introducing a 
 {{ConnectionConfigurator}} interface to the {{AuthenticatedURL}} class; it 
 should be done via that interface.



--
This message was sent by Atlassian JIRA
(v6.2#6252)



[jira] [Updated] (HDFS-3988) HttpFS can't do GETDELEGATIONTOKEN without a prior authenticated request

2014-08-14 Thread Karthik Kambatla (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-3988?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karthik Kambatla updated HDFS-3988:
---

Fix Version/s: (was: 2.5.0)
   2.6.0

 HttpFS can't do GETDELEGATIONTOKEN without a prior authenticated request
 

 Key: HDFS-3988
 URL: https://issues.apache.org/jira/browse/HDFS-3988
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.0.2-alpha
Reporter: Alejandro Abdelnur
Assignee: Alejandro Abdelnur
 Fix For: 2.6.0


 A request to obtain a delegation token cannot itself initiate an authentication 
 sequence; it must be accompanied by an auth cookie obtained in a previous 
 request using a different operation.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HDFS-4198) the shell script error for Cygwin on windows7

2014-08-14 Thread Karthik Kambatla (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4198?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karthik Kambatla updated HDFS-4198:
---

Fix Version/s: (was: 2.5.0)
   2.6.0

 the shell script error for Cygwin on windows7
 -

 Key: HDFS-4198
 URL: https://issues.apache.org/jira/browse/HDFS-4198
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: scripts
Affects Versions: 2.0.2-alpha
 Environment: windows7, cygwin.
Reporter: Han Hui Wen 
 Fix For: 2.6.0


 See the following 
 [comment|https://issues.apache.org/jira/browse/HDFS-4198?focusedCommentId=13498818&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-13498818]
  for detailed description.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6717) Jira HDFS-5804 breaks default nfs-gateway behavior for unsecured config

2014-08-06 Thread Karthik Kambatla (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6717?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14087900#comment-14087900
 ] 

Karthik Kambatla commented on HDFS-6717:


No problem, [~brandonli]. The addendum caught me between RCs. I just reverted 
the original documentation fix as well from branch-2 and branch-2.5, so that 
the entire fix goes in one release - 2.6.0. I hope this is okay, given it is a 
doc improvement. 

 Jira HDFS-5804 breaks default nfs-gateway behavior for unsecured config
 ---

 Key: HDFS-6717
 URL: https://issues.apache.org/jira/browse/HDFS-6717
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: nfs
Affects Versions: 2.4.0
Reporter: Jeff Hansen
Assignee: Brandon Li
Priority: Minor
 Fix For: 2.6.0

 Attachments: HDFS-6717.001.patch, HDFS-6717.more-change.patch, 
 HDFS-6717.more-change2.patch, HDFS-6717.more-change3.patch, 
 HdfsNfsGateway.html


 I believe this is just a matter of needing to update documentation. As a 
 result of https://issues.apache.org/jira/browse/HDFS-5804, the secure and 
 unsecure code paths appear to have been merged -- this is great because it 
 means less code to test. However, it means that the default unsecure behavior 
 requires additional configuration that needs to be documented. 
 I'm not the first to have trouble following the instructions documented in 
 http://hadoop.apache.org/docs/r2.4.0/hadoop-project-dist/hadoop-hdfs/HdfsNfsGateway.html
 I kept hitting a RemoteException with the message that hdfs user cannot 
 impersonate root -- apparently under the old code, there was no impersonation 
 going on, so the nfs3 service could and should be run under the same user id 
 that runs hadoop (I assumed this meant the user id hdfs). However, with the 
 new unified code path, that would require hdfs to be able to impersonate root 
 (because root is always the local user that mounts a drive). The comments in 
 jira hdfs-5804 seem to indicate nobody has a problem with requiring the 
 nfsserver user to impersonate root -- if that means it's necessary for the 
 configuration to include root as a user nfsserver can impersonate, that 
 should be included in the setup instructions.
 More to the point, it appears to be absolutely necessary now to provision a 
 user named nfsserver in order to be able to give that nfsserver ability to 
 impersonate other users. Alternatively I think we'd need to configure hdfs to 
 be able to proxy other users. I'm not really sure what the best practice 
 should be, but it should be documented since it wasn't needed in the past.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Reopened] (HDFS-6717) Jira HDFS-5804 breaks default nfs-gateway behavior for unsecured config

2014-08-05 Thread Karthik Kambatla (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6717?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karthik Kambatla reopened HDFS-6717:



Looks like this got mistakenly committed to branch-2.5, given this is a Minor 
issue. 

I am reverting this from branch-2.5 and targeting 2.6.0

 Jira HDFS-5804 breaks default nfs-gateway behavior for unsecured config
 ---

 Key: HDFS-6717
 URL: https://issues.apache.org/jira/browse/HDFS-6717
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: nfs
Affects Versions: 2.4.0
Reporter: Jeff Hansen
Assignee: Brandon Li
Priority: Minor
 Fix For: 2.5.0

 Attachments: HDFS-6717.001.patch, HDFS-6717.more-change.patch, 
 HDFS-6717.more-change2.patch, HDFS-6717.more-change3.patch, 
 HdfsNfsGateway.html


 I believe this is just a matter of needing to update documentation. As a 
 result of https://issues.apache.org/jira/browse/HDFS-5804, the secure and 
 unsecure code paths appear to have been merged -- this is great because it 
 means less code to test. However, it means that the default unsecure behavior 
 requires additional configuration that needs to be documented. 
 I'm not the first to have trouble following the instructions documented in 
 http://hadoop.apache.org/docs/r2.4.0/hadoop-project-dist/hadoop-hdfs/HdfsNfsGateway.html
 I kept hitting a RemoteException with the message that hdfs user cannot 
 impersonate root -- apparently under the old code, there was no impersonation 
 going on, so the nfs3 service could and should be run under the same user id 
 that runs hadoop (I assumed this meant the user id hdfs). However, with the 
 new unified code path, that would require hdfs to be able to impersonate root 
 (because root is always the local user that mounts a drive). The comments in 
 jira hdfs-5804 seem to indicate nobody has a problem with requiring the 
 nfsserver user to impersonate root -- if that means it's necessary for the 
 configuration to include root as a user nfsserver can impersonate, that 
 should be included in the setup instructions.
 More to the point, it appears to be absolutely necessary now to provision a 
 user named nfsserver in order to be able to give that nfsserver ability to 
 impersonate other users. Alternatively I think we'd need to configure hdfs to 
 be able to proxy other users. I'm not really sure what the best practice 
 should be, but it should be documented since it wasn't needed in the past.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Resolved] (HDFS-6717) Jira HDFS-5804 breaks default nfs-gateway behavior for unsecured config

2014-08-05 Thread Karthik Kambatla (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6717?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karthik Kambatla resolved HDFS-6717.


   Resolution: Fixed
Fix Version/s: (was: 2.5.0)
   2.6.0

Reverted from branch-2.5 and branch-2.5.0 and updated CHANGES.txt accordingly. 

 Jira HDFS-5804 breaks default nfs-gateway behavior for unsecured config
 ---

 Key: HDFS-6717
 URL: https://issues.apache.org/jira/browse/HDFS-6717
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: nfs
Affects Versions: 2.4.0
Reporter: Jeff Hansen
Assignee: Brandon Li
Priority: Minor
 Fix For: 2.6.0

 Attachments: HDFS-6717.001.patch, HDFS-6717.more-change.patch, 
 HDFS-6717.more-change2.patch, HDFS-6717.more-change3.patch, 
 HdfsNfsGateway.html


 I believe this is just a matter of needing to update documentation. As a 
 result of https://issues.apache.org/jira/browse/HDFS-5804, the secure and 
 unsecure code paths appear to have been merged -- this is great because it 
 means less code to test. However, it means that the default unsecure behavior 
 requires additional configuration that needs to be documented. 
 I'm not the first to have trouble following the instructions documented in 
 http://hadoop.apache.org/docs/r2.4.0/hadoop-project-dist/hadoop-hdfs/HdfsNfsGateway.html
 I kept hitting a RemoteException with the message that hdfs user cannot 
 impersonate root -- apparently under the old code, there was no impersonation 
 going on, so the nfs3 service could and should be run under the same user id 
 that runs hadoop (I assumed this meant the user id hdfs). However, with the 
 new unified code path, that would require hdfs to be able to impersonate root 
 (because root is always the local user that mounts a drive). The comments in 
 jira hdfs-5804 seem to indicate nobody has a problem with requiring the 
 nfsserver user to impersonate root -- if that means it's necessary for the 
 configuration to include root as a user nfsserver can impersonate, that 
 should be included in the setup instructions.
 More to the point, it appears to be absolutely necessary now to provision a 
 user named nfsserver in order to be able to give that nfsserver ability to 
 impersonate other users. Alternatively I think we'd need to configure hdfs to 
 be able to proxy other users. I'm not really sure what the best practice 
 should be, but it should be documented since it wasn't needed in the past.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HDFS-5624) Add tests for ACLs in combination with viewfs.

2014-07-02 Thread Karthik Kambatla (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5624?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karthik Kambatla updated HDFS-5624:
---

Target Version/s: 2.6.0  (was: 2.5.0)

 Add tests for ACLs in combination with viewfs.
 --

 Key: HDFS-5624
 URL: https://issues.apache.org/jira/browse/HDFS-5624
 Project: Hadoop HDFS
  Issue Type: Test
  Components: hdfs-client
Affects Versions: 2.4.0
Reporter: Chris Nauroth

 Add tests verifying that in a federated deployment, a viewfs wrapped over 
 multiple federated NameNodes will dispatch the ACL operations to the correct 
 NameNode.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HDFS-6185) HDFS operational and debuggability improvements

2014-07-02 Thread Karthik Kambatla (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6185?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karthik Kambatla updated HDFS-6185:
---

Target Version/s: 2.6.0  (was: 2.5.0)

 HDFS operational and debuggability improvements
 ---

 Key: HDFS-6185
 URL: https://issues.apache.org/jira/browse/HDFS-6185
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Suresh Srinivas
Assignee: Suresh Srinivas

 This umbrella jira proposes improvements in HDFS to make operations simpler 
 and the system more robust. The areas of improvement include ensuring that 
 configuration is correct and follows the known best practices, making HDFS 
 robust against some of the failures observed when a cluster is not being 
 monitored correctly, better metadata management to avoid lengthy startup 
 times, etc.
 Subtasks will be filed with more details on individual improvements.
 The goal is to simplify operations and improve debuggability and robustness. 
 If you have ideas on improvements that fall under this umbrella, please file 
 subtasks under this jira.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HDFS-6359) WebHdfs NN servlet issues redirects in safemode or standby

2014-07-02 Thread Karthik Kambatla (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6359?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karthik Kambatla updated HDFS-6359:
---

Target Version/s: 2.6.0  (was: 2.5.0)

 WebHdfs NN servlet issues redirects in safemode or standby
 --

 Key: HDFS-6359
 URL: https://issues.apache.org/jira/browse/HDFS-6359
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: webhdfs
Affects Versions: 2.0.0-alpha, 3.0.0
Reporter: Daryn Sharp
Assignee: Daryn Sharp
Priority: Critical

 Webhdfs does not check for safemode or standby when issuing a redirect for 
 open/create/checksum calls.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HDFS-6519) Document oiv_legacy command

2014-07-02 Thread Karthik Kambatla (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6519?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karthik Kambatla updated HDFS-6519:
---

Target Version/s: 2.6.0  (was: 2.5.0)

 Document oiv_legacy command
 ---

 Key: HDFS-6519
 URL: https://issues.apache.org/jira/browse/HDFS-6519
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: documentation
Affects Versions: 2.5.0
Reporter: Akira AJISAKA
Assignee: Akira AJISAKA
 Attachments: HDFS-6519.patch


 HDFS-6293 introduced oiv_legacy command.
 The usage of the command should be included in OfflineImageViewer.apt.vm.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HDFS-6122) Rebalance cached replicas between datanodes

2014-07-02 Thread Karthik Kambatla (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6122?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karthik Kambatla updated HDFS-6122:
---

Target Version/s: 2.6.0  (was: 2.5.0)

 Rebalance cached replicas between datanodes
 ---

 Key: HDFS-6122
 URL: https://issues.apache.org/jira/browse/HDFS-6122
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: caching
Affects Versions: 2.3.0
Reporter: Andrew Wang
Assignee: Andrew Wang

 It'd be nice if the NameNode was able to rebalance cache usage among 
 datanodes. This would help avoid situations where the only three DNs with a 
 replica are full and there is still cache space on the rest of the cluster. 
 It'll also probably help for heterogeneous node sizes and when adding new 
 nodes to the cluster or doing a rolling restart.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HDFS-6401) WebHdfs should always use the network failover policy

2014-07-02 Thread Karthik Kambatla (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6401?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karthik Kambatla updated HDFS-6401:
---

Target Version/s: 2.6.0  (was: 3.0.0, 2.5.0)

 WebHdfs should always use the network failover policy
 -

 Key: HDFS-6401
 URL: https://issues.apache.org/jira/browse/HDFS-6401
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: webhdfs
Affects Versions: 2.0.0-alpha, 3.0.0
Reporter: Daryn Sharp
Assignee: Daryn Sharp
Priority: Critical

 Webhdfs only uses the network failover policy if HA is enabled.  The policy 
 adds retries for exceptions such as connect failures, which are always useful. 
 The proxy also provides support for standby and retriable exceptions, which 
 are required for HA IP-based failover because the client does not know whether 
 the NN is HA-capable or not.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HDFS-6088) Add configurable maximum block count for datanode

2014-07-02 Thread Karthik Kambatla (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6088?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karthik Kambatla updated HDFS-6088:
---

Target Version/s: 2.6.0  (was: 2.5.0)

 Add configurable maximum block count for datanode
 -

 Key: HDFS-6088
 URL: https://issues.apache.org/jira/browse/HDFS-6088
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Kihwal Lee

 Currently datanode resources are protected by the free space check and the 
 balancer.  But datanodes can run out of memory simply storing too many 
 blocks. If the sizes of blocks are small, datanodes will appear to have 
 plenty of space to put more blocks.
 I propose adding a configurable max block count to the datanode. Since 
 datanodes can have different heap configurations, it makes sense to enforce 
 this at the datanode level, rather than having it enforced by the namenode.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HDFS-6450) Support non-positional hedged reads in HDFS

2014-07-02 Thread Karthik Kambatla (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6450?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karthik Kambatla updated HDFS-6450:
---

Target Version/s: 2.6.0  (was: 2.5.0)

 Support non-positional hedged reads in HDFS
 ---

 Key: HDFS-6450
 URL: https://issues.apache.org/jira/browse/HDFS-6450
 Project: Hadoop HDFS
  Issue Type: Improvement
Affects Versions: 2.4.0
Reporter: Colin Patrick McCabe
Assignee: Liang Xie

 HDFS-5776 added support for hedged positional reads.  We should also support 
 hedged non-positional reads (aka regular reads).



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HDFS-6481) DatanodeManager#getDatanodeStorageInfos() should check the length of storageIDs

2014-07-02 Thread Karthik Kambatla (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6481?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karthik Kambatla updated HDFS-6481:
---

Target Version/s: 2.6.0  (was: 2.5.0)

 DatanodeManager#getDatanodeStorageInfos() should check the length of 
 storageIDs
 ---

 Key: HDFS-6481
 URL: https://issues.apache.org/jira/browse/HDFS-6481
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.3.0
Reporter: Ted Yu
Assignee: Ted Yu
 Attachments: hdfs-6481-v1.txt


 Ian Brooks reported the following stack trace:
 {code}
 2014-06-03 13:05:03,915 WARN  [DataStreamer for file 
 /user/hbase/WALs/,16020,1401716790638/%2C16020%2C1401716790638.1401796562200
  block BP-2121456822-10.143.38.149-1396953188241:blk_1074073683_332932] 
 hdfs.DFSClient: DataStreamer Exception
 org.apache.hadoop.ipc.RemoteException(java.lang.ArrayIndexOutOfBoundsException):
  0
 at 
 org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager.getDatanodeStorageInfos(DatanodeManager.java:467)
 at 
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalDatanode(FSNamesystem.java:2779)
 at 
 org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getAdditionalDatanode(NameNodeRpcServer.java:594)
 at 
 org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getAdditionalDatanode(ClientNamenodeProtocolServerSideTranslatorPB.java:430)
 at 
 org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
 at 
 org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:585)
 at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:928)
 at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1962)
 at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1958)
 at java.security.AccessController.doPrivileged(Native Method)
 at javax.security.auth.Subject.doAs(Subject.java:415)
 at 
 org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1548)
 at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1956)
 at org.apache.hadoop.ipc.Client.call(Client.java:1347)
 at org.apache.hadoop.ipc.Client.call(Client.java:1300)
 at 
 org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206)
 at com.sun.proxy.$Proxy13.getAdditionalDatanode(Unknown Source)
 at 
 org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getAdditionalDatanode(ClientNamenodeProtocolTranslatorPB.java:352)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
 at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 at java.lang.reflect.Method.invoke(Method.java:606)
 at 
 org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:186)
 at 
 org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
 at com.sun.proxy.$Proxy14.getAdditionalDatanode(Unknown Source)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
 at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 at java.lang.reflect.Method.invoke(Method.java:606)
 at 
 org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:266)
 at com.sun.proxy.$Proxy15.getAdditionalDatanode(Unknown Source)
 at 
 org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.addDatanode2ExistingPipeline(DFSOutputStream.java:919)
 at 
 org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.setupPipelineForAppendOrRecovery(DFSOutputStream.java:1031)
 at 
 org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.processDatanodeError(DFSOutputStream.java:823)
 at 
 org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:475)
 2014-06-03 13:05:48,489 ERROR [RpcServer.handler=22,port=16020] wal.FSHLog: 
 syncer encountered error, will retry. txid=211
 org.apache.hadoop.ipc.RemoteException(java.lang.ArrayIndexOutOfBoundsException):
  0
 at 
 org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager.getDatanodeStorageInfos(DatanodeManager.java:467)
 at 
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalDatanode(FSNamesystem.java:2779)

[jira] [Updated] (HDFS-5809) BlockPoolSliceScanner and high speed hdfs appending make datanode to drop into infinite loop

2014-07-02 Thread Karthik Kambatla (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5809?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karthik Kambatla updated HDFS-5809:
---

Target Version/s: 2.6.0  (was: 2.5.0)

 BlockPoolSliceScanner and high speed hdfs appending make datanode to drop 
 into infinite loop
 

 Key: HDFS-5809
 URL: https://issues.apache.org/jira/browse/HDFS-5809
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: datanode
Affects Versions: 2.0.0-alpha
 Environment: jdk1.6, centos6.4, 2.0.0-cdh4.5.0
Reporter: ikweesung
Assignee: Colin Patrick McCabe
Priority: Critical
  Labels: blockpoolslicescanner, datanode, infinite-loop
 Attachments: HDFS-5809.001.patch


 Hello, everyone.
 When the hadoop cluster starts, BlockPoolSliceScanner starts scanning the 
 blocks in my cluster.
 Then one datanode randomly drops into an infinite loop, as the log shows, and 
 finally all datanodes drop into the infinite loop.
 Every datanode keeps failing verification on just one block. 
 When I check the failing block like this: hadoop fsck / -files -blocks | grep 
 blk_1223474551535936089_4702249, no hdfs file contains the block.
 It seems that the while loop in BlockPoolSliceScanner's scan method drops into 
 an infinite loop.
 BlockPoolSliceScanner: 650
 while (datanode.shouldRun
     && !datanode.blockScanner.blockScannerThread.isInterrupted()
     && datanode.isBPServiceAlive(blockPoolId)) { 
 The log finally printed in method verifyBlock(BlockPoolSliceScanner:453).
 Please excuse my poor English.
 -
 LOG: 
 2014-01-21 18:36:50,582 INFO 
 org.apache.hadoop.hdfs.server.datanode.BlockPoolSliceScanner: Verification 
 failed for 
 BP-1040548460-58.229.158.13-1385606058039:blk_6833233229840997944_4702634 - 
 may be due to race with write
 2014-01-21 18:36:50,582 INFO 
 org.apache.hadoop.hdfs.server.datanode.BlockPoolSliceScanner: Verification 
 failed for 
 BP-1040548460-58.229.158.13-1385606058039:blk_6833233229840997944_4702634 - 
 may be due to race with write
 2014-01-21 18:36:50,582 INFO 
 org.apache.hadoop.hdfs.server.datanode.BlockPoolSliceScanner: Verification 
 failed for 
 BP-1040548460-58.229.158.13-1385606058039:blk_6833233229840997944_4702634 - 
 may be due to race with write



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HDFS-6613) Improve logging in caching classes

2014-07-02 Thread Karthik Kambatla (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6613?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karthik Kambatla updated HDFS-6613:
---

Target Version/s: 2.6.0  (was: 2.5.0)

 Improve logging in caching classes
 --

 Key: HDFS-6613
 URL: https://issues.apache.org/jira/browse/HDFS-6613
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: caching
Affects Versions: 2.4.1
Reporter: Andrew Wang
Assignee: Andrew Wang
Priority: Minor
 Attachments: HDFS-6613.001.patch, HDFS-6613.002.patch


 Some of the log prints in the caching classes are incorrect or confusing and 
 could use some improvement.
 Also adding some trace logging to help debug intermittent TestCacheDirectives 
 failures, since I can't repro them locally.





[jira] [Updated] (HDFS-6151) HDFS should refuse to cache blocks >=2GB

2014-07-02 Thread Karthik Kambatla (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6151?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karthik Kambatla updated HDFS-6151:
---

Target Version/s: 2.6.0  (was: 2.5.0)

 HDFS should refuse to cache blocks >=2GB
 

 Key: HDFS-6151
 URL: https://issues.apache.org/jira/browse/HDFS-6151
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: caching, datanode
Affects Versions: 2.4.0
Reporter: Andrew Wang
Assignee: Andrew Wang

 If you try to cache a block that's >=2GB, the DN will silently fail to cache 
 it since {{MappedByteBuffer}} uses a signed int to represent size. Blocks 
 this large are rare, but we should log or alert the user somehow.
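A hypothetical sketch of the guard this asks for: a {{MappedByteBuffer}} is indexed by a signed 32-bit int, so a single mapping cannot cover 2 GiB or more. The class and method names below are illustrative, not the actual DataNode caching code.

```java
// Hypothetical sketch: refuse (and log) instead of silently failing when a
// block is too large to mmap in a single MappedByteBuffer.
public class MmapGuard {
    // Largest length a single MappedByteBuffer can represent (2 GiB - 1).
    static final long MAX_MMAP_BYTES = Integer.MAX_VALUE;

    /** Returns true when the block length fits a single mapping. */
    public static boolean canCache(long blockLengthBytes) {
        return blockLengthBytes <= MAX_MMAP_BYTES;
    }

    public static void main(String[] args) {
        System.out.println(canCache(1L << 30)); // 1 GiB block: true
        System.out.println(canCache(1L << 31)); // 2 GiB block: false, warn the user
    }
}
```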





[jira] [Updated] (HDFS-6526) Implement HDFS TtlManager

2014-07-02 Thread Karthik Kambatla (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6526?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karthik Kambatla updated HDFS-6526:
---

Target Version/s: 2.6.0  (was: 2.5.0)

 Implement HDFS TtlManager
 -

 Key: HDFS-6526
 URL: https://issues.apache.org/jira/browse/HDFS-6526
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: hdfs-client, namenode
Affects Versions: 2.4.0
Reporter: Zesheng Wu
Assignee: Zesheng Wu
 Attachments: HDFS-6526.1.patch


 This issue is used to track development of HDFS TtlManager, for details see 
 HDFS-6382.





[jira] [Updated] (HDFS-6482) Use block ID-based block layout on datanodes

2014-07-02 Thread Karthik Kambatla (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6482?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karthik Kambatla updated HDFS-6482:
---

Target Version/s: 2.6.0  (was: 2.5.0)

 Use block ID-based block layout on datanodes
 

 Key: HDFS-6482
 URL: https://issues.apache.org/jira/browse/HDFS-6482
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: datanode
Affects Versions: 2.5.0
Reporter: James Thomas
Assignee: James Thomas
 Attachments: HDFS-6482.1.patch, HDFS-6482.2.patch, HDFS-6482.3.patch, 
 HDFS-6482.4.patch, HDFS-6482.5.patch, HDFS-6482.6.patch, HDFS-6482.patch


 Right now blocks are placed into directories that are split into many 
 subdirectories when capacity is reached. Instead we can use a block's ID to 
 determine the path it should go in. This eliminates the need for the LDir 
 data structure that facilitates the splitting of directories when they reach 
 capacity as well as fields in ReplicaInfo that keep track of a replica's 
 location.
 An extension of the work in HDFS-3290.
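A minimal sketch of the idea above: derive a block's on-disk subdirectory purely from its ID, so no occupancy bookkeeping or LDir splitting is needed. The two-level 256x256 layout and the names here are illustrative assumptions, not necessarily the layout the patch adopts.

```java
// Hypothetical sketch of an ID-based block layout: two directory levels
// taken from bits of the block ID, giving a deterministic path with no
// per-directory capacity tracking.
public class IdBasedLayout {

    /** Map a block ID to its storage path (illustrative scheme). */
    public static String subdirPath(long blockId) {
        int d1 = (int) ((blockId >> 16) & 0xFF); // first-level subdir
        int d2 = (int) ((blockId >> 8) & 0xFF);  // second-level subdir
        return String.format("subdir%d/subdir%d/blk_%d", d1, d2, blockId);
    }

    public static void main(String[] args) {
        System.out.println(subdirPath(305419896L));
    }
}
```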





[jira] [Updated] (HDFS-6440) Support more than 2 NameNodes

2014-07-02 Thread Karthik Kambatla (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6440?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karthik Kambatla updated HDFS-6440:
---

Target Version/s: 2.6.0  (was: 2.5.0)

 Support more than 2 NameNodes
 -

 Key: HDFS-6440
 URL: https://issues.apache.org/jira/browse/HDFS-6440
 Project: Hadoop HDFS
  Issue Type: New Feature
  Components: auto-failover, ha, namenode
Affects Versions: 2.4.0
Reporter: Jesse Yates
Assignee: Jesse Yates
 Attachments: hdfs-6440-cdh-4.5-full.patch


 Most of the work is already done to support more than 2 NameNodes (one 
 active, one standby). This would be the last bit to support running multiple 
 _standby_ NameNodes; one of the standbys should be available for fail-over.
 Mostly, this is a matter of updating how we parse configurations, some 
 complexity around managing the checkpointing, and updating a whole lot of 
 tests.





[jira] [Updated] (HDFS-6517) Remove hadoop-metrics2.properties from hdfs project

2014-07-02 Thread Karthik Kambatla (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6517?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karthik Kambatla updated HDFS-6517:
---

Target Version/s: 2.6.0  (was: 2.5.0)

 Remove hadoop-metrics2.properties from hdfs project
 ---

 Key: HDFS-6517
 URL: https://issues.apache.org/jira/browse/HDFS-6517
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.4.0
Reporter: Akira AJISAKA
Assignee: Akira AJISAKA
  Labels: newbie
 Attachments: HDFS-6517.patch


 HDFS-side of HADOOP-9919.
 HADOOP-9919 updated hadoop-metrics2.properties examples to YARN, however, the 
 examples are still old because hadoop-metrics2.properties in HDFS project is 
 actually packaged.





[jira] [Updated] (HDFS-6358) WebHdfs DN's DFSClient should not use a retry policy

2014-07-02 Thread Karthik Kambatla (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6358?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karthik Kambatla updated HDFS-6358:
---

Target Version/s: 2.6.0  (was: 2.5.0)

 WebHdfs DN's DFSClient should not use a retry policy
 

 Key: HDFS-6358
 URL: https://issues.apache.org/jira/browse/HDFS-6358
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: webhdfs
Affects Versions: 2.0.0-alpha, 3.0.0
Reporter: Daryn Sharp
Assignee: Daryn Sharp
Priority: Critical

 DFSClient retries on the DN are useless.  The webhdfs client is going to 
 time out before the retries complete.  The DFSClient will also continue to run 
 until it times out.





[jira] [Updated] (HDFS-6569) OOB message can't be sent to the client when DataNode shuts down for upgrade

2014-07-02 Thread Karthik Kambatla (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6569?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karthik Kambatla updated HDFS-6569:
---

Target Version/s: 2.6.0  (was: 2.5.0)

 OOB message can't be sent to the client when DataNode shuts down for upgrade
 

 Key: HDFS-6569
 URL: https://issues.apache.org/jira/browse/HDFS-6569
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: datanode
Affects Versions: 3.0.0, 2.4.0
Reporter: Brandon Li
Assignee: Kihwal Lee
 Attachments: test-hdfs-6569.patch


 The socket is closed too early, before the OOB message can be sent to the 
 client, which causes the write pipeline to fail.





[jira] [Updated] (HDFS-6221) Webhdfs should recover from dead DNs

2014-07-02 Thread Karthik Kambatla (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6221?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karthik Kambatla updated HDFS-6221:
---

Target Version/s: 2.6.0  (was: 3.0.0, 2.5.0)

 Webhdfs should recover from dead DNs
 

 Key: HDFS-6221
 URL: https://issues.apache.org/jira/browse/HDFS-6221
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: datanode, webhdfs
Affects Versions: 2.0.0-alpha, 3.0.0
Reporter: Daryn Sharp
Assignee: Daryn Sharp

 We've repeatedly observed the jetty acceptor thread silently dying in the 
 DNs.  The webhdfs servlet may also disappear and jetty returns non-json 
 404s.
 One approach to make webhdfs more resilient to bad DNs is dfsclient-like 
 fetching of block locations to directly access the DNs instead of relying on 
 a NN redirect that may repeatedly send the client to the same faulty DN(s).





[jira] [Updated] (HDFS-5995) TestFSEditLogLoader#testValidateEditLogWithCorruptBody gets OutOfMemoryError and dumps heap.

2014-07-02 Thread Karthik Kambatla (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5995?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karthik Kambatla updated HDFS-5995:
---

Target Version/s: 2.6.0  (was: 2.5.0)

 TestFSEditLogLoader#testValidateEditLogWithCorruptBody gets OutOfMemoryError 
 and dumps heap.
 

 Key: HDFS-5995
 URL: https://issues.apache.org/jira/browse/HDFS-5995
 Project: Hadoop HDFS
  Issue Type: Test
  Components: namenode, test
Affects Versions: 2.4.0
Reporter: Chris Nauroth
Assignee: Chris Nauroth
Priority: Minor
 Attachments: HDFS-5995.1.patch


 {{TestFSEditLogLoader#testValidateEditLogWithCorruptBody}} is experiencing 
 {{OutOfMemoryError}} and dumping heap since the merge of HDFS-4685.  This 
 doesn't actually cause the test to fail, because it's a failure test that 
 corrupts an edit log intentionally.  Still, this might cause confusion if 
 someone reviews the build logs and thinks this is a more serious problem.





[jira] [Updated] (HDFS-6310) PBImageXmlWriter should output information about Delegation Tokens

2014-07-02 Thread Karthik Kambatla (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6310?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karthik Kambatla updated HDFS-6310:
---

Target Version/s: 2.6.0  (was: 2.5.0)

 PBImageXmlWriter should output information about Delegation Tokens
 --

 Key: HDFS-6310
 URL: https://issues.apache.org/jira/browse/HDFS-6310
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: tools
Affects Versions: 2.4.0
Reporter: Akira AJISAKA
Assignee: Akira AJISAKA
 Attachments: HDFS-6310.patch


 Separated from HDFS-6293.
 The 2.4.0 pb-fsimage does contain tokens, but OfflineImageViewer with -XML 
 option does not show any tokens.





[jira] [Updated] (HDFS-6333) REST API for fetching directory listing file from NN

2014-07-02 Thread Karthik Kambatla (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6333?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karthik Kambatla updated HDFS-6333:
---

Target Version/s: 2.6.0  (was: 2.5.0)

 REST API for fetching directory listing file from NN
 

 Key: HDFS-6333
 URL: https://issues.apache.org/jira/browse/HDFS-6333
 Project: Hadoop HDFS
  Issue Type: Sub-task
Affects Versions: 2.4.0
Reporter: Andrew Wang

 It'd be convenient if the NameNode supported fetching the directory listing 
 via HTTP.





[jira] [Updated] (HDFS-6128) Implement libhdfs bindings for HDFS ACL APIs.

2014-07-02 Thread Karthik Kambatla (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6128?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karthik Kambatla updated HDFS-6128:
---

Target Version/s: 2.6.0  (was: 2.5.0)

 Implement libhdfs bindings for HDFS ACL APIs.
 -

 Key: HDFS-6128
 URL: https://issues.apache.org/jira/browse/HDFS-6128
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: libhdfs
Affects Versions: 2.4.0
Reporter: Chris Nauroth







[jira] [Updated] (HDFS-6376) Distcp data between two HA clusters requires another configuration

2014-07-02 Thread Karthik Kambatla (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6376?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karthik Kambatla updated HDFS-6376:
---

Target Version/s: 2.6.0  (was: 3.0.0, 2.5.0)

 Distcp data between two HA clusters requires another configuration
 --

 Key: HDFS-6376
 URL: https://issues.apache.org/jira/browse/HDFS-6376
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: datanode, federation, hdfs-client
Affects Versions: 2.3.0, 2.4.0
 Environment: Hadoop 2.3.0
Reporter: Dave Marion
 Fix For: 3.0.0

 Attachments: HDFS-6376-2.patch, HDFS-6376-3-branch-2.4.patch, 
 HDFS-6376-4-branch-2.4.patch, HDFS-6376-5-trunk.patch, 
 HDFS-6376-6-trunk.patch, HDFS-6376-branch-2.4.patch, HDFS-6376-patch-1.patch


 User has to create a third set of configuration files for distcp when 
 transferring data between two HA clusters.
 Consider the scenario in [1]. You cannot put all of the required properties 
 in core-site.xml and hdfs-site.xml for the client to resolve the location of 
 both active namenodes. If you do, then the datanodes from cluster A may join 
 cluster B. I cannot find a configuration option that tells the datanodes to 
 federate blocks for only one of the clusters in the configuration.
 [1] 
 http://mail-archives.apache.org/mod_mbox/hadoop-user/201404.mbox/%3CBAY172-W2133964E0C283968C161DD1520%40phx.gbl%3E





[jira] [Updated] (HDFS-6525) FsShell supports HDFS TTL

2014-07-02 Thread Karthik Kambatla (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6525?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karthik Kambatla updated HDFS-6525:
---

Target Version/s: 2.6.0  (was: 2.5.0)

 FsShell supports HDFS TTL
 -

 Key: HDFS-6525
 URL: https://issues.apache.org/jira/browse/HDFS-6525
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: hdfs-client, tools
Affects Versions: 2.4.0
Reporter: Zesheng Wu
Assignee: Zesheng Wu
 Attachments: HDFS-6525.1.patch, HDFS-6525.2.patch


 This issue is used to track development of supporting  HDFS TTL for FsShell, 
 for details see HDFS-6382.





[jira] [Updated] (HDFS-6292) Display HDFS per user and per group usage on the webUI

2014-07-02 Thread Karthik Kambatla (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6292?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karthik Kambatla updated HDFS-6292:
---

Target Version/s: 2.6.0  (was: 2.5.0)

 Display HDFS per user and per group usage on the webUI
 --

 Key: HDFS-6292
 URL: https://issues.apache.org/jira/browse/HDFS-6292
 Project: Hadoop HDFS
  Issue Type: Improvement
Affects Versions: 2.4.0
Reporter: Ravi Prakash
Assignee: Ravi Prakash
 Attachments: HDFS-6292.patch, HDFS-6292.png


 It would be nice to show HDFS usage per user and per group on a web ui.





[jira] [Updated] (HDFS-6212) Deprecate the BackupNode and CheckpointNode from branch-2

2014-07-02 Thread Karthik Kambatla (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6212?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karthik Kambatla updated HDFS-6212:
---

Target Version/s: 2.6.0  (was: 2.5.0)

 Deprecate the BackupNode and CheckpointNode from branch-2
 -

 Key: HDFS-6212
 URL: https://issues.apache.org/jira/browse/HDFS-6212
 Project: Hadoop HDFS
  Issue Type: Improvement
Affects Versions: 2.3.0
Reporter: Jing Zhao

 As per discussion in HDFS-4114, this jira tries to deprecate BackupNode from 
 branch-2 and change the hadoop start/stop scripts to print deprecation 
 warning.





[jira] [Updated] (HDFS-6220) Webhdfs should differentiate remote exceptions

2014-07-02 Thread Karthik Kambatla (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6220?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karthik Kambatla updated HDFS-6220:
---

Target Version/s: 2.6.0  (was: 3.0.0, 2.5.0)

 Webhdfs should differentiate remote exceptions
 --

 Key: HDFS-6220
 URL: https://issues.apache.org/jira/browse/HDFS-6220
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: webhdfs
Affects Versions: 2.0.0-alpha, 3.0.0
Reporter: Daryn Sharp
Assignee: Daryn Sharp

 Webhdfs's validateResponse should use a distinct exception for json decoded 
 exceptions than it does for servlet exceptions.  Ex. A servlet generated 404 
 with a json payload should be distinguishable from a http proxy or jetty 
 generated 404.





[jira] [Updated] (HDFS-6624) TestBlockTokenWithDFS#testEnd2End fails sometimes

2014-07-02 Thread Karthik Kambatla (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6624?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karthik Kambatla updated HDFS-6624:
---

Target Version/s: 2.6.0  (was: 3.0.0, 2.5.0)

 TestBlockTokenWithDFS#testEnd2End fails sometimes
 -

 Key: HDFS-6624
 URL: https://issues.apache.org/jira/browse/HDFS-6624
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 3.0.0
Reporter: Andrew Wang
 Attachments: PreCommit-HDFS-Build #7274 test - testEnd2End 
 [Jenkins].html


 On a recent test-patch.sh run, saw this error which did not repro locally:
 {noformat}
 Error Message
 Rebalancing expected avg utilization to become 0.2, but on datanode 
 127.0.0.1:57889 it remains at 0.08 after more than 4 msec.
 Stacktrace
 java.util.concurrent.TimeoutException: Rebalancing expected avg utilization 
 to become 0.2, but on datanode 127.0.0.1:57889 it remains at 0.08 after more 
 than 4 msec.
   at 
 org.apache.hadoop.hdfs.server.balancer.TestBalancer.waitForBalancer(TestBalancer.java:284)
   at 
 org.apache.hadoop.hdfs.server.balancer.TestBalancer.runBalancer(TestBalancer.java:382)
   at 
 org.apache.hadoop.hdfs.server.balancer.TestBalancer.doTest(TestBalancer.java:359)
   at 
 org.apache.hadoop.hdfs.server.balancer.TestBalancer.oneNodeTest(TestBalancer.java:403)
   at 
 org.apache.hadoop.hdfs.server.balancer.TestBalancer.integrationTest(TestBalancer.java:416)
   at 
 org.apache.hadoop.hdfs.server.blockmanagement.TestBlockTokenWithDFS.testEnd2End(TestBlockTokenWithDFS.java:588)
 {noformat}





[jira] [Updated] (HDFS-2856) Fix block protocol so that Datanodes don't require root or jsvc

2014-07-02 Thread Karthik Kambatla (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-2856?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karthik Kambatla updated HDFS-2856:
---

Target Version/s: 2.6.0  (was: 3.0.0, 2.5.0)

 Fix block protocol so that Datanodes don't require root or jsvc
 ---

 Key: HDFS-2856
 URL: https://issues.apache.org/jira/browse/HDFS-2856
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: datanode, security
Affects Versions: 3.0.0, 2.4.0
Reporter: Owen O'Malley
Assignee: Chris Nauroth
 Attachments: Datanode-Security-Design.pdf, 
 Datanode-Security-Design.pdf, Datanode-Security-Design.pdf, 
 HDFS-2856-Test-Plan-1.pdf, HDFS-2856.1.patch, HDFS-2856.2.patch, 
 HDFS-2856.3.patch, HDFS-2856.4.patch, HDFS-2856.5.patch, HDFS-2856.6.patch, 
 HDFS-2856.prototype.patch


 Since we send the block tokens unencrypted to the datanode, we currently 
 start the datanode as root using jsvc and get a secure (< 1024) port.
 If we have the datanode generate a nonce and send it on the connection, and 
 the client sends an HMAC of the nonce back instead of the block token, it won't 
 reveal any secrets. Thus, we wouldn't require a secure port and would not 
 require root or jsvc.
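A hypothetical sketch of the challenge-response described here: the datanode issues a random nonce, the client answers with an HMAC of the nonce keyed by the shared block-token secret, and the datanode recomputes the HMAC to verify. The secret never crosses the wire, so no privileged port is needed. Key handling and wire format are simplified assumptions, not the design in the attached patches.

```java
import java.nio.charset.StandardCharsets;
import java.security.SecureRandom;
import java.util.Arrays;
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;

// Hypothetical nonce/HMAC handshake sketch for the proposal above.
public class NonceHandshake {

    /** Compute HMAC-SHA256 of the nonce under the shared secret. */
    public static byte[] hmac(byte[] secret, byte[] nonce) throws Exception {
        Mac mac = Mac.getInstance("HmacSHA256");
        mac.init(new SecretKeySpec(secret, "HmacSHA256"));
        return mac.doFinal(nonce);
    }

    public static void main(String[] args) throws Exception {
        byte[] sharedSecret = "block-token-secret".getBytes(StandardCharsets.UTF_8);

        // Datanode side: generate and send a fresh nonce.
        byte[] nonce = new byte[16];
        new SecureRandom().nextBytes(nonce);

        // Client side: prove knowledge of the secret without revealing it.
        byte[] clientProof = hmac(sharedSecret, nonce);

        // Datanode side: recompute and compare.
        boolean accepted = Arrays.equals(clientProof, hmac(sharedSecret, nonce));
        System.out.println("handshake accepted: " + accepted); // true
    }
}
```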





[jira] [Updated] (HDFS-6332) Support protobuf-based directory listing output suitable for OIV

2014-07-02 Thread Karthik Kambatla (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6332?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karthik Kambatla updated HDFS-6332:
---

Target Version/s: 2.6.0  (was: 2.5.0)

 Support protobuf-based directory listing output suitable for OIV
 

 Key: HDFS-6332
 URL: https://issues.apache.org/jira/browse/HDFS-6332
 Project: Hadoop HDFS
  Issue Type: Sub-task
Affects Versions: 2.4.0
Reporter: Andrew Wang

 The initial listing output may be in JSON, but protobuf would be a better 
 serialization format down the road.





[jira] [Commented] (HDFS-5957) Provide support for different mmap cache retention policies in ShortCircuitCache.

2014-02-21 Thread Karthik Kambatla (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5957?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13908812#comment-13908812
 ] 

Karthik Kambatla commented on HDFS-5957:


Created YARN-1747.

 Provide support for different mmap cache retention policies in 
 ShortCircuitCache.
 -

 Key: HDFS-5957
 URL: https://issues.apache.org/jira/browse/HDFS-5957
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: hdfs-client
Affects Versions: 2.3.0
Reporter: Chris Nauroth

 Currently, the {{ShortCircuitCache}} retains {{mmap}} regions for reuse by 
 multiple reads of the same block or by multiple threads.  The eventual 
 {{munmap}} executes on a background thread after an expiration period.  Some 
 client usage patterns would prefer strict bounds on this cache and 
 deterministic cleanup by calling {{munmap}}.  This issue proposes additional 
 support for different caching policies that better fit these usage patterns.
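One alternative retention policy sketched as code: a strictly bounded LRU cache that calls {{munmap}} deterministically on eviction, rather than on a background thread after an expiration period. The class name and the Unmapper hook are illustrative assumptions, not the actual ShortCircuitCache API.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Hypothetical bounded mmap cache: evicting the least-recently-used entry
// triggers an immediate, deterministic unmap callback.
public class BoundedMmapCache<K, V> {
    interface Unmapper<V> { void unmap(V region); }

    private final LinkedHashMap<K, V> lru;

    public BoundedMmapCache(final int maxEntries, final Unmapper<V> unmapper) {
        // Access-ordered map; evicts (and unmaps) the LRU entry on overflow.
        this.lru = new LinkedHashMap<K, V>(16, 0.75f, true) {
            @Override
            protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
                if (size() > maxEntries) {
                    unmapper.unmap(eldest.getValue()); // deterministic munmap
                    return true;
                }
                return false;
            }
        };
    }

    public void put(K key, V region) { lru.put(key, region); }
    public V get(K key) { return lru.get(key); }
    public int size() { return lru.size(); }
}
```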



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Assigned] (HDFS-2261) AOP unit tests are not getting compiled or run

2014-01-08 Thread Karthik Kambatla (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-2261?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karthik Kambatla reassigned HDFS-2261:
--

Assignee: (was: Karthik Kambatla)

 AOP unit tests are not getting compiled or run 
 ---

 Key: HDFS-2261
 URL: https://issues.apache.org/jira/browse/HDFS-2261
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: test
Affects Versions: 2.0.0-alpha, 2.0.4-alpha
 Environment: 
 https://builds.apache.org/job/Hadoop-Hdfs-trunk-Commit/834/console
 -compile-fault-inject ant target 
Reporter: Giridharan Kesavan
Priority: Minor
 Attachments: hdfs-2261.patch


 The tests in src/test/aop are not getting compiled or run.





[jira] [Commented] (HDFS-5715) Use Snapshot ID to indicate the corresponding Snapshot for a FileDiff/DirectoryDiff

2014-01-07 Thread Karthik Kambatla (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5715?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13864768#comment-13864768
 ] 

Karthik Kambatla commented on HDFS-5715:


Looks like this breaks the build: mvn clean install -DskipTests fails after this patch. 
[~jingzhao] - can you look into it? 

 Use Snapshot ID to indicate the corresponding Snapshot for a 
 FileDiff/DirectoryDiff
 ---

 Key: HDFS-5715
 URL: https://issues.apache.org/jira/browse/HDFS-5715
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: namenode
Reporter: Jing Zhao
Assignee: Jing Zhao
 Fix For: 3.0.0

 Attachments: HDFS-5715.000.patch, HDFS-5715.001.patch, 
 HDFS-5715.002.patch


 Currently FileDiff and DirectoryDiff both contain a snapshot object reference 
 to indicate its associated snapshot. Instead, we can simply record the 
 corresponding snapshot id there. This can simplify some logic and allow us to 
 use a byte array to represent the snapshot feature (HDFS-5714).





[jira] [Commented] (HDFS-5715) Use Snapshot ID to indicate the corresponding Snapshot for a FileDiff/DirectoryDiff

2014-01-07 Thread Karthik Kambatla (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5715?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13864853#comment-13864853
 ] 

Karthik Kambatla commented on HDFS-5715:


Interesting! maven - 3.0.3, jdk - 1.7.0_40

{noformat}
[INFO] -
[ERROR] COMPILATION ERROR : 
[INFO] -
[ERROR] 
/home/kasha/code/hadoop-trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/offlineEditsViewer/XmlEditsVisitor.java:[32,48]
 OutputFormat is internal proprietary API and may be removed in a future release
[ERROR] 
/home/kasha/code/hadoop-trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/offlineEditsViewer/XmlEditsVisitor.java:[33,48]
 XMLSerializer is internal proprietary API and may be removed in a future 
release
[ERROR] 
/home/kasha/code/hadoop-trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/snapshot/AbstractINodeDiff.java:[134,53]
 error: snapshotId has private access in AbstractINodeDiff
[ERROR] 
/home/kasha/code/hadoop-trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/offlineEditsViewer/XmlEditsVisitor.java:[55,4]
 OutputFormat is internal proprietary API and may be removed in a future release
[ERROR] 
/home/kasha/code/hadoop-trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/offlineEditsViewer/XmlEditsVisitor.java:[55,33]
 OutputFormat is internal proprietary API and may be removed in a future release
[ERROR] 
/home/kasha/code/hadoop-trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/offlineEditsViewer/XmlEditsVisitor.java:[59,4]
 XMLSerializer is internal proprietary API and may be removed in a future 
release
[ERROR] 
/home/kasha/code/hadoop-trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/offlineEditsViewer/XmlEditsVisitor.java:[59,35]
 XMLSerializer is internal proprietary API and may be removed in a future 
release
[INFO] 7 errors 
{noformat}

 Use Snapshot ID to indicate the corresponding Snapshot for a 
 FileDiff/DirectoryDiff
 ---

 Key: HDFS-5715
 URL: https://issues.apache.org/jira/browse/HDFS-5715
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: namenode
Reporter: Jing Zhao
Assignee: Jing Zhao
 Fix For: 3.0.0

 Attachments: HDFS-5715.000.patch, HDFS-5715.001.patch, 
 HDFS-5715.002.patch


 Currently FileDiff and DirectoryDiff both contain a snapshot object reference 
 to indicate its associated snapshot. Instead, we can simply record the 
 corresponding snapshot id there. This can simplify some logic and allow us to 
 use a byte array to represent the snapshot feature (HDFS-5714).





[jira] [Commented] (HDFS-5715) Use Snapshot ID to indicate the corresponding Snapshot for a FileDiff/DirectoryDiff

2014-01-07 Thread Karthik Kambatla (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5715?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13864856#comment-13864856
 ] 

Karthik Kambatla commented on HDFS-5715:


Just verified it builds fine with JDK6. To make sure this JIRA is the cause, I 
dropped the commit and it builds fine against JDK7 as well.

 Use Snapshot ID to indicate the corresponding Snapshot for a 
 FileDiff/DirectoryDiff
 ---

 Key: HDFS-5715
 URL: https://issues.apache.org/jira/browse/HDFS-5715
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: namenode
Reporter: Jing Zhao
Assignee: Jing Zhao
 Fix For: 3.0.0

 Attachments: HDFS-5715.000.patch, HDFS-5715.001.patch, 
 HDFS-5715.002.patch


 Currently FileDiff and DirectoryDiff both contain a snapshot object reference 
 to indicate its associated snapshot. Instead, we can simply record the 
 corresponding snapshot id there. This can simplify some logic and allow us to 
 use a byte array to represent the snapshot feature (HDFS-5714).





[jira] [Commented] (HDFS-2261) AOP unit tests are not getting compiled or run

2013-10-18 Thread Karthik Kambatla (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-2261?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13799481#comment-13799481
 ] 

Karthik Kambatla commented on HDFS-2261:


Just checking again if we have a decision here. I think we should remove these 
tests or have a concrete plan on enabling them. Otherwise, one can always pull 
them in from previous branches.

 AOP unit tests are not getting compiled or run 
 ---

 Key: HDFS-2261
 URL: https://issues.apache.org/jira/browse/HDFS-2261
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: test
Affects Versions: 2.0.0-alpha, 2.0.4-alpha
 Environment: 
 https://builds.apache.org/job/Hadoop-Hdfs-trunk-Commit/834/console
 -compile-fault-inject ant target 
Reporter: Giridharan Kesavan
Assignee: Karthik Kambatla
Priority: Minor
 Attachments: hdfs-2261.patch


 The tests in src/test/aop are not getting compiled or run.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HDFS-5082) Move the version info of zookeeper test dependency to hadoop-project/pom

2013-08-09 Thread Karthik Kambatla (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5082?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karthik Kambatla updated HDFS-5082:
---

Attachment: hdfs-5082-1.patch

Straight-forward patch.

 Move the version info of zookeeper test dependency to hadoop-project/pom
 

 Key: HDFS-5082
 URL: https://issues.apache.org/jira/browse/HDFS-5082
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: build
Affects Versions: 2.1.0-beta
Reporter: Karthik Kambatla
Assignee: Karthik Kambatla
Priority: Minor
 Attachments: hdfs-5082-1.patch


 As different projects (HDFS, YARN) depend on zookeeper, it is better to keep 
 the version information in hadoop-project/pom.xml.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HDFS-5082) Move the version info of zookeeper test dependency to hadoop-project/pom

2013-08-09 Thread Karthik Kambatla (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5082?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karthik Kambatla updated HDFS-5082:
---

Status: Patch Available  (was: Open)

 Move the version info of zookeeper test dependency to hadoop-project/pom
 

 Key: HDFS-5082
 URL: https://issues.apache.org/jira/browse/HDFS-5082
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: build
Affects Versions: 2.1.0-beta
Reporter: Karthik Kambatla
Assignee: Karthik Kambatla
Priority: Minor
 Attachments: hdfs-5082-1.patch


 As different projects (HDFS, YARN) depend on zookeeper, it is better to keep 
 the version information in hadoop-project/pom.xml.



[jira] [Commented] (HDFS-5082) Move the version info of zookeeper test dependency to hadoop-project/pom

2013-08-09 Thread Karthik Kambatla (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5082?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13735495#comment-13735495
 ] 

Karthik Kambatla commented on HDFS-5082:


Didn't include any tests as this is just a pom change.

 Move the version info of zookeeper test dependency to hadoop-project/pom
 

 Key: HDFS-5082
 URL: https://issues.apache.org/jira/browse/HDFS-5082
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: build
Affects Versions: 2.1.0-beta
Reporter: Karthik Kambatla
Assignee: Karthik Kambatla
Priority: Minor
 Attachments: hdfs-5082-1.patch


 As different projects (HDFS, YARN) depend on zookeeper, it is better to keep 
 the version information in hadoop-project/pom.xml.


