[jira] [Resolved] (HDFS-16706) ViewFS doc points to wrong mount table name

2022-09-21 Thread Prabhu Joseph (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16706?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Prabhu Joseph resolved HDFS-16706.
--
Fix Version/s: 3.4.0
   Resolution: Fixed

> ViewFS doc points to wrong mount table name
> ---
>
> Key: HDFS-16706
> URL: https://issues.apache.org/jira/browse/HDFS-16706
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.4.0
>Reporter: Prabhu Joseph
>Assignee: Samrat Deb
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 3.4.0
>
>
> ViewFS Doc - 
> https://hadoop.apache.org/docs/stable/hadoop-project-dist/hadoop-hdfs/ViewFs.html
> specifies the view name as *clusterX* whereas the mount table name is 
> *ClusterX*. This leads to the error "ls: ViewFs: Cannot initialize: Empty 
> Mount table in config for viewfs://clusterX/"
> {code}
> <property>
>   <name>fs.defaultFS</name>
>   <value>viewfs://clusterX</value>
> </property>
>
> <property>
>   <name>fs.viewfs.mounttable.ClusterX.link./data</name>
>   <value>hdfs://nn1-clusterx.example.com:8020/data</value>
> </property>
> {code}
> The mount table name also has to be the same as the view name.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDFS-16706) ViewFS doc points to wrong mount table name

2022-08-23 Thread Prabhu Joseph (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16706?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Prabhu Joseph reassigned HDFS-16706:


Assignee: Samrat Deb

> ViewFS doc points to wrong mount table name
> ---
>
> Key: HDFS-16706
> URL: https://issues.apache.org/jira/browse/HDFS-16706
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.4.0
>Reporter: Prabhu Joseph
>Assignee: Samrat Deb
>Priority: Minor
>
> ViewFS Doc - 
> https://hadoop.apache.org/docs/stable/hadoop-project-dist/hadoop-hdfs/ViewFs.html
> specifies the view name as *clusterX* whereas the mount table name is 
> *ClusterX*. This leads to the error "ls: ViewFs: Cannot initialize: Empty 
> Mount table in config for viewfs://clusterX/"
> {code}
> <property>
>   <name>fs.defaultFS</name>
>   <value>viewfs://clusterX</value>
> </property>
>
> <property>
>   <name>fs.viewfs.mounttable.ClusterX.link./data</name>
>   <value>hdfs://nn1-clusterx.example.com:8020/data</value>
> </property>
> {code}
> The mount table name also has to be the same as the view name.






[jira] [Created] (HDFS-16706) ViewFS doc points to wrong mount table name

2022-07-31 Thread Prabhu Joseph (Jira)
Prabhu Joseph created HDFS-16706:


 Summary: ViewFS doc points to wrong mount table name
 Key: HDFS-16706
 URL: https://issues.apache.org/jira/browse/HDFS-16706
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 3.4.0
Reporter: Prabhu Joseph


ViewFS Doc - 
https://hadoop.apache.org/docs/stable/hadoop-project-dist/hadoop-hdfs/ViewFs.html
specifies the view name as *clusterX* whereas the mount table name is *ClusterX*. 
This leads to the error "ls: ViewFs: Cannot initialize: Empty Mount table in 
config for viewfs://clusterX/"

{code}
<property>
  <name>fs.defaultFS</name>
  <value>viewfs://clusterX</value>
</property>

<property>
  <name>fs.viewfs.mounttable.ClusterX.link./data</name>
  <value>hdfs://nn1-clusterx.example.com:8020/data</value>
</property>
{code}

The mount table name also has to be the same as the view name.
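A configuration where the two names agree would look like the following (same illustrative hosts as in the snippet above; the only change is that the cluster name in the mount table key now matches the view name):

```xml
<!-- core-site.xml: the authority in fs.defaultFS (clusterX) and the
     cluster name in the fs.viewfs.mounttable.* key must be spelled
     identically. -->
<property>
  <name>fs.defaultFS</name>
  <value>viewfs://clusterX</value>
</property>

<property>
  <name>fs.viewfs.mounttable.clusterX.link./data</name>
  <value>hdfs://nn1-clusterx.example.com:8020/data</value>
</property>
```

With this, `hdfs dfs -ls viewfs://clusterX/data` resolves through the mount table instead of failing with the "Empty Mount table" error.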






[jira] [Updated] (HDFS-16676) DatanodeAdminManager$Monitor reports a node as invalid continuously

2022-07-21 Thread Prabhu Joseph (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16676?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Prabhu Joseph updated HDFS-16676:
-
Summary: DatanodeAdminManager$Monitor reports a node as invalid 
continuously  (was: DatanodeAdminManager$Monitor reports a node as invalid 
forever)

> DatanodeAdminManager$Monitor reports a node as invalid continuously
> ---
>
> Key: HDFS-16676
> URL: https://issues.apache.org/jira/browse/HDFS-16676
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 3.2.1
>Reporter: Prabhu Joseph
>Assignee: Prabhu Joseph
>Priority: Major
>
> DatanodeAdminManager$Monitor reports a node as invalid forever
> {code}
> 2022-07-21 06:54:38,562 WARN 
> org.apache.hadoop.hdfs.server.blockmanagement.DatanodeAdminManager 
> (DatanodeAdminMonitor-0): DatanodeAdminMonitor caught exception when 
> processing node 1.2.3.4:9866.
> java.lang.IllegalStateException: Node 1.2.3.4:9866 is in an invalid state! 
> Invalid state: In Service 0 blocks are on this dn.
> at 
> com.google.common.base.Preconditions.checkState(Preconditions.java:172)
> at 
> org.apache.hadoop.hdfs.server.blockmanagement.DatanodeAdminManager$Monitor.check(DatanodeAdminManager.java:601)
> at 
> org.apache.hadoop.hdfs.server.blockmanagement.DatanodeAdminManager$Monitor.run(DatanodeAdminManager.java:504)
> at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
> at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
> at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
> at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> at java.lang.Thread.run(Thread.java:750)
> {code}
> A node goes into an invalid state when stopDecommission sets the node to 
> IN_SERVICE but misses removing it from the pendingNodes queue (HDFS-16675). 
> This is corrected only when the user triggers startDecommission. Until then 
> there is no need to keep the invalid-state node in the queue, since 
> startDecommission will add it back anyway.






[jira] [Updated] (HDFS-16676) DatanodeAdminManager$Monitor reports a node as invalid continuously

2022-07-21 Thread Prabhu Joseph (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16676?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Prabhu Joseph updated HDFS-16676:
-
Description: 
DatanodeAdminManager$Monitor reports a node as invalid continuously

{code}
2022-07-21 06:54:38,562 WARN 
org.apache.hadoop.hdfs.server.blockmanagement.DatanodeAdminManager 
(DatanodeAdminMonitor-0): DatanodeAdminMonitor caught exception when processing 
node 1.2.3.4:9866.
java.lang.IllegalStateException: Node 1.2.3.4:9866 is in an invalid state! 
Invalid state: In Service 0 blocks are on this dn.
at 
com.google.common.base.Preconditions.checkState(Preconditions.java:172)
at 
org.apache.hadoop.hdfs.server.blockmanagement.DatanodeAdminManager$Monitor.check(DatanodeAdminManager.java:601)
at 
org.apache.hadoop.hdfs.server.blockmanagement.DatanodeAdminManager$Monitor.run(DatanodeAdminManager.java:504)
at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
at 
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
at 
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:750)
{code}

A node goes into an invalid state when stopDecommission sets the node to 
IN_SERVICE but misses removing it from the pendingNodes queue (HDFS-16675). 
This is corrected only when the user triggers startDecommission. Until then 
there is no need to keep the invalid-state node in the queue, since 
startDecommission will add it back anyway.

  was:
DatanodeAdminManager$Monitor reports a node as invalid forever

{code}
2022-07-21 06:54:38,562 WARN 
org.apache.hadoop.hdfs.server.blockmanagement.DatanodeAdminManager 
(DatanodeAdminMonitor-0): DatanodeAdminMonitor caught exception when processing 
node 1.2.3.4:9866.
java.lang.IllegalStateException: Node 1.2.3.4:9866 is in an invalid state! 
Invalid state: In Service 0 blocks are on this dn.
at 
com.google.common.base.Preconditions.checkState(Preconditions.java:172)
at 
org.apache.hadoop.hdfs.server.blockmanagement.DatanodeAdminManager$Monitor.check(DatanodeAdminManager.java:601)
at 
org.apache.hadoop.hdfs.server.blockmanagement.DatanodeAdminManager$Monitor.run(DatanodeAdminManager.java:504)
at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
at 
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
at 
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:750)
{code}

A node goes into an invalid state when stopDecommission sets the node to 
IN_SERVICE but misses removing it from the pendingNodes queue (HDFS-16675). 
This is corrected only when the user triggers startDecommission. Until then 
there is no need to keep the invalid-state node in the queue, since 
startDecommission will add it back anyway.


> DatanodeAdminManager$Monitor reports a node as invalid continuously
> ---
>
> Key: HDFS-16676
> URL: https://issues.apache.org/jira/browse/HDFS-16676
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 3.2.1
>Reporter: Prabhu Joseph
>Assignee: Prabhu Joseph
>Priority: Major
>
> DatanodeAdminManager$Monitor reports a node as invalid continuously
> {code}
> 2022-07-21 06:54:38,562 WARN 
> org.apache.hadoop.hdfs.server.blockmanagement.DatanodeAdminManager 
> (DatanodeAdminMonitor-0): DatanodeAdminMonitor caught exception when 
> processing node 1.2.3.4:9866.
> java.lang.IllegalStateException: Node 1.2.3.4:9866 is in an invalid state! 
> Invalid state: In Service 0 blocks are on this dn.
> at 
> com.google.common.base.Preconditions.checkState(Preconditions.java:172)
> at 
> org.apache.hadoop.hdfs.server.blockmanagement.DatanodeAdminManager$Monitor.check(DatanodeAdminManager.java:601)
> at 
> org.apache.hadoop.hdfs.server.blockmanagement.DatanodeAdminManager$Monitor.run(DatanodeAdminManager.java:504)
> at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
> at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)

[jira] [Created] (HDFS-16676) DatanodeAdminManager$Monitor reports a node as invalid forever

2022-07-21 Thread Prabhu Joseph (Jira)
Prabhu Joseph created HDFS-16676:


 Summary: DatanodeAdminManager$Monitor reports a node as invalid 
forever
 Key: HDFS-16676
 URL: https://issues.apache.org/jira/browse/HDFS-16676
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Affects Versions: 3.2.1
Reporter: Prabhu Joseph
Assignee: Prabhu Joseph


DatanodeAdminManager$Monitor reports a node as invalid forever

{code}
2022-07-21 06:54:38,562 WARN 
org.apache.hadoop.hdfs.server.blockmanagement.DatanodeAdminManager 
(DatanodeAdminMonitor-0): DatanodeAdminMonitor caught exception when processing 
node 1.2.3.4:9866.
java.lang.IllegalStateException: Node 1.2.3.4:9866 is in an invalid state! 
Invalid state: In Service 0 blocks are on this dn.
at 
com.google.common.base.Preconditions.checkState(Preconditions.java:172)
at 
org.apache.hadoop.hdfs.server.blockmanagement.DatanodeAdminManager$Monitor.check(DatanodeAdminManager.java:601)
at 
org.apache.hadoop.hdfs.server.blockmanagement.DatanodeAdminManager$Monitor.run(DatanodeAdminManager.java:504)
at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
at 
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
at 
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:750)
{code}

A node goes into an invalid state when stopDecommission sets the node to 
IN_SERVICE but misses removing it from the pendingNodes queue (HDFS-16675). 
This is corrected only when the user triggers startDecommission. Until then 
there is no need to keep the invalid-state node in the queue, since 
startDecommission will add it back anyway.
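The behavior described above, dropping the invalid-state node from the queue instead of re-reporting it on every monitor pass, can be sketched as follows. The class, enum, and field names here are simplified stand-ins for the real DatanodeAdminManager internals, not the actual patch:

```java
import java.util.ArrayDeque;
import java.util.Queue;

// Simplified stand-in for DatanodeAdminManager$Monitor. A node found in
// IN_SERVICE state (the invalid case from HDFS-16675) is dropped from
// pendingNodes rather than re-queued: a later startDecommission would
// re-enqueue it anyway, so keeping it only repeats the warning forever.
class AdminMonitorSketch {
  enum AdminState { IN_SERVICE, DECOMMISSION_IN_PROGRESS }

  static final class Node {
    final String name;
    final AdminState state;
    Node(String name, AdminState state) { this.name = name; this.state = state; }
  }

  final Queue<Node> pendingNodes = new ArrayDeque<>();

  /** One monitor pass: each pending node is examined exactly once. */
  void run() {
    int pending = pendingNodes.size();
    for (int i = 0; i < pending; i++) {
      Node node = pendingNodes.poll();
      if (node.state == AdminState.IN_SERVICE) {
        // Invalid state: warn once and drop instead of re-queueing.
        System.out.println("Dropping invalid-state node " + node.name);
        continue;
      }
      pendingNodes.offer(node); // still decommissioning: check next pass
    }
  }
}
```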






[jira] [Created] (HDFS-16675) DatanodeAdminManager.stopDecommission fails with ConcurrentModificationException

2022-07-21 Thread Prabhu Joseph (Jira)
Prabhu Joseph created HDFS-16675:


 Summary: DatanodeAdminManager.stopDecommission fails with 
ConcurrentModificationException
 Key: HDFS-16675
 URL: https://issues.apache.org/jira/browse/HDFS-16675
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Affects Versions: 3.2.1
Reporter: Prabhu Joseph
Assignee: Prabhu Joseph


DatanodeAdminManager.stopDecommission intermittently fails with 
ConcurrentModificationException. 

{code}
java.util.ConcurrentModificationException: Tree has been modified outside of 
iterator
at 
org.apache.hadoop.hdfs.util.FoldedTreeSet$TreeSetIterator.checkForModification(FoldedTreeSet.java:311)
at 
org.apache.hadoop.hdfs.util.FoldedTreeSet$TreeSetIterator.hasNext(FoldedTreeSet.java:256)
at 
org.apache.hadoop.hdfs.util.FoldedTreeSet$TreeSetIterator.next(FoldedTreeSet.java:262)
at 
java.util.Collections$UnmodifiableCollection$1.next(Collections.java:1044)
at 
org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.processExtraRedundancyBlocksOnInService(BlockManager.java:4300)
at 
org.apache.hadoop.hdfs.server.blockmanagement.DatanodeAdminManager.stopDecommission(DatanodeAdminManager.java:241)

{code}

This is intermittently seen on busy clusters with autoscaling enabled. It 
leaves a node with state "In Service" in the pendingNodes queue:

{code}
2022-07-21 06:54:38,562 WARN 
org.apache.hadoop.hdfs.server.blockmanagement.DatanodeAdminManager 
(DatanodeAdminMonitor-0): DatanodeAdminMonitor caught exception when processing 
node 1.2.3.4:9866.
java.lang.IllegalStateException: Node 1.2.3.4:9866 is in an invalid state! 
Invalid state: In Service 0 blocks are on this dn.
at 
com.google.common.base.Preconditions.checkState(Preconditions.java:172)
at 
org.apache.hadoop.hdfs.server.blockmanagement.DatanodeAdminManager$Monitor.check(DatanodeAdminManager.java:601)
at 
org.apache.hadoop.hdfs.server.blockmanagement.DatanodeAdminManager$Monitor.run(DatanodeAdminManager.java:504)
at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
at 
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
at 
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:750)
{code}
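The first exception above is the standard fail-fast behavior of java.util iterators: any structural change made outside the active iterator invalidates it. A minimal, self-contained reproduction, using a plain TreeSet in place of the internal FoldedTreeSet:

```java
import java.util.ConcurrentModificationException;
import java.util.Iterator;
import java.util.TreeSet;

// Reproduces the failure mode: structurally modifying a sorted set while
// iterating it makes the iterator throw ConcurrentModificationException
// on its next access.
public class CmeDemo {
  public static boolean triggersCme() {
    TreeSet<Integer> blocks = new TreeSet<>();
    for (int i = 0; i < 10; i++) {
      blocks.add(i);
    }
    try {
      for (Iterator<Integer> it = blocks.iterator(); it.hasNext(); ) {
        if (it.next() == 3) {
          blocks.remove(7); // structural change bypassing the iterator
        }
      }
    } catch (ConcurrentModificationException e) {
      return true; // fail-fast check detected the modification
    }
    return false;
  }
}
```

The usual fix pattern is to either snapshot the collection before iterating or serialize the modification and the iteration under a common lock.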








[jira] [Resolved] (HDFS-16633) Reserved Space For Replicas is not released on some cases

2022-06-24 Thread Prabhu Joseph (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16633?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Prabhu Joseph resolved HDFS-16633.
--
Fix Version/s: 3.4.0
   Resolution: Fixed

> Reserved Space For Replicas is not released on some cases
> -
>
> Key: HDFS-16633
> URL: https://issues.apache.org/jira/browse/HDFS-16633
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 3.1.2
>Reporter: Prabhu Joseph
>Assignee: Ashutosh Gupta
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0
>
>  Time Spent: 1h 20m
>  Remaining Estimate: 0h
>
> Have found that Reserved Space For Replicas is not released in some cases on 
> a customer production cluster. There are a few fixes, such as HDFS-9530 and 
> HDFS-8072, but the issue is still not completely fixed. Debugging the root 
> cause would take a lot of time since it is a customer production cluster.
> But there is an easier way to fix the issue completely: release any remaining 
> reserved space of the Replica from the Volume. DataXceiver#writeBlock will 
> finally call BlockReceiver#close, which will check whether the ReplicaInfo 
> has any remaining reserved space and, if so, release it from the Volume.
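The proposed close-time safety net can be sketched as below. The Volume and Replica interfaces and their method names are simplified assumptions standing in for the real HDFS types (FsVolumeImpl, ReplicaInPipeline), not the actual patch:

```java
// Sketch of the close-time cleanup: whatever reserved space the normal
// write path failed to release is returned to the volume when the
// receiver is closed, so reservations cannot leak indefinitely.
class BlockReceiverSketch implements AutoCloseable {
  interface Volume { void releaseReservedSpace(long bytes); }
  interface Replica {
    long getBytesReserved();
    void setBytesReserved(long bytes);
  }

  private final Volume volume;
  private final Replica replica;

  BlockReceiverSketch(Volume volume, Replica replica) {
    this.volume = volume;
    this.replica = replica;
  }

  @Override
  public void close() {
    long remaining = replica.getBytesReserved();
    if (remaining > 0) {
      volume.releaseReservedSpace(remaining); // return leaked reservation
      replica.setBytesReserved(0);
    }
  }
}
```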






[jira] [Updated] (HDFS-16633) Reserved Space For Replicas is not released on some cases

2022-06-24 Thread Prabhu Joseph (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16633?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Prabhu Joseph updated HDFS-16633:
-
Description: 
Have found that Reserved Space For Replicas is not released in some cases on a 
customer production cluster. There are a few fixes, such as HDFS-9530 and 
HDFS-8072, but the issue is still not completely fixed. Debugging the root 
cause would take a lot of time since it is a customer production cluster.

But there is an easier way to fix the issue completely: release any remaining 
reserved space of the Replica from the Volume. DataXceiver#writeBlock will 
finally call BlockReceiver#close, which will check whether the ReplicaInfo has 
any remaining reserved space and, if so, release it from the Volume.



  was:
Have found that Reserved Space For Replicas is not released in some cases on a 
customer production cluster. There are a few fixes, such as HDFS-9530 and 
HDFS-8072, but the issue is still not completely fixed. Debugging the root 
cause would take a lot of time since it is a customer production cluster.

But there is an easier way to fix the issue completely: release any remaining 
reserved space from BlockReceiver#close, which is ultimately invoked by 
DataXceiver#writeBlock.




> Reserved Space For Replicas is not released on some cases
> -
>
> Key: HDFS-16633
> URL: https://issues.apache.org/jira/browse/HDFS-16633
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 3.1.2
>Reporter: Prabhu Joseph
>Assignee: Ashutosh Gupta
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> Have found that Reserved Space For Replicas is not released in some cases on 
> a customer production cluster. There are a few fixes, such as HDFS-9530 and 
> HDFS-8072, but the issue is still not completely fixed. Debugging the root 
> cause would take a lot of time since it is a customer production cluster.
> But there is an easier way to fix the issue completely: release any remaining 
> reserved space of the Replica from the Volume. DataXceiver#writeBlock will 
> finally call BlockReceiver#close, which will check whether the ReplicaInfo 
> has any remaining reserved space and, if so, release it from the Volume.






[jira] [Resolved] (HDFS-16616) Remove the use of Sets#newHashSet and Sets#newTreeSet

2022-06-21 Thread Prabhu Joseph (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16616?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Prabhu Joseph resolved HDFS-16616.
--
Fix Version/s: 3.4.0
   Resolution: Fixed

Thanks [~samrat007] for the patch. Have committed it to trunk.

> Remove the use of Sets#newHashSet and Sets#newTreeSet 
> --
>
> Key: HDFS-16616
> URL: https://issues.apache.org/jira/browse/HDFS-16616
> Project: Hadoop HDFS
>  Issue Type: Task
>Reporter: Samrat Deb
>Assignee: Samrat Deb
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0
>
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> As part of removing Guava dependencies, HADOOP-17115, HADOOP-17721, 
> HADOOP-17722 and HADOOP-17720 were fixed.
> Currently the code calls util functions to create HashSet and TreeSet in the 
> repo. These function calls don't add much, as they internally just call 
> new HashSet<> / new TreeSet<> from java.util.
> This task is to clean up all the redundant function calls that create sets.
> Before the move to Java 8, sets were created using Guava functions and APIs; 
> now that this dependency is gone, the util code in Hadoop looks like:
> 1. 
> public static <E> TreeSet<E> newTreeSet() { return new TreeSet<E>(); }
> 2. 
> public static <E> HashSet<E> newHashSet() { return new HashSet<E>(); }
> These methods don't do anything beyond adding an extra layer of function 
> call; please refer to the task 
> https://issues.apache.org/jira/browse/HADOOP-17726
> Can anyone review whether this ticket adds some value to the code? 
> Looking forward to some input/thoughts. If it does not add any value, we can 
> close it and not move forward with the changes!
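Concretely, the cleanup replaces each wrapper call site with the equivalent JDK constructor, one for one. An illustrative before/after, not the actual patch diff:

```java
import java.util.HashSet;
import java.util.Set;
import java.util.TreeSet;

// The wrappers quoted above only forward to the JDK constructors, so
// each call site can be rewritten directly.
public class SetsCleanupDemo {
  // Before: Set<String> names = Sets.newHashSet();
  public static Set<String> hashSetAfter() {
    return new HashSet<>();
  }

  // Before: Set<String> sorted = Sets.newTreeSet();
  public static Set<String> treeSetAfter() {
    return new TreeSet<>();
  }
}
```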






[jira] [Resolved] (HDFS-16635) Fix javadoc error in Java 11

2022-06-19 Thread Prabhu Joseph (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16635?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Prabhu Joseph resolved HDFS-16635.
--
Fix Version/s: 3.4.0
   Resolution: Fixed

Thanks [~aajisaka] for reporting the issue and [~groot] for the patch.

> Fix javadoc error in Java 11
> 
>
> Key: HDFS-16635
> URL: https://issues.apache.org/jira/browse/HDFS-16635
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: build, documentation
>Reporter: Akira Ajisaka
>Assignee: Ashutosh Gupta
>Priority: Major
>  Labels: newbie, pull-request-available
> Fix For: 3.4.0
>
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> Javadoc build in Java 11 fails.
> {noformat}
> [ERROR] 
> /home/jenkins/jenkins-home/workspace/hadoop-multibranch_PR-4410/ubuntu-focal/src/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/startupprogress/package-info.java:20:
>  error: reference not found
> [ERROR]  * This package provides a mechanism for tracking {@link NameNode} 
> startup
> {noformat}
> https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4410/2/artifact/out/patch-javadoc-hadoop-hdfs-project_hadoop-hdfs-jdkPrivateBuild-11.0.15+10-Ubuntu-0ubuntu0.20.04.1.txt
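A typical fix for this class of javadoc error is to make the `{@link}` target resolvable from the package-info file, for instance by fully qualifying it. The snippet below is an illustrative sketch based on the error message, not necessarily the exact committed change:

```java
// package-info.java has no import statements, so a bare {@link NameNode}
// cannot be resolved under the stricter Java 11 javadoc; fully qualify
// the reference instead.
/**
 * This package provides a mechanism for tracking
 * {@link org.apache.hadoop.hdfs.server.namenode.NameNode} startup progress.
 */
package org.apache.hadoop.hdfs.server.namenode.startupprogress;
```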






[jira] [Updated] (HDFS-16633) Reserved Space For Replicas is not released on some cases

2022-06-16 Thread Prabhu Joseph (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16633?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Prabhu Joseph updated HDFS-16633:
-
Description: 
Have found that Reserved Space For Replicas is not released in some cases on a 
customer production cluster. There are a few fixes, such as HDFS-9530 and 
HDFS-8072, but the issue is still not completely fixed. Debugging the root 
cause would take a lot of time since it is a customer production cluster.

But there is an easier way to fix the issue completely: release any remaining 
reserved space from BlockReceiver#close, which is ultimately invoked by 
DataXceiver#writeBlock.



  was:
Have found that Reserved Space For Replicas is not released on a customer 
production cluster. There are a few fixes, such as HDFS-9530 and HDFS-8072, 
but the issue is still not completely fixed. Debugging the root cause would 
take a lot of time since it is a customer production cluster.

But there is an easier way to fix the issue completely: release any remaining 
reserved space from BlockReceiver#close, which is ultimately invoked by 
DataXceiver#writeBlock.




> Reserved Space For Replicas is not released on some cases
> -
>
> Key: HDFS-16633
> URL: https://issues.apache.org/jira/browse/HDFS-16633
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 3.1.2
>Reporter: Prabhu Joseph
>Assignee: Prabhu Joseph
>Priority: Major
>
> Have found that Reserved Space For Replicas is not released in some cases on 
> a customer production cluster. There are a few fixes, such as HDFS-9530 and 
> HDFS-8072, but the issue is still not completely fixed. Debugging the root 
> cause would take a lot of time since it is a customer production cluster.
> But there is an easier way to fix the issue completely: release any remaining 
> reserved space from BlockReceiver#close, which is ultimately invoked by 
> DataXceiver#writeBlock.






[jira] [Updated] (HDFS-16633) Reserved Space For Replicas is not released on some cases

2022-06-16 Thread Prabhu Joseph (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16633?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Prabhu Joseph updated HDFS-16633:
-
Summary: Reserved Space For Replicas is not released on some cases  (was: 
Reserved Space For Replicas is not released )

> Reserved Space For Replicas is not released on some cases
> -
>
> Key: HDFS-16633
> URL: https://issues.apache.org/jira/browse/HDFS-16633
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 3.1.2
>Reporter: Prabhu Joseph
>Assignee: Prabhu Joseph
>Priority: Major
>
> Have found that Reserved Space For Replicas is not released on a customer 
> production cluster. There are a few fixes, such as HDFS-9530 and HDFS-8072, 
> but the issue is still not completely fixed. Debugging the root cause would 
> take a lot of time since it is a customer production cluster.
> But there is an easier way to fix the issue completely: release any remaining 
> reserved space from BlockReceiver#close, which is ultimately invoked by 
> DataXceiver#writeBlock.






[jira] [Created] (HDFS-16633) Reserved Space For Replicas is not released

2022-06-16 Thread Prabhu Joseph (Jira)
Prabhu Joseph created HDFS-16633:


 Summary: Reserved Space For Replicas is not released 
 Key: HDFS-16633
 URL: https://issues.apache.org/jira/browse/HDFS-16633
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: hdfs
Affects Versions: 3.1.2
Reporter: Prabhu Joseph
Assignee: Prabhu Joseph


Have found that Reserved Space For Replicas is not released on a customer 
production cluster. There are a few fixes, such as HDFS-9530 and HDFS-8072, 
but the issue is still not completely fixed. Debugging the root cause would 
take a lot of time since it is a customer production cluster.

But there is an easier way to fix the issue completely: release any remaining 
reserved space from BlockReceiver#close, which is ultimately invoked by 
DataXceiver#writeBlock.








[jira] [Commented] (HDFS-14845) Ignore AuthenticationFilterInitializer for HttpFSServerWebServer and honor hadoop.http.authentication configs

2019-09-24 Thread Prabhu Joseph (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14845?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16937438#comment-16937438
 ] 

Prabhu Joseph commented on HDFS-14845:
--

Thanks [~aajisaka].

> Ignore AuthenticationFilterInitializer for HttpFSServerWebServer and honor 
> hadoop.http.authentication configs
> -
>
> Key: HDFS-14845
> URL: https://issues.apache.org/jira/browse/HDFS-14845
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: httpfs
>Affects Versions: 3.3.0
> Environment: Kerberos and ZKDelegationTokenSecretManager enabled in 
> HttpFS
>Reporter: Akira Ajisaka
>Assignee: Prabhu Joseph
>Priority: Critical
> Fix For: 3.3.0
>
> Attachments: HDFS-14845-001.patch, HDFS-14845-002.patch, 
> HDFS-14845-003.patch, HDFS-14845-004.patch, HDFS-14845-005.patch, 
> HDFS-14845-006.patch
>
>
> We are facing a "Request is a replay (34)" error when accessing HDFS via 
> httpfs on trunk.
> {noformat}
> % curl -i --negotiate -u : "https://<host>:4443/webhdfs/v1/?op=liststatus"
> HTTP/1.1 401 Authentication required
> Date: Mon, 09 Sep 2019 06:00:04 GMT
> Date: Mon, 09 Sep 2019 06:00:04 GMT
> Pragma: no-cache
> X-Content-Type-Options: nosniff
> X-XSS-Protection: 1; mode=block
> WWW-Authenticate: Negotiate
> Set-Cookie: hadoop.auth=; Path=/; Secure; HttpOnly
> Cache-Control: must-revalidate,no-cache,no-store
> Content-Type: text/html;charset=iso-8859-1
> Content-Length: 271
> HTTP/1.1 403 GSSException: Failure unspecified at GSS-API level (Mechanism 
> level: Request is a replay (34))
> Date: Mon, 09 Sep 2019 06:00:04 GMT
> Date: Mon, 09 Sep 2019 06:00:04 GMT
> Pragma: no-cache
> X-Content-Type-Options: nosniff
> X-XSS-Protection: 1; mode=block
> (snip)
> Set-Cookie: hadoop.auth=; Path=/; Secure; HttpOnly
> Cache-Control: must-revalidate,no-cache,no-store
> Content-Type: text/html;charset=iso-8859-1
> Content-Length: 413
> <html>
> <head>
> <title>Error 403 GSSException: Failure unspecified at GSS-API level 
> (Mechanism level: Request is a replay (34))</title>
> </head>
> <body>
> <h2>HTTP ERROR 403</h2>
> <p>Problem accessing /webhdfs/v1/. Reason:
> <pre>GSSException: Failure unspecified at GSS-API level (Mechanism level: 
> Request is a replay (34))</pre></p>
> </body>
> </html>
> {noformat}






[jira] [Commented] (HDFS-14845) Request is a replay (34) error in httpfs

2019-09-24 Thread Prabhu Joseph (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14845?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16936477#comment-16936477
 ] 

Prabhu Joseph commented on HDFS-14845:
--

[~aajisaka] Have updated the javadoc in [^HDFS-14845-006.patch]. Thanks.

> Request is a replay (34) error in httpfs
> 
>
> Key: HDFS-14845
> URL: https://issues.apache.org/jira/browse/HDFS-14845
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: httpfs
>Affects Versions: 3.3.0
> Environment: Kerberos and ZKDelegationTokenSecretManager enabled in 
> HttpFS
>Reporter: Akira Ajisaka
>Assignee: Prabhu Joseph
>Priority: Critical
> Attachments: HDFS-14845-001.patch, HDFS-14845-002.patch, 
> HDFS-14845-003.patch, HDFS-14845-004.patch, HDFS-14845-005.patch, 
> HDFS-14845-006.patch
>
>
> We are facing a "Request is a replay (34)" error when accessing HDFS via 
> httpfs on trunk.
> {noformat}
> % curl -i --negotiate -u : "https://:4443/webhdfs/v1/?op=liststatus"
> HTTP/1.1 401 Authentication required
> Date: Mon, 09 Sep 2019 06:00:04 GMT
> Date: Mon, 09 Sep 2019 06:00:04 GMT
> Pragma: no-cache
> X-Content-Type-Options: nosniff
> X-XSS-Protection: 1; mode=block
> WWW-Authenticate: Negotiate
> Set-Cookie: hadoop.auth=; Path=/; Secure; HttpOnly
> Cache-Control: must-revalidate,no-cache,no-store
> Content-Type: text/html;charset=iso-8859-1
> Content-Length: 271
> HTTP/1.1 403 GSSException: Failure unspecified at GSS-API level (Mechanism 
> level: Request is a replay (34))
> Date: Mon, 09 Sep 2019 06:00:04 GMT
> Date: Mon, 09 Sep 2019 06:00:04 GMT
> Pragma: no-cache
> X-Content-Type-Options: nosniff
> X-XSS-Protection: 1; mode=block
> (snip)
> Set-Cookie: hadoop.auth=; Path=/; Secure; HttpOnly
> Cache-Control: must-revalidate,no-cache,no-store
> Content-Type: text/html;charset=iso-8859-1
> Content-Length: 413
> 
> 
> 
> Error 403 GSSException: Failure unspecified at GSS-API level 
> (Mechanism level: Request is a replay (34))
> 
> HTTP ERROR 403
> Problem accessing /webhdfs/v1/. Reason:
> GSSException: Failure unspecified at GSS-API level (Mechanism level: 
> Request is a replay (34))
> 
> 
> {noformat}






[jira] [Updated] (HDFS-14845) Request is a replay (34) error in httpfs

2019-09-24 Thread Prabhu Joseph (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14845?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Prabhu Joseph updated HDFS-14845:
-
Attachment: HDFS-14845-006.patch




[jira] [Commented] (HDFS-14845) Request is a replay (34) error in httpfs

2019-09-23 Thread Prabhu Joseph (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14845?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16936449#comment-16936449
 ] 

Prabhu Joseph commented on HDFS-14845:
--

Thanks [~eyang] and [~aajisaka] for reviewing. Have added deprecated properties 
in DeprecatedProperties.md in  [^HDFS-14845-005.patch] . 




[jira] [Updated] (HDFS-14845) Request is a replay (34) error in httpfs

2019-09-23 Thread Prabhu Joseph (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14845?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Prabhu Joseph updated HDFS-14845:
-
Attachment: HDFS-14845-005.patch




[jira] [Commented] (HDFS-14845) Request is a replay (34) error in httpfs

2019-09-23 Thread Prabhu Joseph (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14845?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16935629#comment-16935629
 ] 

Prabhu Joseph commented on HDFS-14845:
--

[~eyang] 1. Have replaced the httpfs.authentication.* configs with 
hadoop.http.authentication.* in httpfs-default.xml instead of removing them, so 
that the above issue won't happen and the change stays backward compatible. 
Users can specify either httpfs.authentication.* or hadoop.http.authentication.* 
in httpfs-site.xml; if neither is specified, the defaults will be taken from 
httpfs-default.xml.

2. Have added httpfs.authentication.* as deprecated in httpfs-default.xml.
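
For illustration, a minimal httpfs-site.xml sketch using the 
hadoop.http.authentication.* prefix described above (the principal and keytab 
values are placeholders, not taken from this thread):

{code:xml}
<configuration>
  <!-- After the patch, either the httpfs.authentication.* or the
       hadoop.http.authentication.* prefix can be used here; the
       deprecated httpfs.authentication.* form is mapped to this one.
       Values below are placeholders for a typical Kerberos setup. -->
  <property>
    <name>hadoop.http.authentication.type</name>
    <value>kerberos</value>
  </property>
  <property>
    <name>hadoop.http.authentication.kerberos.principal</name>
    <value>HTTP/_HOST@EXAMPLE.COM</value>
  </property>
  <property>
    <name>hadoop.http.authentication.kerberos.keytab</name>
    <value>/etc/security/keytabs/httpfs.keytab</value>
  </property>
</configuration>
{code}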






[jira] [Updated] (HDFS-14845) Request is a replay (34) error in httpfs

2019-09-23 Thread Prabhu Joseph (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14845?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Prabhu Joseph updated HDFS-14845:
-
Attachment: HDFS-14845-004.patch




[jira] [Commented] (HDFS-14845) Request is a replay (34) error in httpfs

2019-09-20 Thread Prabhu Joseph (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14845?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16934376#comment-16934376
 ] 

Prabhu Joseph commented on HDFS-14845:
--

Thanks [~eyang] for detailed review.
{quote}All HttpFS unit tests are passing on my system. Which test requires a 
separate ticket?
{quote}
I was trying to add a new class HttpFSAuthenticationFilterInitializer which 
adds the HttpFSAuthenticationFilter instead of hardcoding it in web.xml (the 
authFilter tag). With these new changes, tests were failing. Have ignored those 
changes for now.
{quote}I think some logic to map the configuration are missing in patch 002.
{quote}
The httpfs.authentication.* configs are getting populated from 
httpfs-default.xml in HttpFSServerWebApp#init() -> Server#initConfig()
{code:java}
// Server#initConfig() always loads <name>-default.xml from the classpath,
// so httpfs-default.xml is read regardless of what httpfs-site.xml defines.
String defaultConfig = name + "-default.xml";
ClassLoader classLoader = Thread.currentThread().getContextClassLoader();
InputStream inputStream = classLoader.getResourceAsStream(defaultConfig);
{code}
So the httpfs.authentication.* configs are always present even if they are not 
defined in httpfs-site.xml. Have tried to ignore httpfs-default.xml, but some 
default configs are required for startup.

One way it works is by removing the httpfs.authentication.* set of configs from 
httpfs-default.xml, but then the user has to ensure these configs are defined in 
httpfs-site.xml using either the httpfs.authentication or 
hadoop.http.authentication prefix. Have attached [^HDFS-14845-003.patch] with 
these changes. Please let me know if this approach is fine.

 




[jira] [Updated] (HDFS-14845) Request is a replay (34) error in httpfs

2019-09-20 Thread Prabhu Joseph (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14845?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Prabhu Joseph updated HDFS-14845:
-
Attachment: HDFS-14845-003.patch




[jira] [Commented] (HDFS-14845) Request is a replay (34) error in httpfs

2019-09-17 Thread Prabhu Joseph (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14845?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16931365#comment-16931365
 ] 

Prabhu Joseph commented on HDFS-14845:
--

[~eyang] [~aajisaka] Thanks for the review comments.

[~eyang] 1. Have tried with a new filter initializer 
{{HttpFSAuthenticationFilterInitializer}} which adds the filter 
{{HttpFSAuthenticationFilter}} and initializes the filter configs, and this 
works fine. But most of the test cases related to {{HttpFSServerWebServer}} 
(e.g. {{TestHttpFSServer}}) require more changes, as they do not use 
{{HttpServer2}} and so the filter initializers are not called; instead they use 
a test Jetty server with {{HttpFSServerWebApp}}, which fails because the filter 
won't have any configs.

Please let me know if we can handle this in a separate improvement Jira.

2. Have changed the {{HttpFSAuthenticationFilter$getConfiguration}} to honor 
the {{hadoop.http.authentication}} configs which will be overridden by 
{{httpfs.authentication}} configs.

Have attached  [^HDFS-14845-002.patch] with changes to ignore 
{{AuthenticationFilterInitializer}} and (2).




[jira] [Updated] (HDFS-14845) Request is a replay (34) error in httpfs

2019-09-17 Thread Prabhu Joseph (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14845?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Prabhu Joseph updated HDFS-14845:
-
Attachment: HDFS-14845-002.patch




[jira] [Commented] (HDFS-14845) Request is a replay (34) error in httpfs

2019-09-14 Thread Prabhu Joseph (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14845?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16929738#comment-16929738
 ] 

Prabhu Joseph commented on HDFS-14845:
--

[~eyang] HttpFSAuthenticationFilter supports JWTRedirectAuthenticationHandler 
by setting it in httpfs.authentication.type (similar to simple or kerberos).

AuthenticationFilterInitializer or ProxyUserAuthenticationFilterInitializer can 
be made the default for HttpFS, but that would miss support for the 
httpfs.authentication-specific configs and the WebHdfs Delegation Token 
handling provided by HttpFSAuthenticationFilter.
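
For example, a hypothetical httpfs-site.xml snippet selecting the JWT handler 
(the fully qualified class name is the one shipped in hadoop-auth; the point 
being illustrated is only that the handler plugs into httpfs.authentication.type):

{code:xml}
<!-- Select a custom AuthenticationHandler for HttpFS, the same way
     "simple" or "kerberos" would be configured. -->
<property>
  <name>httpfs.authentication.type</name>
  <value>org.apache.hadoop.security.authentication.server.JWTRedirectAuthenticationHandler</value>
</property>
{code}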




[jira] [Comment Edited] (HDFS-14845) Request is a replay (34) error in httpfs

2019-09-13 Thread Prabhu Joseph (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14845?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16929120#comment-16929120
 ] 

Prabhu Joseph edited comment on HDFS-14845 at 9/13/19 11:25 AM:


Thanks for sharing the details. That helped to reproduce the issue. The issue 
happens because {{AuthenticationFilter}} (the Kerberos handler) is called twice:

1. HttpFSAuthenticationFilter (httpfs.authentication.type=kerberos)
2. AuthenticationFilterInitializer added in hadoop.http.filter.initializers 
(hadoop.http.authentication.type=kerberos)

The default {{HttpFSAuthenticationFilter}} is itself a combination of Kerberos + 
Delegation + Proxy support, and hence {{AuthenticationFilterInitializer}} 
and {{ProxyUserAuthenticationFilterInitializer}} are not required for 
{{HttpFSServerWebServer}}. 

The workaround is to remove them from hadoop.http.filter.initializers in 
core-site.xml.
Have prepared a patch to do the same when this is configured.


cc [~eyang].
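
As a sketch of the workaround, the offending core-site.xml entry looks like the 
following and should be removed (or the authentication initializers dropped from 
its value) for the HttpFS daemon; the value shown is one example configuration, 
not necessarily the exact one from the reporter's cluster:

{code:xml}
<!-- Remove this entry (or the authentication filter initializers from its
     value) in the core-site.xml used by the HttpFS daemon, since
     HttpFSAuthenticationFilter already provides Kerberos handling. -->
<property>
  <name>hadoop.http.filter.initializers</name>
  <value>org.apache.hadoop.security.AuthenticationFilterInitializer</value>
</property>
{code}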








[jira] [Updated] (HDFS-14845) Request is a replay (34) error in httpfs

2019-09-13 Thread Prabhu Joseph (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14845?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Prabhu Joseph updated HDFS-14845:
-
Attachment: HDFS-14845-001.patch




[jira] [Commented] (HDFS-14845) Request is a replay (34) error in httpfs

2019-09-13 Thread Prabhu Joseph (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14845?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16929120#comment-16929120
 ] 

Prabhu Joseph commented on HDFS-14845:
--

Thanks for sharing the details. That helped reproduce the issue. It happens 
because the {{AuthenticationFilter}} (Kerberos handler) is invoked twice:

1. HttpFSAuthenticationFilter (httpfs.authentication.type=kerberos)
2. AuthenticationFilterInitializer added via hadoop.http.filter.initializers 
(hadoop.http.authentication.type=kerberos)

The default {{HttpFSAuthenticationFilter}} is itself a combination of Kerberos + 
Delegation + Proxy support, so the {{AuthenticationFilterInitializer}} and 
{{ProxyUserAuthenticationFilterInitializer}} are not required for 
{{HttpFSServerWebServer}}.

The workaround is to remove them from hadoop.http.filter.initializers in 
core-site.xml.
Have prepared a patch that does the same when they are configured.
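
For illustration, a core-site.xml sketch of the workaround (the empty value 
shown is only an example; keep whatever other initializers your deployment 
actually needs, and note that only AuthenticationFilterInitializer's package is 
shown since it is the well-known hadoop-common class):

{code}
<!-- core-site.xml on the HttpFS host: drop the duplicate Kerberos filter
     initializers so that only HttpFSAuthenticationFilter handles auth -->
<property>
  <name>hadoop.http.filter.initializers</name>
  <!-- remove org.apache.hadoop.security.AuthenticationFilterInitializer
       (and ProxyUserAuthenticationFilterInitializer) from this list -->
  <value></value>
</property>
{code}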







[jira] [Commented] (HDFS-14845) Request is a replay (34) error in httpfs

2019-09-11 Thread Prabhu Joseph (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14845?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16927443#comment-16927443
 ] 

Prabhu Joseph commented on HDFS-14845:
--

[~aajisaka] I am trying to reproduce this in a test cluster. Could you share the 
details below, which will help with debugging? Thanks.

1. The value of hadoop.http.filter.initializers set in core-site.xml
2. The HDFS log file, filtered with {{grep -i filter}}




[jira] [Commented] (HDFS-14845) Request is a replay (34) error in httpfs

2019-09-11 Thread Prabhu Joseph (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14845?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16927405#comment-16927405
 ] 

Prabhu Joseph commented on HDFS-14845:
--

[~aajisaka] I will check this issue and update. Thanks.




[jira] [Resolved] (HDFS-14525) JspHelper ignores hadoop.http.authentication.type

2019-06-02 Thread Prabhu Joseph (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14525?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Prabhu Joseph resolved HDFS-14525.
--
Resolution: Not A Problem

> JspHelper ignores hadoop.http.authentication.type
> -
>
> Key: HDFS-14525
> URL: https://issues.apache.org/jira/browse/HDFS-14525
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: webhdfs
>Affects Versions: 3.2.0
>Reporter: Prabhu Joseph
>Priority: Major
>
> On a secure cluster with hadoop.http.authentication.type set to simple and 
> hadoop.http.authentication.simple.anonymous.allowed set to true, the WebHDFS 
> REST API fails when user.name is not set. It runs fine if user.name=ambari-qa 
> is set.
> {code}
> [knox@pjosephdocker-1 ~]$ curl -sS -L -w '%{http_code}' -X GET -d '' -H 
> 'Content-Length: 0' --negotiate -u : 
> 'http://pjosephdocker-1.openstacklocal:50070/webhdfs/v1/services/sync/yarn-ats?op=GETFILESTATUS'
> {"RemoteException":{"exception":"SecurityException","javaClassName":"java.lang.SecurityException","message":"Failed
>  to obtain user group information: java.io.IOException: Security enabled but 
> user not authenticated by filter"}}403[knox@pjosephdocker-1 ~]$ 
> {code}
> JspHelper#getUGI checks UserGroupInformation.isSecurityEnabled() instead of 
> conf.get("hadoop.http.authentication.type").equals("kerberos") to check whether 
> HTTP is secure, which causes the issue.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14525) JspHelper ignores hadoop.http.authentication.type

2019-05-31 Thread Prabhu Joseph (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14525?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16853253#comment-16853253
 ] 

Prabhu Joseph commented on HDFS-14525:
--

Thanks, [~eyang], for the clarifications. I will close this Jira as working as 
designed.




[jira] [Commented] (HDFS-14525) JspHelper ignores hadoop.http.authentication.type

2019-05-31 Thread Prabhu Joseph (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14525?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16852854#comment-16852854
 ] 

Prabhu Joseph commented on HDFS-14525:
--

[~eyang] Thanks for the inputs.

As per my understanding, hadoop.security.authentication is specific to RPC 
authentication, whereas hadoop.http.authentication.type is specific to HTTP 
authentication. We have simple and kerberos authentication for RPC, whereas HTTP 
authentication can be simple, kerberos, LDAP (LdapAuthenticationHandler), WebSSO 
(JWTRedirectAuthenticationHandler, used by Knox), or custom. Customers use LDAP 
or WebSSO for HTTP and Kerberos for RPC, so I think we need a separate config 
that lets HTTP authentication behave differently from RPC.

And as per the testing, fixing the two places below should be fine:

1. HttpServer2 does Kerberos initSpnego when hadoop.security.authentication is 
kerberos, which can cause HTTP (Pseudo) requests to fail with "Authentication 
Required". Will fix this in HADOOP-16314.

2. JspHelper fails anonymous user requests even though the HTTP request was 
successfully authenticated by PseudoAuthenticationHandler.

Please let me know how to proceed further. Thanks.












[jira] [Commented] (HDFS-14525) JspHelper ignores hadoop.http.authentication.type

2019-05-30 Thread Prabhu Joseph (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14525?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16852267#comment-16852267
 ] 

Prabhu Joseph commented on HDFS-14525:
--

bq. You actually want a secure cluster to accept anonymous users?  Why do you 
even have security enabled?

Then why do we have a separate config, 
hadoop.http.authentication.simple.anonymous.allowed, which adds complexity in 
testing all the scenarios when making new changes?

Yes, the proposed change is wrong. I think the below will work:

{code}
UserGroupInformation.isSecurityEnabled() && 
!conf.get("hadoop.http.authentication.type").equals("simple")
{code}
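
As a quick sanity check of that condition, here is a small Python model of the 
intended truth table (the function name is illustrative, not a Hadoop API):

```python
# Model of the proposed JspHelper#getUGI guard: require an authenticated HTTP
# user only when Hadoop security is enabled AND the HTTP auth type is not
# "simple" (so anonymous simple-auth requests on a secure cluster pass through).
def http_requires_authenticated_user(security_enabled: bool,
                                     http_auth_type: str) -> bool:
    return security_enabled and http_auth_type != "simple"

# Secure cluster, kerberos HTTP auth: anonymous access must be rejected.
print(http_requires_authenticated_user(True, "kerberos"))   # True
# Secure cluster, simple HTTP auth: anonymous access may be allowed.
print(http_requires_authenticated_user(True, "simple"))     # False
# Insecure cluster: no authenticated HTTP user required either way.
print(http_requires_authenticated_user(False, "simple"))    # False
```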








[jira] [Created] (HDFS-14525) JspHelper ignores hadoop.http.authentication.type

2019-05-30 Thread Prabhu Joseph (JIRA)
Prabhu Joseph created HDFS-14525:


 Summary: JspHelper ignores hadoop.http.authentication.type
 Key: HDFS-14525
 URL: https://issues.apache.org/jira/browse/HDFS-14525
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: webhdfs
Affects Versions: 3.2.0
Reporter: Prabhu Joseph


On a secure cluster with hadoop.http.authentication.type set to simple and 
hadoop.http.authentication.simple.anonymous.allowed set to true, the WebHDFS 
REST API fails when user.name is not set. It runs fine if user.name=ambari-qa 
is set.

{code}

[knox@pjosephdocker-1 ~]$ curl -sS -L -w '%{http_code}' -X GET -d '' -H 
'Content-Length: 0' --negotiate -u : 
'http://pjosephdocker-1.openstacklocal:50070/webhdfs/v1/services/sync/yarn-ats?op=GETFILESTATUS'
{"RemoteException":{"exception":"SecurityException","javaClassName":"java.lang.SecurityException","message":"Failed
 to obtain user group information: java.io.IOException: Security enabled but 
user not authenticated by filter"}}403[knox@pjosephdocker-1 ~]$ 

{code}

JspHelper#getUGI checks UserGroupInformation.isSecurityEnabled() instead of 
conf.get("hadoop.http.authentication.type").equals("kerberos") to check whether 
HTTP is secure, which causes the issue.







