[jira] [Updated] (HADOOP-13346) DelegationTokenAuthenticationHandler writes via closed writer

2016-07-06 Thread Gregory Chanan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13346?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gregory Chanan updated HADOOP-13346:

Attachment: HADOOP-13346.patch

Here's a patch with a test.  The test fails without the code changes and 
demonstrates that a Writer that checks whether it has been closed will cause a 
problem with the current code.

There are a number of ways to fix this; I simply added a configuration step 
that checks whether any of jackson's JsonFactory features are configured and, 
in turn, configures them for the DelegationTokenAuthenticationHandler.  This 
lets you specify arbitrary jackson features, which is nice, but it carries some 
risk if feature names change.  Alternatively, you could define a static mapping 
from features to auth handler configs, or allow the user to pass in a 
JsonFactory to init (although the code currently casts everything to String, so 
that would have to change).
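
For illustration, a minimal sketch of the underlying setting involved (assuming 
Jackson 2; the attached patch wires this through handler configuration rather 
than hard-coding it, and one relevant knob is the AUTO_CLOSE_TARGET generator 
feature):

{code}
import com.fasterxml.jackson.core.JsonGenerator;
import com.fasterxml.jackson.databind.ObjectMapper;

ObjectMapper jsonMapper = new ObjectMapper();
// Keep writeValue() from closing the target Writer when it finishes.
jsonMapper.configure(JsonGenerator.Feature.AUTO_CLOSE_TARGET, false);
jsonMapper.writeValue(writer, map);
writer.write(ENTER);  // the writer is still open at this point
{code}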

> DelegationTokenAuthenticationHandler writes via closed writer
> -
>
> Key: HADOOP-13346
> URL: https://issues.apache.org/jira/browse/HADOOP-13346
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Reporter: Gregory Chanan
>Priority: Minor
> Attachments: HADOOP-13346.patch
>
>
> By default, jackson's ObjectMapper closes the writer after writing, so in the 
> following code
> {code}
> ObjectMapper jsonMapper = new ObjectMapper();
> jsonMapper.writeValue(writer, map);
> writer.write(ENTER);
> {code}
> (https://github.com/apache/hadoop/blob/8a9d293dd60f6d51e1574e412d40746ba8175fe1/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/token/delegation/web/DelegationTokenAuthenticationHandler.java#L280-L282)
> writer.write actually writes to a closed stream.  This doesn't seem to cause 
> a problem with the version of jetty that hadoop uses (which just ignores 
> closes), but causes problems on later versions of jetty -- I hit this on 
> jetty 8 while implementing SOLR-9200.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)




[jira] [Updated] (HADOOP-13346) DelegationTokenAuthenticationHandler writes via closed writer

2016-07-06 Thread Gregory Chanan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13346?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gregory Chanan updated HADOOP-13346:

Status: Patch Available  (was: Open)

> DelegationTokenAuthenticationHandler writes via closed writer
> -
>
> Key: HADOOP-13346
> URL: https://issues.apache.org/jira/browse/HADOOP-13346
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Reporter: Gregory Chanan
>Priority: Minor
> Attachments: HADOOP-13346.patch
>
>
> By default, jackson's ObjectMapper closes the writer after writing, so in the 
> following code
> {code}
> ObjectMapper jsonMapper = new ObjectMapper();
> jsonMapper.writeValue(writer, map);
> writer.write(ENTER);
> {code}
> (https://github.com/apache/hadoop/blob/8a9d293dd60f6d51e1574e412d40746ba8175fe1/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/token/delegation/web/DelegationTokenAuthenticationHandler.java#L280-L282)
> writer.write actually writes to a closed stream.  This doesn't seem to cause 
> a problem with the version of jetty that hadoop uses (which just ignores 
> closes), but causes problems on later versions of jetty -- I hit this on 
> jetty 8 while implementing SOLR-9200.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)




[jira] [Created] (HADOOP-13346) DelegationTokenAuthenticationHandler writes via closed writer

2016-07-06 Thread Gregory Chanan (JIRA)
Gregory Chanan created HADOOP-13346:
---

 Summary: DelegationTokenAuthenticationHandler writes via closed 
writer
 Key: HADOOP-13346
 URL: https://issues.apache.org/jira/browse/HADOOP-13346
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Reporter: Gregory Chanan
Priority: Minor


By default, jackson's ObjectMapper closes the writer after writing, so in the 
following code
{code}
ObjectMapper jsonMapper = new ObjectMapper();
jsonMapper.writeValue(writer, map);
writer.write(ENTER);
{code}
(https://github.com/apache/hadoop/blob/8a9d293dd60f6d51e1574e412d40746ba8175fe1/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/token/delegation/web/DelegationTokenAuthenticationHandler.java#L280-L282)

writer.write actually writes to a closed stream.  This doesn't seem to cause a 
problem with the version of jetty that hadoop uses (which just ignores closes), 
but causes problems on later versions of jetty -- I hit this on jetty 8 while 
implementing SOLR-9200.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)




[jira] [Commented] (HADOOP-12829) StatisticsDataReferenceCleaner swallows interrupt exceptions

2016-02-23 Thread Gregory Chanan (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12829?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15159460#comment-15159460
 ] 

Gregory Chanan commented on HADOOP-12829:
-

bq. One small thing: in the future, please use different names for different 
patches (many people give them numbers).

Sure thing.  Every project is different (solr prefers patches to keep the same 
name, unless a patch is substantially different, e.g. for a different branch).

> StatisticsDataReferenceCleaner swallows interrupt exceptions
> 
>
> Key: HADOOP-12829
> URL: https://issues.apache.org/jira/browse/HADOOP-12829
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs
>Affects Versions: 2.8.0, 2.7.3, 2.6.4
>Reporter: Gregory Chanan
>Assignee: Gregory Chanan
>Priority: Minor
> Attachments: HADOOP-12829.patch, HADOOP-12829.patch
>
>
> The StatisticsDataReferenceCleaner, implemented in HADOOP-12107, swallows 
> interrupt exceptions.  Over in Solr/Sentry land, we run thread leak checkers 
> on our test code, which passed before this change and fail after it.  Here's 
> a sample report:
> {code}
> 1 thread leaked from SUITE scope at 
> org.apache.solr.handler.TestSecureReplicationHandler: 
>1) Thread[id=16, 
> name=org.apache.hadoop.fs.FileSystem$Statistics$StatisticsDataReferenceCleaner,
>  state=WAITING, group=TGRP-TestSecureReplicationHandler]
> at java.lang.Object.wait(Native Method)
> at java.lang.ref.ReferenceQueue.remove(ReferenceQueue.java:135)
> at java.lang.ref.ReferenceQueue.remove(ReferenceQueue.java:151)
> at 
> org.apache.hadoop.fs.FileSystem$Statistics$StatisticsDataReferenceCleaner.run(FileSystem.java:3040)
> at java.lang.Thread.run(Thread.java:745)
> {code}
> And here's an indication that the interrupt is being ignored:
> {code}
> 25209 T16 oahf.FileSystem$Statistics$StatisticsDataReferenceCleaner.run WARN 
> exception in the cleaner thread but it will continue to run 
> java.lang.InterruptedException
>   at java.lang.Object.wait(Native Method)
>   at java.lang.ref.ReferenceQueue.remove(ReferenceQueue.java:135)
>   at java.lang.ref.ReferenceQueue.remove(ReferenceQueue.java:151)
>   at 
> org.apache.hadoop.fs.FileSystem$Statistics$StatisticsDataReferenceCleaner.run(FileSystem.java:3040)
>   at java.lang.Thread.run(Thread.java:745)
> {code}
> This is inconsistent with how other long-running threads in hadoop, e.g. 
> PeerCache, respond to being interrupted.
> The argument for doing this in HADOOP-12107 is given as 
> (https://issues.apache.org/jira/browse/HADOOP-12107?focusedCommentId=14598397&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14598397):
> {quote}
> Cleaner#run
> Catch and log InterruptedException in the while loop, such that thread does 
> not die on a spurious wakeup. It's safe since it's a daemon thread.
> {quote}
> I'm unclear on what "spurious wakeup" means and it is not mentioned in 
> https://docs.oracle.com/javase/tutorial/essential/concurrency/interrupt.html:
> {quote}
> A thread sends an interrupt by invoking interrupt on the Thread object for 
> the thread to be interrupted. For the interrupt mechanism to work correctly, 
> the interrupted thread must support its own interruption.
> {quote}
> So, I believe this thread should respect interruption.
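
For context, a hedged sketch of what an interrupt-respecting cleaner loop could 
look like (the queue and reference type names are assumptions, not the attached 
patch):

{code}
@Override
public void run() {
  while (!Thread.currentThread().isInterrupted()) {
    try {
      // Block until a statistics reference is enqueued for cleanup.
      StatisticsDataReference ref =
          (StatisticsDataReference) STATS_DATA_REF_QUEUE.remove();
      ref.cleanUp();
    } catch (InterruptedException e) {
      LOG.info("Cleaner thread interrupted, exiting");
      Thread.currentThread().interrupt();  // restore the interrupt flag
      return;
    }
  }
}
{code}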



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12829) StatisticsDataReferenceCleaner swallows interrupt exceptions

2016-02-22 Thread Gregory Chanan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12829?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gregory Chanan updated HADOOP-12829:

Attachment: HADOOP-12829.patch

Updated patch for Colin's comments.

> StatisticsDataReferenceCleaner swallows interrupt exceptions
> 
>
> Key: HADOOP-12829
> URL: https://issues.apache.org/jira/browse/HADOOP-12829
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs
>Affects Versions: 2.8.0, 2.7.3, 2.6.4
>Reporter: Gregory Chanan
>Assignee: Gregory Chanan
>Priority: Minor
> Attachments: HADOOP-12829.patch, HADOOP-12829.patch
>
>
> The StatisticsDataReferenceCleaner, implemented in HADOOP-12107, swallows 
> interrupt exceptions.  Over in Solr/Sentry land, we run thread leak checkers 
> on our test code, which passed before this change and fail after it.  Here's 
> a sample report:
> {code}
> 1 thread leaked from SUITE scope at 
> org.apache.solr.handler.TestSecureReplicationHandler: 
>1) Thread[id=16, 
> name=org.apache.hadoop.fs.FileSystem$Statistics$StatisticsDataReferenceCleaner,
>  state=WAITING, group=TGRP-TestSecureReplicationHandler]
> at java.lang.Object.wait(Native Method)
> at java.lang.ref.ReferenceQueue.remove(ReferenceQueue.java:135)
> at java.lang.ref.ReferenceQueue.remove(ReferenceQueue.java:151)
> at 
> org.apache.hadoop.fs.FileSystem$Statistics$StatisticsDataReferenceCleaner.run(FileSystem.java:3040)
> at java.lang.Thread.run(Thread.java:745)
> {code}
> And here's an indication that the interrupt is being ignored:
> {code}
> 25209 T16 oahf.FileSystem$Statistics$StatisticsDataReferenceCleaner.run WARN 
> exception in the cleaner thread but it will continue to run 
> java.lang.InterruptedException
>   at java.lang.Object.wait(Native Method)
>   at java.lang.ref.ReferenceQueue.remove(ReferenceQueue.java:135)
>   at java.lang.ref.ReferenceQueue.remove(ReferenceQueue.java:151)
>   at 
> org.apache.hadoop.fs.FileSystem$Statistics$StatisticsDataReferenceCleaner.run(FileSystem.java:3040)
>   at java.lang.Thread.run(Thread.java:745)
> {code}
> This is inconsistent with how other long-running threads in hadoop, e.g. 
> PeerCache, respond to being interrupted.
> The argument for doing this in HADOOP-12107 is given as 
> (https://issues.apache.org/jira/browse/HADOOP-12107?focusedCommentId=14598397&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14598397):
> {quote}
> Cleaner#run
> Catch and log InterruptedException in the while loop, such that thread does 
> not die on a spurious wakeup. It's safe since it's a daemon thread.
> {quote}
> I'm unclear on what "spurious wakeup" means and it is not mentioned in 
> https://docs.oracle.com/javase/tutorial/essential/concurrency/interrupt.html:
> {quote}
> A thread sends an interrupt by invoking interrupt on the Thread object for 
> the thread to be interrupted. For the interrupt mechanism to work correctly, 
> the interrupted thread must support its own interruption.
> {quote}
> So, I believe this thread should respect interruption.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12829) StatisticsDataReferenceCleaner swallows interrupt exceptions

2016-02-22 Thread Gregory Chanan (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12829?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15157866#comment-15157866
 ] 

Gregory Chanan commented on HADOOP-12829:
-

Thanks for taking a look [~cmccabe], will update the patch.

Also agree with sjlee, lowering the priority to minor.

> StatisticsDataReferenceCleaner swallows interrupt exceptions
> 
>
> Key: HADOOP-12829
> URL: https://issues.apache.org/jira/browse/HADOOP-12829
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs
>Affects Versions: 2.8.0, 2.7.3, 2.6.4
>Reporter: Gregory Chanan
>Assignee: Gregory Chanan
> Attachments: HADOOP-12829.patch
>
>
> The StatisticsDataReferenceCleaner, implemented in HADOOP-12107, swallows 
> interrupt exceptions.  Over in Solr/Sentry land, we run thread leak checkers 
> on our test code, which passed before this change and fail after it.  Here's 
> a sample report:
> {code}
> 1 thread leaked from SUITE scope at 
> org.apache.solr.handler.TestSecureReplicationHandler: 
>1) Thread[id=16, 
> name=org.apache.hadoop.fs.FileSystem$Statistics$StatisticsDataReferenceCleaner,
>  state=WAITING, group=TGRP-TestSecureReplicationHandler]
> at java.lang.Object.wait(Native Method)
> at java.lang.ref.ReferenceQueue.remove(ReferenceQueue.java:135)
> at java.lang.ref.ReferenceQueue.remove(ReferenceQueue.java:151)
> at 
> org.apache.hadoop.fs.FileSystem$Statistics$StatisticsDataReferenceCleaner.run(FileSystem.java:3040)
> at java.lang.Thread.run(Thread.java:745)
> {code}
> And here's an indication that the interrupt is being ignored:
> {code}
> 25209 T16 oahf.FileSystem$Statistics$StatisticsDataReferenceCleaner.run WARN 
> exception in the cleaner thread but it will continue to run 
> java.lang.InterruptedException
>   at java.lang.Object.wait(Native Method)
>   at java.lang.ref.ReferenceQueue.remove(ReferenceQueue.java:135)
>   at java.lang.ref.ReferenceQueue.remove(ReferenceQueue.java:151)
>   at 
> org.apache.hadoop.fs.FileSystem$Statistics$StatisticsDataReferenceCleaner.run(FileSystem.java:3040)
>   at java.lang.Thread.run(Thread.java:745)
> {code}
> This is inconsistent with how other long-running threads in hadoop, e.g. 
> PeerCache, respond to being interrupted.
> The argument for doing this in HADOOP-12107 is given as 
> (https://issues.apache.org/jira/browse/HADOOP-12107?focusedCommentId=14598397&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14598397):
> {quote}
> Cleaner#run
> Catch and log InterruptedException in the while loop, such that thread does 
> not die on a spurious wakeup. It's safe since it's a daemon thread.
> {quote}
> I'm unclear on what "spurious wakeup" means and it is not mentioned in 
> https://docs.oracle.com/javase/tutorial/essential/concurrency/interrupt.html:
> {quote}
> A thread sends an interrupt by invoking interrupt on the Thread object for 
> the thread to be interrupted. For the interrupt mechanism to work correctly, 
> the interrupted thread must support its own interruption.
> {quote}
> So, I believe this thread should respect interruption.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12829) StatisticsDataReferenceCleaner swallows interrupt exceptions

2016-02-22 Thread Gregory Chanan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12829?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gregory Chanan updated HADOOP-12829:

Priority: Minor  (was: Major)

> StatisticsDataReferenceCleaner swallows interrupt exceptions
> 
>
> Key: HADOOP-12829
> URL: https://issues.apache.org/jira/browse/HADOOP-12829
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs
>Affects Versions: 2.8.0, 2.7.3, 2.6.4
>Reporter: Gregory Chanan
>Assignee: Gregory Chanan
>Priority: Minor
> Attachments: HADOOP-12829.patch
>
>
> The StatisticsDataReferenceCleaner, implemented in HADOOP-12107, swallows 
> interrupt exceptions.  Over in Solr/Sentry land, we run thread leak checkers 
> on our test code, which passed before this change and fail after it.  Here's 
> a sample report:
> {code}
> 1 thread leaked from SUITE scope at 
> org.apache.solr.handler.TestSecureReplicationHandler: 
>1) Thread[id=16, 
> name=org.apache.hadoop.fs.FileSystem$Statistics$StatisticsDataReferenceCleaner,
>  state=WAITING, group=TGRP-TestSecureReplicationHandler]
> at java.lang.Object.wait(Native Method)
> at java.lang.ref.ReferenceQueue.remove(ReferenceQueue.java:135)
> at java.lang.ref.ReferenceQueue.remove(ReferenceQueue.java:151)
> at 
> org.apache.hadoop.fs.FileSystem$Statistics$StatisticsDataReferenceCleaner.run(FileSystem.java:3040)
> at java.lang.Thread.run(Thread.java:745)
> {code}
> And here's an indication that the interrupt is being ignored:
> {code}
> 25209 T16 oahf.FileSystem$Statistics$StatisticsDataReferenceCleaner.run WARN 
> exception in the cleaner thread but it will continue to run 
> java.lang.InterruptedException
>   at java.lang.Object.wait(Native Method)
>   at java.lang.ref.ReferenceQueue.remove(ReferenceQueue.java:135)
>   at java.lang.ref.ReferenceQueue.remove(ReferenceQueue.java:151)
>   at 
> org.apache.hadoop.fs.FileSystem$Statistics$StatisticsDataReferenceCleaner.run(FileSystem.java:3040)
>   at java.lang.Thread.run(Thread.java:745)
> {code}
> This is inconsistent with how other long-running threads in hadoop, e.g. 
> PeerCache, respond to being interrupted.
> The argument for doing this in HADOOP-12107 is given as 
> (https://issues.apache.org/jira/browse/HADOOP-12107?focusedCommentId=14598397&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14598397):
> {quote}
> Cleaner#run
> Catch and log InterruptedException in the while loop, such that thread does 
> not die on a spurious wakeup. It's safe since it's a daemon thread.
> {quote}
> I'm unclear on what "spurious wakeup" means and it is not mentioned in 
> https://docs.oracle.com/javase/tutorial/essential/concurrency/interrupt.html:
> {quote}
> A thread sends an interrupt by invoking interrupt on the Thread object for 
> the thread to be interrupted. For the interrupt mechanism to work correctly, 
> the interrupted thread must support its own interruption.
> {quote}
> So, I believe this thread should respect interruption.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12829) StatisticsDataReferenceCleaner swallows interrupt exceptions

2016-02-19 Thread Gregory Chanan (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12829?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15155305#comment-15155305
 ] 

Gregory Chanan commented on HADOOP-12829:
-

Attached a patch.  Doesn't include any tests -- not sure exactly what to test.  
I internally tested by changing STATS_DATA_CLEANER to package-private and 
writing the following test:

{code}
  @Test
  public void testShutdown() throws Exception {
FileSystem.Statistics.STATS_DATA_CLEANER.interrupt();
FileSystem.Statistics.STATS_DATA_CLEANER.join();
  }
{code}

which passes with the change and hangs without it.  I'm unclear on whether 
hadoop even wants something like this, since I'm not up to speed on how hadoop 
handles JVM reuse for unit tests.

> StatisticsDataReferenceCleaner swallows interrupt exceptions
> 
>
> Key: HADOOP-12829
> URL: https://issues.apache.org/jira/browse/HADOOP-12829
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs
>Affects Versions: 2.8.0, 2.7.3, 2.6.4
>Reporter: Gregory Chanan
>Assignee: Gregory Chanan
> Attachments: HADOOP-12829.patch
>
>
> The StatisticsDataReferenceCleaner, implemented in HADOOP-12107, swallows 
> interrupt exceptions.  Over in Solr/Sentry land, we run thread leak checkers 
> on our test code, which passed before this change and fail after it.  Here's 
> a sample report:
> {code}
> 1 thread leaked from SUITE scope at 
> org.apache.solr.handler.TestSecureReplicationHandler: 
>1) Thread[id=16, 
> name=org.apache.hadoop.fs.FileSystem$Statistics$StatisticsDataReferenceCleaner,
>  state=WAITING, group=TGRP-TestSecureReplicationHandler]
> at java.lang.Object.wait(Native Method)
> at java.lang.ref.ReferenceQueue.remove(ReferenceQueue.java:135)
> at java.lang.ref.ReferenceQueue.remove(ReferenceQueue.java:151)
> at 
> org.apache.hadoop.fs.FileSystem$Statistics$StatisticsDataReferenceCleaner.run(FileSystem.java:3040)
> at java.lang.Thread.run(Thread.java:745)
> {code}
> And here's an indication that the interrupt is being ignored:
> {code}
> 25209 T16 oahf.FileSystem$Statistics$StatisticsDataReferenceCleaner.run WARN 
> exception in the cleaner thread but it will continue to run 
> java.lang.InterruptedException
>   at java.lang.Object.wait(Native Method)
>   at java.lang.ref.ReferenceQueue.remove(ReferenceQueue.java:135)
>   at java.lang.ref.ReferenceQueue.remove(ReferenceQueue.java:151)
>   at 
> org.apache.hadoop.fs.FileSystem$Statistics$StatisticsDataReferenceCleaner.run(FileSystem.java:3040)
>   at java.lang.Thread.run(Thread.java:745)
> {code}
> This is inconsistent with how other long-running threads in hadoop, e.g. 
> PeerCache, respond to being interrupted.
> The argument for doing this in HADOOP-12107 is given as 
> (https://issues.apache.org/jira/browse/HADOOP-12107?focusedCommentId=14598397&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14598397):
> {quote}
> Cleaner#run
> Catch and log InterruptedException in the while loop, such that thread does 
> not die on a spurious wakeup. It's safe since it's a daemon thread.
> {quote}
> I'm unclear on what "spurious wakeup" means and it is not mentioned in 
> https://docs.oracle.com/javase/tutorial/essential/concurrency/interrupt.html:
> {quote}
> A thread sends an interrupt by invoking interrupt on the Thread object for 
> the thread to be interrupted. For the interrupt mechanism to work correctly, 
> the interrupted thread must support its own interruption.
> {quote}
> So, I believe this thread should respect interruption.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12829) StatisticsDataReferenceCleaner swallows interrupt exceptions

2016-02-19 Thread Gregory Chanan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12829?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gregory Chanan updated HADOOP-12829:

Attachment: HADOOP-12829.patch

> StatisticsDataReferenceCleaner swallows interrupt exceptions
> 
>
> Key: HADOOP-12829
> URL: https://issues.apache.org/jira/browse/HADOOP-12829
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs
>Affects Versions: 2.8.0, 2.7.3, 2.6.4
>Reporter: Gregory Chanan
>Assignee: Gregory Chanan
> Attachments: HADOOP-12829.patch
>
>
> The StatisticsDataReferenceCleaner, implemented in HADOOP-12107, swallows 
> interrupt exceptions.  Over in Solr/Sentry land, we run thread leak checkers 
> on our test code, which passed before this change and fail after it.  Here's 
> a sample report:
> {code}
> 1 thread leaked from SUITE scope at 
> org.apache.solr.handler.TestSecureReplicationHandler: 
>1) Thread[id=16, 
> name=org.apache.hadoop.fs.FileSystem$Statistics$StatisticsDataReferenceCleaner,
>  state=WAITING, group=TGRP-TestSecureReplicationHandler]
> at java.lang.Object.wait(Native Method)
> at java.lang.ref.ReferenceQueue.remove(ReferenceQueue.java:135)
> at java.lang.ref.ReferenceQueue.remove(ReferenceQueue.java:151)
> at 
> org.apache.hadoop.fs.FileSystem$Statistics$StatisticsDataReferenceCleaner.run(FileSystem.java:3040)
> at java.lang.Thread.run(Thread.java:745)
> {code}
> And here's an indication that the interrupt is being ignored:
> {code}
> 25209 T16 oahf.FileSystem$Statistics$StatisticsDataReferenceCleaner.run WARN 
> exception in the cleaner thread but it will continue to run 
> java.lang.InterruptedException
>   at java.lang.Object.wait(Native Method)
>   at java.lang.ref.ReferenceQueue.remove(ReferenceQueue.java:135)
>   at java.lang.ref.ReferenceQueue.remove(ReferenceQueue.java:151)
>   at 
> org.apache.hadoop.fs.FileSystem$Statistics$StatisticsDataReferenceCleaner.run(FileSystem.java:3040)
>   at java.lang.Thread.run(Thread.java:745)
> {code}
> This is inconsistent with how other long-running threads in hadoop, e.g. 
> PeerCache, respond to being interrupted.
> The argument for doing this in HADOOP-12107 is given as 
> (https://issues.apache.org/jira/browse/HADOOP-12107?focusedCommentId=14598397&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14598397):
> {quote}
> Cleaner#run
> Catch and log InterruptedException in the while loop, such that thread does 
> not die on a spurious wakeup. It's safe since it's a daemon thread.
> {quote}
> I'm unclear on what "spurious wakeup" means and it is not mentioned in 
> https://docs.oracle.com/javase/tutorial/essential/concurrency/interrupt.html:
> {quote}
> A thread sends an interrupt by invoking interrupt on the Thread object for 
> the thread to be interrupted. For the interrupt mechanism to work correctly, 
> the interrupted thread must support its own interruption.
> {quote}
> So, I believe this thread should respect interruption.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12829) StatisticsDataReferenceCleaner swallows interrupt exceptions

2016-02-19 Thread Gregory Chanan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12829?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gregory Chanan updated HADOOP-12829:

Status: Patch Available  (was: Open)

> StatisticsDataReferenceCleaner swallows interrupt exceptions
> 
>
> Key: HADOOP-12829
> URL: https://issues.apache.org/jira/browse/HADOOP-12829
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs
>Affects Versions: 2.6.4, 2.8.0, 2.7.3
>Reporter: Gregory Chanan
>Assignee: Gregory Chanan
> Attachments: HADOOP-12829.patch
>
>
> The StatisticsDataReferenceCleaner, implemented in HADOOP-12107, swallows 
> interrupt exceptions.  Over in Solr/Sentry land, we run thread leak checkers 
> on our test code, which passed before this change and fail after it.  Here's 
> a sample report:
> {code}
> 1 thread leaked from SUITE scope at 
> org.apache.solr.handler.TestSecureReplicationHandler: 
>1) Thread[id=16, 
> name=org.apache.hadoop.fs.FileSystem$Statistics$StatisticsDataReferenceCleaner,
>  state=WAITING, group=TGRP-TestSecureReplicationHandler]
> at java.lang.Object.wait(Native Method)
> at java.lang.ref.ReferenceQueue.remove(ReferenceQueue.java:135)
> at java.lang.ref.ReferenceQueue.remove(ReferenceQueue.java:151)
> at 
> org.apache.hadoop.fs.FileSystem$Statistics$StatisticsDataReferenceCleaner.run(FileSystem.java:3040)
> at java.lang.Thread.run(Thread.java:745)
> {code}
> And here's an indication that the interrupt is being ignored:
> {code}
> 25209 T16 oahf.FileSystem$Statistics$StatisticsDataReferenceCleaner.run WARN 
> exception in the cleaner thread but it will continue to run 
> java.lang.InterruptedException
>   at java.lang.Object.wait(Native Method)
>   at java.lang.ref.ReferenceQueue.remove(ReferenceQueue.java:135)
>   at java.lang.ref.ReferenceQueue.remove(ReferenceQueue.java:151)
>   at 
> org.apache.hadoop.fs.FileSystem$Statistics$StatisticsDataReferenceCleaner.run(FileSystem.java:3040)
>   at java.lang.Thread.run(Thread.java:745)
> {code}
> This is inconsistent with how other long-running threads in hadoop, e.g. 
> PeerCache, respond to being interrupted.
> The argument for doing this in HADOOP-12107 is given as 
> (https://issues.apache.org/jira/browse/HADOOP-12107?focusedCommentId=14598397&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14598397):
> {quote}
> Cleaner#run
> Catch and log InterruptedException in the while loop, such that thread does 
> not die on a spurious wakeup. It's safe since it's a daemon thread.
> {quote}
> I'm unclear on what "spurious wakeup" means and it is not mentioned in 
> https://docs.oracle.com/javase/tutorial/essential/concurrency/interrupt.html:
> {quote}
> A thread sends an interrupt by invoking interrupt on the Thread object for 
> the thread to be interrupted. For the interrupt mechanism to work correctly, 
> the interrupted thread must support its own interruption.
> {quote}
> So, I believe this thread should respect interruption.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HADOOP-12829) StatisticsDataReferenceCleaner swallos interrupt exceptions

2016-02-19 Thread Gregory Chanan (JIRA)
Gregory Chanan created HADOOP-12829:
---

 Summary: StatisticsDataReferenceCleaner swallos interrupt 
exceptions
 Key: HADOOP-12829
 URL: https://issues.apache.org/jira/browse/HADOOP-12829
 Project: Hadoop Common
  Issue Type: Improvement
  Components: fs
Affects Versions: 2.6.4, 2.8.0, 2.7.3
Reporter: Gregory Chanan
Assignee: Gregory Chanan


The StatisticsDataReferenceCleaner, implemented in HADOOP-12107, swallows 
interrupt exceptions.  Over in Solr/Sentry land, we run thread leak checkers on 
our test code, which passed before this change and fail after it.  Here's a 
sample report:

{code}
1 thread leaked from SUITE scope at 
org.apache.solr.handler.TestSecureReplicationHandler: 
   1) Thread[id=16, 
name=org.apache.hadoop.fs.FileSystem$Statistics$StatisticsDataReferenceCleaner, 
state=WAITING, group=TGRP-TestSecureReplicationHandler]
at java.lang.Object.wait(Native Method)
at java.lang.ref.ReferenceQueue.remove(ReferenceQueue.java:135)
at java.lang.ref.ReferenceQueue.remove(ReferenceQueue.java:151)
at 
org.apache.hadoop.fs.FileSystem$Statistics$StatisticsDataReferenceCleaner.run(FileSystem.java:3040)
at java.lang.Thread.run(Thread.java:745)
{code}

And here's an indication that the interrupt is being ignored:
{code}
25209 T16 oahf.FileSystem$Statistics$StatisticsDataReferenceCleaner.run WARN 
exception in the cleaner thread but it will continue to run 
java.lang.InterruptedException
at java.lang.Object.wait(Native Method)
at java.lang.ref.ReferenceQueue.remove(ReferenceQueue.java:135)
at java.lang.ref.ReferenceQueue.remove(ReferenceQueue.java:151)
at 
org.apache.hadoop.fs.FileSystem$Statistics$StatisticsDataReferenceCleaner.run(FileSystem.java:3040)
at java.lang.Thread.run(Thread.java:745)
{code}

This is inconsistent with how other long-running threads in hadoop, e.g. 
PeerCache, respond to being interrupted.

The argument for doing this in HADOOP-12107 is given as 
(https://issues.apache.org/jira/browse/HADOOP-12107?focusedCommentId=14598397&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14598397):
{quote}
Cleaner#run
Catch and log InterruptedException in the while loop, such that thread does not 
die on a spurious wakeup. It's safe since it's a daemon thread.
{quote}

I'm unclear on what "spurious wakeup" means and it is not mentioned in 
https://docs.oracle.com/javase/tutorial/essential/concurrency/interrupt.html:
{quote}
A thread sends an interrupt by invoking interrupt on the Thread object for the 
thread to be interrupted. For the interrupt mechanism to work correctly, the 
interrupted thread must support its own interruption.
{quote}

So, I believe this thread should respect interruption.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12829) StatisticsDataReferenceCleaner swallows interrupt exceptions

2016-02-19 Thread Gregory Chanan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12829?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gregory Chanan updated HADOOP-12829:

Summary: StatisticsDataReferenceCleaner swallows interrupt exceptions  
(was: StatisticsDataReferenceCleaner swallos interrupt exceptions)

> StatisticsDataReferenceCleaner swallows interrupt exceptions
> 
>
> Key: HADOOP-12829
> URL: https://issues.apache.org/jira/browse/HADOOP-12829
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs
>Affects Versions: 2.8.0, 2.7.3, 2.6.4
>Reporter: Gregory Chanan
>Assignee: Gregory Chanan
>
> The StatisticsDataReferenceCleaner, implemented in HADOOP-12107, swallows 
> interrupt exceptions.  Over in Solr/Sentry land, we run thread leak checkers 
> on our test code, which passed before this change and fail after it.  Here's 
> a sample report:
> {code}
> 1 thread leaked from SUITE scope at 
> org.apache.solr.handler.TestSecureReplicationHandler: 
>1) Thread[id=16, 
> name=org.apache.hadoop.fs.FileSystem$Statistics$StatisticsDataReferenceCleaner,
>  state=WAITING, group=TGRP-TestSecureReplicationHandler]
> at java.lang.Object.wait(Native Method)
> at java.lang.ref.ReferenceQueue.remove(ReferenceQueue.java:135)
> at java.lang.ref.ReferenceQueue.remove(ReferenceQueue.java:151)
> at 
> org.apache.hadoop.fs.FileSystem$Statistics$StatisticsDataReferenceCleaner.run(FileSystem.java:3040)
> at java.lang.Thread.run(Thread.java:745)
> {code}
> And here's an indication that the interrupt is being ignored:
> {code}
> 25209 T16 oahf.FileSystem$Statistics$StatisticsDataReferenceCleaner.run WARN 
> exception in the cleaner thread but it will continue to run 
> java.lang.InterruptedException
>   at java.lang.Object.wait(Native Method)
>   at java.lang.ref.ReferenceQueue.remove(ReferenceQueue.java:135)
>   at java.lang.ref.ReferenceQueue.remove(ReferenceQueue.java:151)
>   at 
> org.apache.hadoop.fs.FileSystem$Statistics$StatisticsDataReferenceCleaner.run(FileSystem.java:3040)
>   at java.lang.Thread.run(Thread.java:745)
> {code}
> This is inconsistent with how other long-running threads in hadoop, e.g. 
> PeerCache, respond to being interrupted.
> The argument for doing this in HADOOP-12107 is given as 
> (https://issues.apache.org/jira/browse/HADOOP-12107?focusedCommentId=14598397&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14598397):
> {quote}
> Cleaner#run
> Catch and log InterruptedException in the while loop, such that thread does 
> not die on a spurious wakeup. It's safe since it's a daemon thread.
> {quote}
> I'm unclear on what "spurious wakeup" means and it is not mentioned in 
> https://docs.oracle.com/javase/tutorial/essential/concurrency/interrupt.html:
> {quote}
> A thread sends an interrupt by invoking interrupt on the Thread object for 
> the thread to be interrupted. For the interrupt mechanism to work correctly, 
> the interrupted thread must support its own interruption.
> {quote}
> So, I believe this thread should respect interruption.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11973) Ensure ZkDelegationTokenSecretManager namespace znodes get created with ACLs

2015-05-19 Thread Gregory Chanan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11973?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gregory Chanan updated HADOOP-11973:

Attachment: HADOOP-11973v3.patch

 Ensure ZkDelegationTokenSecretManager namespace znodes get created with ACLs
 

 Key: HADOOP-11973
 URL: https://issues.apache.org/jira/browse/HADOOP-11973
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Affects Versions: 2.6.0
Reporter: Gregory Chanan
Assignee: Gregory Chanan
 Fix For: 2.7.1

 Attachments: HADOOP-11973.patch, HADOOP-11973v2.patch, 
 HADOOP-11973v3.patch


 I recently added an ACL Provider to the curator framework instance I pass to 
 the ZkDelegationTokenSecretManager, and noticed some strangeness around ACLs.
 I set: zk-dt-secret-manager.znodeWorkingPath to:
 solr/zkdtsm
 and noticed that
 /solr/zkdtsm/
 /solr/zkdtsm/ZKDTSMRoot
 do not have ACLs
 but all the znodes under /solr/zkdtsm/ZKDTSMRoot have ACLs.  From adding some 
 logging, it looks like the ACLProvider is never called for /solr/zkdtsm and 
 /solr/zkdtsm/ZKDTSMRoot.  I don't know if that's a Curator or 
 ZkDelegationTokenSecretManager issue.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11973) Some ZkDelegationTokenSecretManager znodes do not have ACLs

2015-05-18 Thread Gregory Chanan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11973?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gregory Chanan updated HADOOP-11973:

Status: Patch Available  (was: Open)

 Some ZkDelegationTokenSecretManager znodes do not have ACLs
 ---

 Key: HADOOP-11973
 URL: https://issues.apache.org/jira/browse/HADOOP-11973
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Affects Versions: 2.6.0
Reporter: Gregory Chanan
Assignee: Gregory Chanan
 Attachments: HADOOP-11973.patch


 I recently added an ACL Provider to the curator framework instance I pass to 
 the ZkDelegationTokenSecretManager, and noticed some strangeness around ACLs.
 I set: zk-dt-secret-manager.znodeWorkingPath to:
 solr/zkdtsm
 and noticed that
 /solr/zkdtsm/
 /solr/zkdtsm/ZKDTSMRoot
 do not have ACLs
 but all the znodes under /solr/zkdtsm/ZKDTSMRoot have ACLs.  From adding some 
 logging, it looks like the ACLProvider is never called for /solr/zkdtsm and 
 /solr/zkdtsm/ZKDTSMRoot.  I don't know if that's a Curator or 
 ZkDelegationTokenSecretManager issue.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11973) Some ZkDelegationTokenSecretManager znodes do not have ACLs

2015-05-18 Thread Gregory Chanan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11973?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gregory Chanan updated HADOOP-11973:

Attachment: HADOOP-11973.patch

Here's a patch that addresses the issue and has a test.

Here's a description I wrote in CURATOR-221:
{quote}
Yes, although in my case it's a bit complicated. If you look at HADOOP-11973, 
to keep the external vs internal client impl similar, I want to initialize the 
final CuratorFramework object in the constructor, which means I want to use the 
namespace-aware version. So, I could create the nodes before I call 
usingNamespace, but then I have to deal with exception handling, which I don't 
want to do in the constructor. So essentially I have to do:

call usingNamespace(ns) in the constructor
in startThreads, call usingNamespace(null) and then create the parents 
manually.
{quote}
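
A hedged sketch of that sequence with the Curator fluent API (aclProvider and 
the exact paths are illustrative, not the attached patch):

{code}
// Constructor: keep a namespace-aware client for normal operation.
CuratorFramework nsClient = client.usingNamespace("solr/zkdtsm");

// startThreads(): drop the namespace and create the parents explicitly,
// so the ACLProvider is consulted for them as well.
CuratorFramework rawClient = nsClient.usingNamespace(null);
rawClient.create()
    .creatingParentsIfNeeded()
    .withACL(aclProvider.getAclForPath("/solr/zkdtsm/ZKDTSMRoot"))
    .forPath("/solr/zkdtsm/ZKDTSMRoot");
{code}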

 Some ZkDelegationTokenSecretManager znodes do not have ACLs
 ---

 Key: HADOOP-11973
 URL: https://issues.apache.org/jira/browse/HADOOP-11973
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Affects Versions: 2.6.0
Reporter: Gregory Chanan
Assignee: Gregory Chanan
 Attachments: HADOOP-11973.patch


 I recently added an ACL Provider to the curator framework instance I pass to 
 the ZkDelegationTokenSecretManager, and noticed some strangeness around ACLs.
 I set: zk-dt-secret-manager.znodeWorkingPath to:
 solr/zkdtsm
 and noticed that
 /solr/zkdtsm/
 /solr/zkdtsm/ZKDTSMRoot
 do not have ACLs
 but all the znodes under /solr/zkdtsm/ZKDTSMRoot have ACLs.  From adding some 
 logging, it looks like the ACLProvider is never called for /solr/zkdtsm and 
 /solr/zkdtsm/ZKDTSMRoot.  I don't know if that's a Curator or 
 ZkDelegationTokenSecretManager issue.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (HADOOP-11973) Some ZkDelegationTokenSecretManager znodes do not have ACLs

2015-05-18 Thread Gregory Chanan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11973?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gregory Chanan reassigned HADOOP-11973:
---

Assignee: Gregory Chanan

 Some ZkDelegationTokenSecretManager znodes do not have ACLs
 ---

 Key: HADOOP-11973
 URL: https://issues.apache.org/jira/browse/HADOOP-11973
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Affects Versions: 2.6.0
Reporter: Gregory Chanan
Assignee: Gregory Chanan

 I recently added an ACL Provider to the curator framework instance I pass to 
 the ZkDelegationTokenSecretManager, and noticed some strangeness around ACLs.
 I set: zk-dt-secret-manager.znodeWorkingPath to:
 solr/zkdtsm
 and noticed that
 /solr/zkdtsm/
 /solr/zkdtsm/ZKDTSMRoot
 do not have ACLs
 but all the znodes under /solr/zkdtsm/ZKDTSMRoot have ACLs.  From adding some 
 logging, it looks like the ACLProvider is never called for /solr/zkdtsm and 
 /solr/zkdtsm/ZKDTSMRoot.  I don't know if that's a Curator or 
 ZkDelegationTokenSecretManager issue.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11973) Some ZkDelegationTokenSecretManager znodes do not have ACLs

2015-05-18 Thread Gregory Chanan (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11973?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14549272#comment-14549272
 ] 

Gregory Chanan commented on HADOOP-11973:
-

Looks like the underlying issue is CURATOR-221; I'm investigating whether there 
is a workaround we can use.

 Some ZkDelegationTokenSecretManager znodes do not have ACLs
 ---

 Key: HADOOP-11973
 URL: https://issues.apache.org/jira/browse/HADOOP-11973
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Affects Versions: 2.6.0
Reporter: Gregory Chanan

 I recently added an ACL Provider to the curator framework instance I pass to 
 the ZkDelegationTokenSecretManager, and noticed some strangeness around ACLs.
 I set: zk-dt-secret-manager.znodeWorkingPath to:
 solr/zkdtsm
 and noticed that
 /solr/zkdtsm/
 /solr/zkdtsm/ZKDTSMRoot
 do not have ACLs
 but all the znodes under /solr/zkdtsm/ZKDTSMRoot have ACLs.  From adding some 
 logging, it looks like the ACLProvider is never called for /solr/zkdtsm and 
 /solr/zkdtsm/ZKDTSMRoot.  I don't know if that's a Curator or 
 ZkDelegationTokenSecretManager issue.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11973) Some ZkDelegationTokenSecretManager znodes do not have ACLs

2015-05-18 Thread Gregory Chanan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11973?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gregory Chanan updated HADOOP-11973:

Attachment: HADOOP-11973v2.patch

Fix whitespace/style and unset the thread local at the end of the test so other 
tests are not affected.

 Some ZkDelegationTokenSecretManager znodes do not have ACLs
 ---

 Key: HADOOP-11973
 URL: https://issues.apache.org/jira/browse/HADOOP-11973
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Affects Versions: 2.6.0
Reporter: Gregory Chanan
Assignee: Gregory Chanan
 Attachments: HADOOP-11973.patch, HADOOP-11973v2.patch


 I recently added an ACL Provider to the curator framework instance I pass to 
 the ZkDelegationTokenSecretManager, and noticed some strangeness around ACLs.
 I set: zk-dt-secret-manager.znodeWorkingPath to:
 solr/zkdtsm
 and noticed that
 /solr/zkdtsm/
 /solr/zkdtsm/ZKDTSMRoot
 do not have ACLs
 but all the znodes under /solr/zkdtsm/ZKDTSMRoot have ACLs.  From adding some 
 logging, it looks like the ACLProvider is never called for /solr/zkdtsm and 
 /solr/zkdtsm/ZKDTSMRoot.  I don't know if that's a Curator or 
 ZkDelegationTokenSecretManager issue.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11973) Some ZkDelegationTokenSecretManager znodes do not have ACLs

2015-05-14 Thread Gregory Chanan (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11973?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14544752#comment-14544752
 ] 

Gregory Chanan commented on HADOOP-11973:
-

any ideas [~asuresh]?

 Some ZkDelegationTokenSecretManager znodes do not have ACLs
 ---

 Key: HADOOP-11973
 URL: https://issues.apache.org/jira/browse/HADOOP-11973
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Affects Versions: 2.6.0
Reporter: Gregory Chanan

 I recently added an ACL Provider to the curator framework instance I pass to 
 the ZkDelegationTokenSecretManager, and noticed some strangeness around ACLs.
 I set: zk-dt-secret-manager.znodeWorkingPath to:
 solr/zkdtsm
 and noticed that
 /solr/zkdtsm/
 /solr/zkdtsm/ZKDTSMRoot
 do not have ACLs
 but all the znodes under /solr/zkdtsm/ZKDTSMRoot have ACLs.  From adding some 
 logging, it looks like the ACLProvider is never called for /solr/zkdtsm and 
 /solr/zkdtsm/ZKDTSMRoot.  I don't know if that's a Curator or 
 ZkDelegationTokenSecretManager issue.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HADOOP-11973) Some ZkDelegationTokenSecretManager znodes do not have ACLs

2015-05-14 Thread Gregory Chanan (JIRA)
Gregory Chanan created HADOOP-11973:
---

 Summary: Some ZkDelegationTokenSecretManager znodes do not have 
ACLs
 Key: HADOOP-11973
 URL: https://issues.apache.org/jira/browse/HADOOP-11973
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Affects Versions: 2.6.0
Reporter: Gregory Chanan


I recently added an ACL Provider to the curator framework instance I pass to 
the ZkDelegationTokenSecretManager, and noticed some strangeness around ACLs.

I set: zk-dt-secret-manager.znodeWorkingPath to:
solr/zkdtsm

and noticed that
/solr/zkdtsm/
/solr/zkdtsm/ZKDTSMRoot
do not have ACLs

but all the znodes under /solr/zkdtsm/ZKDTSMRoot have ACLs.  From adding some 
logging, it looks like the ACLProvider is never called for /solr/zkdtsm and 
/solr/zkdtsm/ZKDTSMRoot.  I don't know if that's a Curator or 
ZkDelegationTokenSecretManager issue.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HADOOP-11443) hadoop.auth cookie has invalid Expires if used with non-US default Locale

2014-12-22 Thread Gregory Chanan (JIRA)
Gregory Chanan created HADOOP-11443:
---

 Summary: hadoop.auth cookie has invalid Expires if used with 
non-US default Locale
 Key: HADOOP-11443
 URL: https://issues.apache.org/jira/browse/HADOOP-11443
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Affects Versions: 2.5.0
Reporter: Gregory Chanan
Assignee: Gregory Chanan


The netscape cookie spec (http://curl.haxx.se/rfc/cookie_spec.html) does not 
specify the language of the EXPIRES attribute:

{code}
 The date string is formatted as:

Wdy, DD-Mon-YYYY HH:MM:SS GMT

This is based on RFC 822, RFC 850, RFC 1036, and RFC 1123, with the variations 
that the only legal time zone is GMT and the separators between the elements of 
the date must be dashes. 
{code}

But RFC 822, lists the months as:
{code}
 month   =  Jan  /  Feb /  Mar  /  Apr
 /  May  /  Jun /  Jul  /  Aug
 /  Sep  /  Oct /  Nov  /  Dec
{code}

and some clients (e.g. httpclient) do not recognize Expires in other languages, 
so it's best to just use US English (which is the only Locale guaranteed to be 
supported by the jvm, see 
http://www.oracle.com/technetwork/articles/javase/locale-140624.html).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11443) hadoop.auth cookie has invalid Expires if used with non-US default Locale

2014-12-22 Thread Gregory Chanan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11443?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gregory Chanan updated HADOOP-11443:

Attachment: HADOOP-11443.patch

Here's a patch that generates the date using the US locale and a test that sets 
a different locale and verifies the cookie still works with httpclient.
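
A minimal sketch of the approach (the exact pattern in the patch may differ; sb 
and expiryTime stand in for the cookie builder and token expiry):

{code}
import java.text.SimpleDateFormat;
import java.util.Date;
import java.util.Locale;
import java.util.TimeZone;

// Format the Expires date in US English so the day/month names match
// RFC 822 regardless of the JVM's default Locale.
SimpleDateFormat df =
    new SimpleDateFormat("EEE, dd-MMM-yyyy HH:mm:ss zzz", Locale.US);
df.setTimeZone(TimeZone.getTimeZone("GMT"));
sb.append("; Expires=").append(df.format(new Date(expiryTime)));
{code}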

 hadoop.auth cookie has invalid Expires if used with non-US default Locale
 -

 Key: HADOOP-11443
 URL: https://issues.apache.org/jira/browse/HADOOP-11443
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Affects Versions: 2.5.0
Reporter: Gregory Chanan
Assignee: Gregory Chanan
 Attachments: HADOOP-11443.patch


 The netscape cookie spec (http://curl.haxx.se/rfc/cookie_spec.html) does not 
 specify the language of the EXPIRES attribute:
 {code}
  The date string is formatted as:
 Wdy, DD-Mon-YYYY HH:MM:SS GMT
 This is based on RFC 822, RFC 850, RFC 1036, and RFC 1123, with the 
 variations that the only legal time zone is GMT and the separators between 
 the elements of the date must be dashes. 
 {code}
 But RFC 822, lists the months as:
 {code}
  month   =  Jan  /  Feb /  Mar  /  Apr
  /  May  /  Jun /  Jul  /  Aug
  /  Sep  /  Oct /  Nov  /  Dec
 {code}
 and some clients (e.g. httpclient) do not recognize Expires in other 
 languages, so it's best to just use US English (which is the only Locale 
 guaranteed to be supported by the jvm, see 
 http://www.oracle.com/technetwork/articles/javase/locale-140624.html).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11157) ZKDelegationTokenSecretManager never shuts down listenerThreadPool

2014-11-06 Thread Gregory Chanan (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11157?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14201174#comment-14201174
 ] 

Gregory Chanan commented on HADOOP-11157:
-

patch lgtm.  One note, not really related to this JIRA: it doesn't look like 
the tokenCache and keyCache actually need to have cacheData=true.  They appear 
to be used only for notification.
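
For illustration, a hedged sketch of a notification-only cache (the 
ZK_DTSM_TOKENS_ROOT constant and the listener body are assumptions):

{code}
// cacheData=false: listeners still receive CHILD_ADDED/UPDATED/REMOVED
// events, but znode payloads are not retained in memory.
PathChildrenCache tokenCache =
    new PathChildrenCache(zkClient, ZK_DTSM_TOKENS_ROOT, false);
tokenCache.getListenable().addListener(new PathChildrenCacheListener() {
  @Override
  public void childEvent(CuratorFramework client, PathChildrenCacheEvent event)
      throws Exception {
    // React to the notification; re-read the znode if its data is needed.
  }
});
{code}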

 ZKDelegationTokenSecretManager never shuts down listenerThreadPool
 --

 Key: HADOOP-11157
 URL: https://issues.apache.org/jira/browse/HADOOP-11157
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Affects Versions: 2.6.0
Reporter: Gregory Chanan
Assignee: Arun Suresh
 Attachments: HADOOP-11157.2.patch, HADOOP-11157.3.patch, 
 HADOOP-11157.4.patch, HADOOP-11157.5.patch, HADOOP-11157.6.patch, 
 HADOOP-11157.patch, HADOOP-11157.patch


 I'm trying to integrate Solr with the DelegationTokenAuthenticationFilter and 
 running into this issue.  The solr unit tests look for leaked threads and 
 when I started using the ZKDelegationTokenSecretManager it started reporting 
 leaks.  Shutting down the listenerThreadPool after the objects that use it 
 resolves the leaked-thread errors.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11157) ZKDelegationTokenSecretManager never shuts down listenerThreadPool

2014-11-04 Thread Gregory Chanan (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11157?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14195946#comment-14195946
 ] 

Gregory Chanan commented on HADOOP-11157:
-

Hmm, I put all the test methods in a loop that ran 100 times and 
testMultiNodeOperations failed (others might fail as well, but that one failed 
first).

 ZKDelegationTokenSecretManager never shuts down listenerThreadPool
 --

 Key: HADOOP-11157
 URL: https://issues.apache.org/jira/browse/HADOOP-11157
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Affects Versions: 2.6.0
Reporter: Gregory Chanan
Assignee: Arun Suresh
 Attachments: HADOOP-11157.2.patch, HADOOP-11157.3.patch, 
 HADOOP-11157.4.patch, HADOOP-11157.patch, HADOOP-11157.patch


 I'm trying to integrate Solr with the DelegationTokenAuthenticationFilter and 
 running into this issue.  The solr unit tests look for leaked threads and 
 when I started using the ZKDelegationTokenSecretManager it started reporting 
 leaks.  Shutting down the listenerThreadPool after the objects that use it 
 resolves the leaked-thread errors.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11157) ZKDelegationTokenSecretManager never shuts down listenerThreadPool

2014-11-04 Thread Gregory Chanan (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11157?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14196508#comment-14196508
 ] 

Gregory Chanan commented on HADOOP-11157:
-

Looks like it's only the one test (I increased the timeout so all the tests 
would run):

{code}
Tests run: 5, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 2,628.821 sec 
<<< FAILURE! - in 
org.apache.hadoop.security.token.delegation.TestZKDelegationTokenSecretManager
testMultiNodeOperations(org.apache.hadoop.security.token.delegation.TestZKDelegationTokenSecretManager)
  Time elapsed: 7.557 sec  <<< FAILURE!
java.lang.AssertionError: Expected InvalidToken
at org.junit.Assert.fail(Assert.java:88)
at 
org.apache.hadoop.security.token.delegation.TestZKDelegationTokenSecretManager.testMultiNodeOperations(TestZKDelegationTokenSecretManager.java:97)


Results :

Failed tests: 
  TestZKDelegationTokenSecretManager.testMultiNodeOperations:97 Expected 
InvalidToken

Tests run: 5, Failures: 1, Errors: 0, Skipped: 0
{code}

Note, the line number is off because I modified the test to run in loops.  It's 
this line:
https://github.com/apache/hadoop/blob/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/security/token/delegation/TestZKDelegationTokenSecretManager.java#L90

 ZKDelegationTokenSecretManager never shuts down listenerThreadPool
 --

 Key: HADOOP-11157
 URL: https://issues.apache.org/jira/browse/HADOOP-11157
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Affects Versions: 2.6.0
Reporter: Gregory Chanan
Assignee: Arun Suresh
 Attachments: HADOOP-11157.2.patch, HADOOP-11157.3.patch, 
 HADOOP-11157.4.patch, HADOOP-11157.patch, HADOOP-11157.patch


 I'm trying to integrate Solr with the DelegationTokenAuthenticationFilter and 
 running into this issue.  The solr unit tests look for leaked threads and 
 when I started using the ZKDelegationTokenSecretManager it started reporting 
 leaks.  Shutting down the listenerThreadPool after the objects that use it 
 resolves the leaked-thread errors.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11157) ZKDelegationTokenSecretManager never shuts down listenerThreadPool

2014-11-04 Thread Gregory Chanan (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11157?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14196509#comment-14196509
 ] 

Gregory Chanan commented on HADOOP-11157:
-

Sorry, permanent link: 
https://github.com/apache/hadoop/blob/2bb327eb939f57626d3dac10f7016ed634375d94/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/security/token/delegation/TestZKDelegationTokenSecretManager.java#L90

 ZKDelegationTokenSecretManager never shuts down listenerThreadPool
 --

 Key: HADOOP-11157
 URL: https://issues.apache.org/jira/browse/HADOOP-11157
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Affects Versions: 2.6.0
Reporter: Gregory Chanan
Assignee: Arun Suresh
 Attachments: HADOOP-11157.2.patch, HADOOP-11157.3.patch, 
 HADOOP-11157.4.patch, HADOOP-11157.patch, HADOOP-11157.patch


 I'm trying to integrate Solr with the DelegationTokenAuthenticationFilter and 
 running into this issue.  The solr unit tests look for leaked threads and 
 when I started using the ZKDelegationTokenSecretManager it started reporting 
 leaks.  Shutting down the listenerThreadPool after the objects that use it 
 resolves the leaked-thread errors.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11157) ZKDelegationTokenSecretManager never shuts down listenerThreadPool

2014-10-30 Thread Gregory Chanan (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11157?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14191152#comment-14191152
 ] 

Gregory Chanan commented on HADOOP-11157:
-

Some notes:
{code}
private void processTokenAddOrUpdate(ChildData data) throws IOException {
Stat stat = null;
try {
  stat = zkClient.checkExists().forPath(data.getPath());
} catch (Exception e) {
  LOG.warn("Could not get path for Token Add/Update notification.. going to 
update !!", e);
  stat = null;
}
{code}
I don't think setting stat to null is necessary, and I don't understand the 
warning -- since stat is null, the rest of the function won't do anything 
anyway, will it?

{code}
+  stat = zkClient.checkExists().forPath(data.getPath());
+} catch (Exception e) {
+  LOG.warn("Could not get path for Token Delete notification.. going to 
delete from localcache !!", e);
+  stat = null;
+}
{code}
Again, setting stat to null doesn't seem necessary.

{code}
+// Check if Token has already been cancelled..
+if (stat == null) {
{code}
Here, and in the opposite case where stat == null on the add/remove, don't we 
want to handle those cases?  We aren't guaranteed to see every notification 
(http://zookeeper.apache.org/doc/trunk/zookeeperProgrammers.html#ch_zkWatches). 
 Should we just have one handle function that runs the logic based on the 
current state?  (A sketch of that shape follows after the next point.)

- It would be nice if there were a test for starting a secret manager after a 
delegation token on another secret manager has already been created, and 
verifying it works.  Also, the same case but shutting down and restarting a 
secret manager and verifying the tokens (for itself or for others) still work.
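
A hedged sketch of that single-handler shape (the helper names are 
hypothetical):

{code}
private void processTokenChange(ChildData data) throws IOException {
  Stat stat;
  try {
    stat = zkClient.checkExists().forPath(data.getPath());
  } catch (Exception e) {
    throw new IOException("Could not check " + data.getPath(), e);
  }
  if (stat == null) {
    removeTokenFromLocalCache(data);    // znode gone: treat as delete/cancel
  } else {
    addOrUpdateTokenInLocalCache(data); // znode present: add or update
  }
}
{code}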

 ZKDelegationTokenSecretManager never shuts down listenerThreadPool
 --

 Key: HADOOP-11157
 URL: https://issues.apache.org/jira/browse/HADOOP-11157
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Affects Versions: 2.6.0
Reporter: Gregory Chanan
Assignee: Arun Suresh
 Attachments: HADOOP-11157.2.patch, HADOOP-11157.patch, 
 HADOOP-11157.patch


 I'm trying to integrate Solr with the DelegationTokenAuthenticationFilter and 
 running into this issue.  The solr unit tests look for leaked threads and 
 when I started using the ZKDelegationTokenSecretManager it started reporting 
 leaks.  Shutting down the listenerThreadPool after the objects that use it 
 resolves the leaked-thread errors.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-10911) hadoop.auth cookie after HADOOP-10710 still not proper according to RFC2109

2014-10-29 Thread Gregory Chanan (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10911?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14188065#comment-14188065
 ] 

Gregory Chanan commented on HADOOP-10911:
-

[~cnauroth] do you have a test?  Ideally we'd have a format that worked for as 
many users as possible.

Also, can you check out HADOOP-11068?  I changed the format there to what's 
output by the latest jetty but it hasn't been committed.  Perhaps that works 
for you?

 hadoop.auth cookie after HADOOP-10710 still not proper according to RFC2109
 ---

 Key: HADOOP-10911
 URL: https://issues.apache.org/jira/browse/HADOOP-10911
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Affects Versions: 2.5.0
Reporter: Gregory Chanan
 Fix For: 2.6.0

 Attachments: HADOOP-10911-tests.patch, HADOOP-10911.patch, 
 HADOOP-10911v2.patch, HADOOP-10911v3.patch, oozie-webconsole.stream


 I'm seeing the same problem reported in HADOOP-10710 (that is, httpclient is 
 unable to authenticate with servers running the authentication filter), even 
 with HADOOP-10710 applied.
 From my reading of the spec, the problem is as follows:
 Expires is not a valid directive according to the RFC, though it is mentioned 
 for backwards compatibility with netscape draft spec.  When httpclient sees 
 Expires, it parses according to the netscape draft spec, but note from 
 RFC2109:
 {code}
 Note that the Expires date format contains embedded spaces, and that old 
 cookies did not have quotes around values. 
 {code}
 and note that AuthenticationFilter puts quotes around the value:
 https://github.com/apache/hadoop-common/blob/6b11bff94ebf7d99b3a9e513edd813cb82538400/hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/server/AuthenticationFilter.java#L437-L439
 So httpclient's parsing appears to be kosher.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11157) ZKDelegationTokenSecretManager never shuts down listenerThreadPool

2014-10-28 Thread Gregory Chanan (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11157?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14187567#comment-14187567
 ] 

Gregory Chanan commented on HADOOP-11157:
-

[~kkambatl] while writing up a test as you requested, I found a number of other 
issues.  This will be kind of scatter-brained, sorry:

1) related to shutdown
- a) the ExpiredToken thread is shut down after the 
ZKDelegationTokenSecretManager's curator, which causes an exception to be 
thrown and the process to exit.  This can be addressed by shutting down the 
ExpiredToken thread before the curator.
- b) even with a), the ExpiredTokenThread is interrupted by 
AbstractDelegationTokenSecretManager.closeThreads...if the ExpiredTokenThread 
is currently rolling the master key or expiring tokens in ZK, the interruption 
will cause the process to exit.  It seems like this can be addressed by holding 
the noInterruptsLock while the ExpiredTokenThread is not sleeping (should be 
waiting), but I'm not sure we want to go that route.  Alternatively, we could 
deal with the interruption by checking whether it's expected (i.e. whether 
running is false); see the sketch just below.  One issue with that approach is 
that the ZKDelegationTokenSecretManager functions called from the 
ExpiredTokenThread don't throw or keep the interrupt flag; they just catch the 
exceptions and possibly rethrow them as runtime exceptions.  I'm not sure we 
can just swallow the InterruptedException -- presumably we need the ZK state to 
be in some reasonable state in case the process restarts?  Of course we have no 
tests of that...
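
To make that alternative concrete, a rough sketch of the expected-interrupt 
check (hypothetical; running, interval, and the two work methods stand in for 
the real fields and calls):
{code}
// hypothetical: treat an interrupt as expected only once running is false
public void run() {
  while (running) {
    try {
      rollMasterKeyIfNeeded();
      removeExpiredTokens();
      Thread.sleep(interval);
    } catch (InterruptedException ie) {
      if (running) {
        throw new RuntimeException("unexpected interrupt", ie);
      }
      Thread.currentThread().interrupt();  // shutdown in progress: exit quietly
    }
  }
}
{code}
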
2) not related to shutdown
- a) if you run TestZKDelegationTokenSecretManager#testCancelTokenSingleManager 
in a loop it will fail eventually.  It looks like the issue is how we deal with 
asynchronous ZK updates.
Consider the following code:
{code}
token = createToken()
cancelToken(token)
verifyToken(token)
{code}
cancelToken will delete it from the local cache and delete the znode.  But the 
curator client will get the create-child message (in the listener thread) and 
add the token back.  If that happens after cancelToken, the token will be added 
back until the listener thread gets the delete message.  (It also just 
occurred to me that this is happening in two different threads, but some of the 
structures, like the currentToken, aren't thread safe.)  The usual way to 
prevent this is to assign versions to the znodes so you can track whether you 
are getting an update for an old version.  I don't know how to deal with it in 
this case, where deletes are a possibility and there doesn't appear to be a 
master that is responsible for writing (i.e. what is preventing some other 
SecretManager from recreating the token just after delete -- how would versions 
help with that?).  This may affect the keyCache as well as the tokenCache.
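
For illustration only, the basic version-tracking idea might look like this 
(hypothetical sketch, imports omitted as in the snippets above; note it does 
nothing for the delete-then-recreate problem just described):
{code}
// hypothetical guard: drop listener events whose znode version is not newer
// than the last version we processed for that path
private final Map<String, Integer> lastSeenVersion =
    new ConcurrentHashMap<String, Integer>();

private boolean isStale(ChildData data) {
  int incoming = data.getStat().getVersion();
  Integer seen = lastSeenVersion.get(data.getPath());
  return seen != null && incoming <= seen;
}
{code}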

 ZKDelegationTokenSecretManager never shuts down listenerThreadPool
 --

 Key: HADOOP-11157
 URL: https://issues.apache.org/jira/browse/HADOOP-11157
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Affects Versions: 2.6.0
Reporter: Gregory Chanan
Assignee: Gregory Chanan
 Attachments: HADOOP-11157.patch, HADOOP-11157.patch


 I'm trying to integrate Solr with the DelegationTokenAuthenticationFilter and 
 running into this issue.  The solr unit tests look for leaked threads and 
 when I started using the ZKDelegationTokenSecretManager it started reporting 
 leaks.  Shutting down the listenerThreadPool after the objects that use it 
 resolves the leaked-thread errors.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11157) ZKDelegationTokenSecretManager never shuts down listenerThreadPool

2014-10-28 Thread Gregory Chanan (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11157?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14187594#comment-14187594
 ] 

Gregory Chanan commented on HADOOP-11157:
-

Here's another issue I think could happen, but I have no test for:
1) set up two SecretManagers sharing zk
2) get a delegation token from one
3) use on both
4) renew on one around token expiration time

Then, both SecretManagers will run the token expiration code and possibly 
expire the newly renewed token.
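
A test for it might look roughly like this (all helper names hypothetical):
{code}
// hypothetical outline of the two-manager renewal race
ZKDelegationTokenSecretManager sm1 = startSecretManager(zkConnectString);
ZKDelegationTokenSecretManager sm2 = startSecretManager(zkConnectString);
Token<DelegationTokenIdentifier> token = getDelegationToken(sm1, "user");
verifyToken(sm1, token);
verifyToken(sm2, token);
// renew on sm1 right around the expiration time...
renewToken(sm1, token, "user");
// ...sm2's expiry thread may still act on the old expiration
verifyToken(sm2, token);  // can fail if sm2 expired the renewed token
{code}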

 ZKDelegationTokenSecretManager never shuts down listenerThreadPool
 --

 Key: HADOOP-11157
 URL: https://issues.apache.org/jira/browse/HADOOP-11157
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Affects Versions: 2.6.0
Reporter: Gregory Chanan
Assignee: Gregory Chanan
 Attachments: HADOOP-11157.patch, HADOOP-11157.patch


 I'm trying to integrate Solr with the DelegationTokenAuthenticationFilter and 
 running into this issue.  The solr unit tests look for leaked threads and 
 when I started using the ZKDelegationTokenSecretManager it started reporting 
 leaks.  Shutting down the listenerThreadPool after the objects that use it 
 resolves the leaked-thread errors.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11157) ZKDelegationTokenSecretManager never shuts down listenerThreadPool

2014-10-27 Thread Gregory Chanan (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11157?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14185623#comment-14185623
 ] 

Gregory Chanan commented on HADOOP-11157:
-

Test failure looks unrelated, passes locally for me.

 ZKDelegationTokenSecretManager never shuts down listenerThreadPool
 --

 Key: HADOOP-11157
 URL: https://issues.apache.org/jira/browse/HADOOP-11157
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Affects Versions: 2.6.0
Reporter: Gregory Chanan
Assignee: Gregory Chanan
 Attachments: HADOOP-11157.patch, HADOOP-11157.patch


 I'm trying to integrate Solr with the DelegationTokenAuthenticationFilter and 
 running into this issue.  The solr unit tests look for leaked threads and 
 when I started using the ZKDelegationTokenSecretManager it started reporting 
 leaks.  Shutting down the listenerThreadPool after the objects that use it 
 resolves the leaked-thread errors.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11157) ZKDelegationTokenSecretManager never shuts down listenerThreadPool

2014-10-23 Thread Gregory Chanan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11157?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gregory Chanan updated HADOOP-11157:

Attachment: HADOOP-11157.patch

Kicking off the job again.

 ZKDelegationTokenSecretManager never shuts down listenerThreadPool
 --

 Key: HADOOP-11157
 URL: https://issues.apache.org/jira/browse/HADOOP-11157
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Affects Versions: 2.6.0
Reporter: Gregory Chanan
Assignee: Gregory Chanan
 Attachments: HADOOP-11157.patch, HADOOP-11157.patch


 I'm trying to integrate Solr with the DelegationTokenAuthenticationFilter and 
 running into this issue.  The solr unit tests look for leaked threads and 
 when I started using the ZKDelegationTokenSecretManager it started reporting 
 leaks.  Shutting down the listenerThreadPool after the objects that use it 
 resolves the leaked-thread errors.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HADOOP-11167) ZKDelegationTokenSecretManager doesn't always handle zk node existing correctly

2014-10-06 Thread Gregory Chanan (JIRA)
Gregory Chanan created HADOOP-11167:
---

 Summary: ZKDelegationTokenSecretManager doesn't always handle zk 
node existing correctly
 Key: HADOOP-11167
 URL: https://issues.apache.org/jira/browse/HADOOP-11167
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Affects Versions: 2.6.0
Reporter: Gregory Chanan
Assignee: Gregory Chanan


The ZKDelegationTokenSecretManager is inconsistent in how it handles curator 
checkExists calls.  Sometimes it assumes a null response means the node exists, 
and sometimes it doesn't.  This causes it to be buggy in some cases.
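
For reference, the curator semantics the code should apply consistently (a 
minimal sketch):
{code}
// checkExists() returns a Stat when the node exists and null when it does not
Stat stat = zkClient.checkExists().forPath(path);
if (stat != null) {
  zkClient.setData().forPath(path, payload);  // node exists: update it
} else {
  zkClient.create().forPath(path, payload);   // node absent: create it
}
{code}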



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11167) ZKDelegationTokenSecretManager doesn't always handle zk node existing correctly

2014-10-06 Thread Gregory Chanan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11167?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gregory Chanan updated HADOOP-11167:

Attachment: HADOOP-11167.patch

Here's a patch that fixes the issue.  I'll work on a test now.

 ZKDelegationTokenSecretManager doesn't always handle zk node existing 
 correctly
 ---

 Key: HADOOP-11167
 URL: https://issues.apache.org/jira/browse/HADOOP-11167
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Affects Versions: 2.6.0
Reporter: Gregory Chanan
Assignee: Gregory Chanan
 Attachments: HADOOP-11167.patch


 The ZKDelegationTokenSecretManager is inconsistent in how it handles curator 
 checkExists calls.  Sometimes it assumes a null response means the node exists, 
 and sometimes it doesn't.  This causes it to be buggy in some cases.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11167) ZKDelegationTokenSecretManager doesn't always handle zk node existing correctly

2014-10-06 Thread Gregory Chanan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11167?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gregory Chanan updated HADOOP-11167:

Attachment: HADOOP-11167.patch

Here is a patch that includes a couple of tests:
- a renew-token test that verifies setData is used with ZK instead of create
- a cancel-token-and-verify test, which requires that the znode was actually deleted

The new tests fail without the code changes and pass with them; a rough outline 
is sketched below.
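
{code}
// hypothetical outline of the two new tests (helper names are approximations;
// see the patch for the real code)
Token<?> token = getDelegationToken(sm, "user");
renewToken(sm, token, "user");   // must go through setData(), not create()
cancelToken(sm, token, "user");
// cancel must actually delete the znode backing the token
assertNull(zkClient.checkExists().forPath(tokenZNodePath(token)));
{code}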

 ZKDelegationTokenSecretManager doesn't always handle zk node existing 
 correctly
 ---

 Key: HADOOP-11167
 URL: https://issues.apache.org/jira/browse/HADOOP-11167
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Affects Versions: 2.6.0
Reporter: Gregory Chanan
Assignee: Gregory Chanan
 Attachments: HADOOP-11167.patch, HADOOP-11167.patch


 The ZKDelegationTokenSecretManager is inconsistent in how it handles curator 
 checkExists calls.  Sometimes it assumes a null response means the node exists, 
 and sometimes it doesn't.  This causes it to be buggy in some cases.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HADOOP-11157) ZKDelegationTokenSecretManager never shuts down listenerThreadPool

2014-09-30 Thread Gregory Chanan (JIRA)
Gregory Chanan created HADOOP-11157:
---

 Summary: ZKDelegationTokenSecretManager never shuts down 
listenerThreadPool
 Key: HADOOP-11157
 URL: https://issues.apache.org/jira/browse/HADOOP-11157
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Affects Versions: 2.6.0
Reporter: Gregory Chanan
Assignee: Gregory Chanan


I'm trying to integrate Solr with the DelegationTokenAuthenticationFilter and 
running into this issue.  The solr unit tests look for leaked threads and when 
I started using the ZKDelegationTokenSecretManager it started reporting leaks.  
Shutting down the listenerThreadPool after the objects that use it resolves 
the leaked-thread errors.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11157) ZKDelegationTokenSecretManager never shuts down listenerThreadPool

2014-09-30 Thread Gregory Chanan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11157?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gregory Chanan updated HADOOP-11157:

Attachment: HADOOP-11157.patch

Here's a patch that addresses the issue.  I moved the listenerThreadPool to 
follow the same lifetime as the keyCache/tokenCache.  I'm not sure how to add a 
testcase because it is a thread leak; are there any existing tests that look 
for thread leaks?
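
If there isn't one, a simple check could be built by hand, something like this 
(a sketch, not from the patch; startAndStopSecretManager is a hypothetical 
lifecycle helper, assertTrue is JUnit's):
{code}
// hypothetical leak check: snapshot live thread names before the lifecycle,
// then assert no new threads survive afterwards
Set<String> before = new HashSet<String>();
for (Thread t : Thread.getAllStackTraces().keySet()) {
  before.add(t.getName());
}
startAndStopSecretManager();
for (Thread t : Thread.getAllStackTraces().keySet()) {
  assertTrue("leaked thread: " + t.getName(), before.contains(t.getName()));
}
{code}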

 ZKDelegationTokenSecretManager never shuts down listenerThreadPool
 --

 Key: HADOOP-11157
 URL: https://issues.apache.org/jira/browse/HADOOP-11157
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Affects Versions: 2.6.0
Reporter: Gregory Chanan
Assignee: Gregory Chanan
 Attachments: HADOOP-11157.patch


 I'm trying to integrate Solr with the DelegationTokenAuthenticationFilter and 
 running into this issue.  The solr unit tests look for leaked threads and 
 when I started using the ZKDelegationTokenSecretManager it started reporting 
 leaks.  Shutting down the listenerThreadPool after the objects that use it 
 resolves the leaked-thread errors.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11157) ZKDelegationTokenSecretManager never shuts down listenerThreadPool

2014-09-30 Thread Gregory Chanan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11157?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gregory Chanan updated HADOOP-11157:

Status: Patch Available  (was: Open)

 ZKDelegationTokenSecretManager never shuts down listenerThreadPool
 --

 Key: HADOOP-11157
 URL: https://issues.apache.org/jira/browse/HADOOP-11157
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Affects Versions: 2.6.0
Reporter: Gregory Chanan
Assignee: Gregory Chanan
 Attachments: HADOOP-11157.patch


 I'm trying to integrate Solr with the DelegationTokenAuthenticationFilter and 
 running into this issue.  The solr unit tests look for leaked threads and 
 when I started using the ZKDelegationTokenSecretManager it started reporting 
 leaks.  Shutting down the listenerThreadPool after the objects that use it 
 resolves the leaked-thread errors.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HADOOP-11087) cancel delegation token succeeds if actual token is a substring of passed token

2014-09-11 Thread Gregory Chanan (JIRA)
Gregory Chanan created HADOOP-11087:
---

 Summary: cancel delegation token succeeds if actual token is a 
substring of passed token
 Key: HADOOP-11087
 URL: https://issues.apache.org/jira/browse/HADOOP-11087
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Affects Versions: 2.6.0
Reporter: Gregory Chanan


I'm using the DelegationTokenAuthenticationFilter.  If I get a token via 
op=GETDELEGATIONTOKEN and pass that token with BOGUS appended via 
op=CANCELDELEGATIONTOKEN, the token is still successfully cancelled.  It looks 
like this is because Token.readFields knows the lengths of the token's fields 
and just crops off the rest.
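
To illustrate (a hypothetical sketch of the reported behavior, not verified 
code; getDelegationToken is a made-up helper):
{code}
// trailing garbage survives decoding because readFields reads
// length-prefixed fields and ignores whatever follows
Token<DelegationTokenIdentifier> good = getDelegationToken();
String goodStr = good.encodeToUrlString();
Token<DelegationTokenIdentifier> bogus = new Token<DelegationTokenIdentifier>();
bogus.decodeFromUrlString(goodStr + "BOGUS");
// bogus now matches good, so op=CANCELDELEGATIONTOKEN with it succeeds
{code}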



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11068) Match hadoop.auth cookie format to jetty output

2014-09-09 Thread Gregory Chanan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11068?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gregory Chanan updated HADOOP-11068:

Status: Patch Available  (was: Open)

 Match hadoop.auth cookie format to jetty output
 ---

 Key: HADOOP-11068
 URL: https://issues.apache.org/jira/browse/HADOOP-11068
 Project: Hadoop Common
  Issue Type: Improvement
  Components: security
Affects Versions: 2.6.0
Reporter: Gregory Chanan
Assignee: Gregory Chanan
 Attachments: HADOOP-11068.patch


 See: 
 https://issues.apache.org/jira/browse/HADOOP-10911?focusedCommentId=14111626&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14111626
 I posted the cookie format that jetty generates, but I attached a version of 
 the patch with an older format.  Note, because the tests are pretty 
 comprehensive, this cookie format works (it fixes the issue we were having 
 with Solr), but it would probably be better to match the format that jetty 
 generates.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HADOOP-11077) NPE if hosts not specified in ProxyUsers

2014-09-08 Thread Gregory Chanan (JIRA)
Gregory Chanan created HADOOP-11077:
---

 Summary: NPE if hosts not specified in ProxyUsers
 Key: HADOOP-11077
 URL: https://issues.apache.org/jira/browse/HADOOP-11077
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Affects Versions: 2.6.0
Reporter: Gregory Chanan
Assignee: Gregory Chanan


When using the TokenDelegationAuthenticationFilter, I noticed if I don't 
specify the hosts for a user/groups proxy user and then try to authenticate, I 
get an NPE rather than an AuthorizationException.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11077) NPE if hosts not specified in ProxyUsers

2014-09-08 Thread Gregory Chanan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11077?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gregory Chanan updated HADOOP-11077:

Status: Patch Available  (was: Open)

 NPE if hosts not specified in ProxyUsers
 

 Key: HADOOP-11077
 URL: https://issues.apache.org/jira/browse/HADOOP-11077
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Affects Versions: 2.6.0
Reporter: Gregory Chanan
Assignee: Gregory Chanan
 Attachments: HADOOP-11077.patch


 When using the TokenDelegationAuthenticationFilter, I noticed if I don't 
 specify the hosts for a user/groups proxy user and then try to authenticate, 
 I get an NPE rather than an AuthorizationException.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11077) NPE if hosts not specified in ProxyUsers

2014-09-08 Thread Gregory Chanan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11077?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gregory Chanan updated HADOOP-11077:

Attachment: HADOOP-11077.patch

Here's a simple patch against trunk including a test.

 NPE if hosts not specified in ProxyUsers
 

 Key: HADOOP-11077
 URL: https://issues.apache.org/jira/browse/HADOOP-11077
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Affects Versions: 2.6.0
Reporter: Gregory Chanan
Assignee: Gregory Chanan
 Attachments: HADOOP-11077.patch


 When using the TokenDelegationAuthenticationFilter, I noticed if I don't 
 specify the hosts for a user/groups proxy user and then try to authenticate, 
 I get an NPE rather than an AuthorizationException.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HADOOP-11068) Match hadoop.auth cookie format to jetty output

2014-09-05 Thread Gregory Chanan (JIRA)
Gregory Chanan created HADOOP-11068:
---

 Summary: Match hadoop.auth cookie format to jetty output
 Key: HADOOP-11068
 URL: https://issues.apache.org/jira/browse/HADOOP-11068
 Project: Hadoop Common
  Issue Type: Improvement
  Components: security
Affects Versions: 2.6.0
Reporter: Gregory Chanan
Assignee: Gregory Chanan


See: 
https://issues.apache.org/jira/browse/HADOOP-10911?focusedCommentId=14111626&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14111626

I posted the cookie format that jetty generates, but I attached a version of 
the patch with an older format.  Note, because the tests are pretty 
comprehensive, this cookie format works (it fixes the issue we were having with 
Solr), but it would probably be better to match the format that jetty generates.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11068) Match hadoop.auth cookie format to jetty output

2014-09-05 Thread Gregory Chanan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11068?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gregory Chanan updated HADOOP-11068:

Attachment: HADOOP-11068.patch

Here's a patch that matches the jetty format.  I've written it assuming 
HADOOP-10911 is committed, though I don't see it in the source repo, at least 
on github.  Can you take a look [~tucu00]?

 Match hadoop.auth cookie format to jetty output
 ---

 Key: HADOOP-11068
 URL: https://issues.apache.org/jira/browse/HADOOP-11068
 Project: Hadoop Common
  Issue Type: Improvement
  Components: security
Affects Versions: 2.6.0
Reporter: Gregory Chanan
Assignee: Gregory Chanan
 Attachments: HADOOP-11068.patch


 See: 
 https://issues.apache.org/jira/browse/HADOOP-10911?focusedCommentId=14111626&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14111626
 I posted the cookie format that jetty generates, but I attached a version of 
 the patch with an older format.  Note, because the tests are pretty 
 comprehensive, this cookie format works (it fixes the issue we were having 
 with Solr), but it would probably be better to match the format that jetty 
 generates.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-10911) hadoop.auth cookie after HADOOP-10710 still not proper according to RFC2109

2014-08-26 Thread Gregory Chanan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10911?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gregory Chanan updated HADOOP-10911:


Attachment: HADOOP-10911v3.patch

Latest version of the patch.

This patch has the following changes from the previous patch:
- Adds a check that a cookie with a proper token is included with each POST 
request.  This guarantees that the cookies are being handled correctly (i.e. 
the test can't pass by just redoing negotiate on each request).  The previous 
patch had a test for this with httpclient, but not with AuthenticatedURL.
- Changes the cookie format.  Note with the new tests, we are checking {jetty, 
tomcat} x {AuthenticatedURL, HttpClient}.  So we should be pretty confident in 
any cookie that passes all those tests.  To generate the cookie format, I used 
jetty 8.1.15.v20140411.  (NOTE: I didn't actually use the Cookie class and the 
3.0 servlet API, I called the helper function jetty uses to produce the format: 
http://archive.eclipse.org/jetty/9.0.0.RC0/apidocs/org/eclipse/jetty/http/HttpFields.html#addSetCookie%28java.lang.String,%20java.lang.String,%20java.lang.String,%20java.lang.String,%20long,%20java.lang.String,%20boolean,%20boolean,%20int%29
 ).  It produced a cookie like:
{code}
Set-Cookie=hadoop.auth=u=client&p=cli...@example.com&t=kerberos&e=1409128342379&s=R6rNnd4CcMV0bNtK1dNLiJr1ivk=;Expires=Mon, 31-Aug-2026 08:36:23 GMT;HttpOnly
{code}
This is if you set to use version 0 cookies, version 1 is identical with a 
Max-Age entry added:
{code}
Set-Cookie=hadoop.auth=u=client&p=cli...@example.com&t=kerberos&e=1409128342379&s=R6rNnd4CcMV0bNtK1dNLiJr1ivk=;Expires=Mon, 31-Aug-2026 08:36:23 GMT;Max-Age=379069291;HttpOnly
{code}
The Max-Age entry doesn't seem necessary, given that even the JDK docs say 
version 1 is experimental.  Note that I kept the spaces after the ; in our 
format, as they don't seem to affect anything and make the header easier to 
read.  Also, I tested the new format against Solr on a real cluster and it 
passed, while the old format failed.
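
For reference, the helper call looks roughly like this (a sketch against the 
jetty 8 HttpFields API linked above; signedToken and maxAgeSecs are 
illustrative variables, not names from the patch):
{code}
// hypothetical reconstruction of how the cookie above was produced
HttpFields fields = new HttpFields();
fields.addSetCookie("hadoop.auth", signedToken,
    null,        // domain
    null,        // path
    maxAgeSecs,  // maxAge
    null,        // comment
    false,       // isSecure
    true,        // isHttpOnly
    0);          // version 0
String setCookie = fields.getStringField("Set-Cookie");
{code}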

 hadoop.auth cookie after HADOOP-10710 still not proper according to RFC2109
 ---

 Key: HADOOP-10911
 URL: https://issues.apache.org/jira/browse/HADOOP-10911
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Affects Versions: 2.5.0
Reporter: Gregory Chanan
 Attachments: HADOOP-10911-tests.patch, HADOOP-10911.patch, 
 HADOOP-10911v2.patch, HADOOP-10911v3.patch


 I'm seeing the same problem reported in HADOOP-10710 (that is, httpclient is 
 unable to authenticate with servers running the authentication filter), even 
 with HADOOP-10710 applied.
 From my reading of the spec, the problem is as follows:
 Expires is not a valid directive according to the RFC, though it is mentioned 
 for backwards compatibility with netscape draft spec.  When httpclient sees 
 Expires, it parses according to the netscape draft spec, but note from 
 RFC2109:
 {code}
 Note that the Expires date format contains embedded spaces, and that old 
 cookies did not have quotes around values. 
 {code}
 and note that AuthenticationFilter puts quotes around the value:
 https://github.com/apache/hadoop-common/blob/6b11bff94ebf7d99b3a9e513edd813cb82538400/hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/server/AuthenticationFilter.java#L437-L439
 So httpclient's parsing appears to be kosher.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HADOOP-10911) hadoop.auth cookie after HADOOP-10710 still not proper according to RFC2109

2014-08-19 Thread Gregory Chanan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10911?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gregory Chanan updated HADOOP-10911:


Attachment: HADOOP-10911v2.patch

bq. On Max-Age & Expires, I don't think we want to break old browsers. It seems 
to me an HttpClient bug that uses the presence of Expires to go back to the old 
cookie format; the presence of Version=1 should trump. Can you dig on the 
HttpClient side?

This is a bit complicated -- see the discussion here: 
http://mail-archives.apache.org/mod_mbox/hc-httpclient-users/201408.mbox/%3C1406895602.17749.8.camel%40ubuntu%3E
In short, it's not a valid Version=1 cookie, but httpclient would like to be 
able to handle it anyway, see HTTPCLIENT-1546.

I added a patch that does the following:
1) Runs the TestKerberosAuthenticator test cases against Tomcat as well as 
Jetty, this exposes the bug in HADOOP-10379, which didn't get a test added in 
HADOOP-10710
2) Adds an httpclient test case to TestKerberosAuthenticator.  This does two 
things:
- Checks that the cookie is actually being processed.  Note that it's possible 
for the existing tests to pass by redoing the SPNego negotiation on each 
request, rather than relying on the cookie.  But the entity type we use in the 
test doesn't support repeating, so an exception is raised if the SPNego process 
repeats (see the sketch at the end of this comment).
- Verifies that httpclient works with our cookie format (probably not strictly 
necessary, but nice to have given httpclient's popularity).

So I think the test cases are pretty useful for catching regressions.

As for the format itself, I just chose a simple format that passes all the 
tests.  That seems like a reasonable improvement over what we have now, but I'm 
not married to the particular format.
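
The non-repeatable-entity trick from the second test, roughly (a hypothetical 
condensed sketch; the real test is in the patch):
{code}
// a request body that cannot be replayed: if the client falls back to a fresh
// SPNego negotiation instead of replaying the hadoop.auth cookie, the retry
// must resend the body and throws NonRepeatableRequestException
HttpPost post = new HttpPost(url);
InputStream body = new ByteArrayInputStream("POST".getBytes("UTF-8"));
post.setEntity(new InputStreamEntity(body, 4));  // isRepeatable() == false
HttpResponse response = httpClient.execute(post);
{code}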

 hadoop.auth cookie after HADOOP-10710 still not proper according to RFC2109
 ---

 Key: HADOOP-10911
 URL: https://issues.apache.org/jira/browse/HADOOP-10911
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Affects Versions: 2.5.0
Reporter: Gregory Chanan
 Attachments: HADOOP-10911-tests.patch, HADOOP-10911.patch, 
 HADOOP-10911v2.patch


 I'm seeing the same problem reported in HADOOP-10710 (that is, httpclient is 
 unable to authenticate with servers running the authentication filter), even 
 with HADOOP-10710 applied.
 From my reading of the spec, the problem is as follows:
 Expires is not a valid directive according to the RFC, though it is mentioned 
 for backwards compatibility with netscape draft spec.  When httpclient sees 
 Expires, it parses according to the netscape draft spec, but note from 
 RFC2109:
 {code}
 Note that the Expires date format contains embedded spaces, and that old 
 cookies did not have quotes around values. 
 {code}
 and note that AuthenticationFilter puts quotes around the value:
 https://github.com/apache/hadoop-common/blob/6b11bff94ebf7d99b3a9e513edd813cb82538400/hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/server/AuthenticationFilter.java#L437-L439
 So httpclient's parsing appears to be kosher.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-10911) hadoop.auth cookie after HADOOP-10710 still not proper according to RFC2109

2014-08-04 Thread Gregory Chanan (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10911?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14085540#comment-14085540
 ] 

Gregory Chanan commented on HADOOP-10911:
-

I tried the tests with the format used in HADOOP-10379 and they seem to pass.  
Let me see if I can come up with a test that fails for both that format and the 
HADOOP-10710 one.  Only then would I be confident in our cookie format.

 hadoop.auth cookie after HADOOP-10710 still not proper according to RFC2109
 ---

 Key: HADOOP-10911
 URL: https://issues.apache.org/jira/browse/HADOOP-10911
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Affects Versions: 2.5.0
Reporter: Gregory Chanan
 Attachments: HADOOP-10911-tests.patch, HADOOP-10911.patch


 I'm seeing the same problem reported in HADOOP-10710 (that is, httpclient is 
 unable to authenticate with servers running the authentication filter), even 
 with HADOOP-10710 applied.
 From my reading of the spec, the problem is as follows:
 Expires is not a valid directive according to the RFC, though it is mentioned 
 for backwards compatibility with netscape draft spec.  When httpclient sees 
 Expires, it parses according to the netscape draft spec, but note from 
 RFC2109:
 {code}
 Note that the Expires date format contains embedded spaces, and that old 
 cookies did not have quotes around values. 
 {code}
 and note that AuthenticationFilter puts quotes around the value:
 https://github.com/apache/hadoop-common/blob/6b11bff94ebf7d99b3a9e513edd813cb82538400/hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/server/AuthenticationFilter.java#L437-L439
 So httpclient's parsing appears to be kosher.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HADOOP-10911) hadoop.auth cookie after HADOOP-10710 still not proper according to RFC2109

2014-07-31 Thread Gregory Chanan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10911?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gregory Chanan updated HADOOP-10911:


Attachment: HADOOP-10911-tests.patch

Here are a couple of test cases.  You'll notice they fail with the current 
cookie format and pass without the quotes.

I'm still investigating httpclient's handling of the cookies, as we discussed 
above.

 hadoop.auth cookie after HADOOP-10710 still not proper according to RFC2109
 ---

 Key: HADOOP-10911
 URL: https://issues.apache.org/jira/browse/HADOOP-10911
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Affects Versions: 2.5.0
Reporter: Gregory Chanan
 Attachments: HADOOP-10911-tests.patch, HADOOP-10911.patch


 I'm seeing the same problem reported in HADOOP-10710 (that is, httpclient is 
 unable to authenticate with servers running the authentication filter), even 
 with HADOOP-10710 applied.
 From my reading of the spec, the problem is as follows:
 Expires is not a valid directive according to the RFC, though it is mentioned 
 for backwards compatibility with netscape draft spec.  When httpclient sees 
 Expires, it parses according to the netscape draft spec, but note from 
 RFC2109:
 {code}
 Note that the Expires date format contains embedded spaces, and that old 
 cookies did not have quotes around values. 
 {code}
 and note that AuthenticationFilter puts quotes around the value:
 https://github.com/apache/hadoop-common/blob/6b11bff94ebf7d99b3a9e513edd813cb82538400/hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/server/AuthenticationFilter.java#L437-L439
 So httpclient's parsing appears to be kosher.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-10911) hadoop.auth cookie after HADOOP-10710 still not proper according to RFC2109

2014-07-31 Thread Gregory Chanan (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10911?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14081656#comment-14081656
 ] 

Gregory Chanan commented on HADOOP-10911:
-

Did a little more digging.  Httpclient is indeed acting like I thought: if 
there is an Expires attribute, the cookie is parsed as a netscape cookie, even 
if there is a Version field.  I'll check with the httpclient folks why that 
is.  By the way, even if it were parsed as an RFC2109 cookie, it wouldn't parse 
correctly, because the Expires value has non-quoted spaces.

 hadoop.auth cookie after HADOOP-10710 still not proper according to RFC2109
 ---

 Key: HADOOP-10911
 URL: https://issues.apache.org/jira/browse/HADOOP-10911
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Affects Versions: 2.5.0
Reporter: Gregory Chanan
 Attachments: HADOOP-10911-tests.patch, HADOOP-10911.patch


 I'm seeing the same problem reported in HADOOP-10710 (that is, httpclient is 
 unable to authenticate with servers running the authentication filter), even 
 with HADOOP-10710 applied.
 From my reading of the spec, the problem is as follows:
 Expires is not a valid directive according to the RFC, though it is mentioned 
 for backwards compatibility with netscape draft spec.  When httpclient sees 
 Expires, it parses according to the netscape draft spec, but note from 
 RFC2109:
 {code}
 Note that the Expires date format contains embedded spaces, and that old 
 cookies did not have quotes around values. 
 {code}
 and note that AuthenticationFilter puts quotes around the value:
 https://github.com/apache/hadoop-common/blob/6b11bff94ebf7d99b3a9e513edd813cb82538400/hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/server/AuthenticationFilter.java#L437-L439
 So httpclient's parsing appears to be kosher.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-10911) hadoop.auth cookie after HADOOP-10710 still not proper according to RFC2109

2014-07-31 Thread Gregory Chanan (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10911?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14081682#comment-14081682
 ] 

Gregory Chanan commented on HADOOP-10911:
-

Sent an e-mail to the httpclient list.  FWIW, I think we should just remove the 
quotes, as in my original patch.  It's a valid cookie, at least according to 
the netscape draft spec, and it works with current versions of httpclient.  
That's good because it won't break existing clients and httpclient is popular 
enough to warrant consideration.  If they consider the above a bug and fix it 
in a later version, we can upgrade the required httpclient version and have a 
proper RFC2109 cookie format.

 hadoop.auth cookie after HADOOP-10710 still not proper according to RFC2109
 ---

 Key: HADOOP-10911
 URL: https://issues.apache.org/jira/browse/HADOOP-10911
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Affects Versions: 2.5.0
Reporter: Gregory Chanan
 Attachments: HADOOP-10911-tests.patch, HADOOP-10911.patch


 I'm seeing the same problem reported in HADOOP-10710 (that is, httpclient is 
 unable to authenticate with servers running the authentication filter), even 
 with HADOOP-10710 applied.
 From my reading of the spec, the problem is as follows:
 Expires is not a valid directive according to the RFC, though it is mentioned 
 for backwards compatibility with netscape draft spec.  When httpclient sees 
 Expires, it parses according to the netscape draft spec, but note from 
 RFC2109:
 {code}
 Note that the Expires date format contains embedded spaces, and that old 
 cookies did not have quotes around values. 
 {code}
 and note that AuthenticationFilter puts quotes around the value:
 https://github.com/apache/hadoop-common/blob/6b11bff94ebf7d99b3a9e513edd813cb82538400/hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/server/AuthenticationFilter.java#L437-L439
 So httpclient's parsing appears to be kosher.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-10911) hadoop.auth cookie after HADOOP-10710 still not proper according to RFC2109

2014-07-31 Thread Gregory Chanan (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10911?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14081684#comment-14081684
 ] 

Gregory Chanan commented on HADOOP-10911:
-

Thoughts [~tucu00]?

 hadoop.auth cookie after HADOOP-10710 still not proper according to RFC2109
 ---

 Key: HADOOP-10911
 URL: https://issues.apache.org/jira/browse/HADOOP-10911
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Affects Versions: 2.5.0
Reporter: Gregory Chanan
 Attachments: HADOOP-10911-tests.patch, HADOOP-10911.patch


 I'm seeing the same problem reported in HADOOP-10710 (that is, httpclient is 
 unable to authenticate with servers running the authentication filter), even 
 with HADOOP-10710 applied.
 From my reading of the spec, the problem is as follows:
 Expires is not a valid directive according to the RFC, though it is mentioned 
 for backwards compatibility with netscape draft spec.  When httpclient sees 
 Expires, it parses according to the netscape draft spec, but note from 
 RFC2109:
 {code}
 Note that the Expires date format contains embedded spaces, and that old 
 cookies did not have quotes around values. 
 {code}
 and note that AuthenticationFilter puts quotes around the value:
 https://github.com/apache/hadoop-common/blob/6b11bff94ebf7d99b3a9e513edd813cb82538400/hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/server/AuthenticationFilter.java#L437-L439
 So httpclient's parsing appears to be kosher.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (HADOOP-10911) hadoop.auth cookie after HADOOP-10710 still not proper according to RFC2109

2014-07-30 Thread Gregory Chanan (JIRA)
Gregory Chanan created HADOOP-10911:
---

 Summary: hadoop.auth cookie after HADOOP-10710 still not proper 
according to RFC2109
 Key: HADOOP-10911
 URL: https://issues.apache.org/jira/browse/HADOOP-10911
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Affects Versions: 2.5.0
Reporter: Gregory Chanan


I'm seeing the same problem reported in HADOOP-10710 (that is, httpclient is 
unable to authenticate with servers running the authentication filter), even 
with HADOOP-10710 applied.

From my reading of the spec, the problem is as follows:
Expires is not a valid directive according to the RFC, though it is mentioned 
for backwards compatibility with netscape draft spec.  When httpclient sees 
Expires, it parses according to the netscape draft spec, but note from 
RFC2109:
{code}
Note that the Expires date format contains embedded spaces, and that old 
cookies did not have quotes around values. 
{code}
and note that AuthenticationFilter puts quotes around the value:
https://github.com/apache/hadoop-common/blob/6b11bff94ebf7d99b3a9e513edd813cb82538400/hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/server/AuthenticationFilter.java#L437-L439

So httpclient's parsing appears to be kosher.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HADOOP-10911) hadoop.auth cookie after HADOOP-10710 still not proper according to RFC2109

2014-07-30 Thread Gregory Chanan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10911?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gregory Chanan updated HADOOP-10911:


Attachment: HADOOP-10911.patch

Here's a trivial patch that removes the quotes.

Alternatively, we could get rid of Expires or use Max-Age.

Finally, since this has been broken multiple times, perhaps we should add a 
test against httpclient directly?

 hadoop.auth cookie after HADOOP-10710 still not proper according to RFC2109
 ---

 Key: HADOOP-10911
 URL: https://issues.apache.org/jira/browse/HADOOP-10911
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Affects Versions: 2.5.0
Reporter: Gregory Chanan
 Attachments: HADOOP-10911.patch


 I'm seeing the same problem reported in HADOOP-10710 (that is, httpclient is 
 unable to authenticate with servers running the authentication filter), even 
 with HADOOP-10710 applied.
 From my reading of the spec, the problem is as follows:
 Expires is not a valid directive according to the RFC, though it is mentioned 
 for backwards compatibility with netscape draft spec.  When httpclient sees 
 Expires, it parses according to the netscape draft spec, but note from 
 RFC2109:
 {code}
 Note that the Expires date format contains embedded spaces, and that old 
 cookies did not have quotes around values. 
 {code}
 and note that AuthenticationFilter puts quotes around the value:
 https://github.com/apache/hadoop-common/blob/6b11bff94ebf7d99b3a9e513edd813cb82538400/hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/server/AuthenticationFilter.java#L437-L439
 So httpclient's parsing appears to be kosher.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-10911) hadoop.auth cookie after HADOOP-10710 still not proper according to RFC2109

2014-07-30 Thread Gregory Chanan (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10911?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14080217#comment-14080217
 ] 

Gregory Chanan commented on HADOOP-10911:
-

Thanks [~tucu00].

bq. Please don't remove the quotes.
Is this because the value can contain whitespace?

So you want to use Max-Age?  It seems it's not supported by older versions of 
IE (http://mrcoles.com/blog/cookies-max-age-vs-expires/) -- do we care about 
that?

 hadoop.auth cookie after HADOOP-10710 still not proper according to RFC2109
 ---

 Key: HADOOP-10911
 URL: https://issues.apache.org/jira/browse/HADOOP-10911
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Affects Versions: 2.5.0
Reporter: Gregory Chanan
 Attachments: HADOOP-10911.patch


 I'm seeing the same problem reported in HADOOP-10710 (that is, httpclient is 
 unable to authenticate with servers running the authentication filter), even 
 with HADOOP-10710 applied.
 From my reading of the spec, the problem is as follows:
 Expires is not a valid directive according to the RFC, though it is mentioned 
 for backwards compatibility with netscape draft spec.  When httpclient sees 
 Expires, it parses according to the netscape draft spec, but note from 
 RFC2109:
 {code}
 Note that the Expires date format contains embedded spaces, and that old 
 cookies did not have quotes around values. 
 {code}
 and note that AuthenticationFilter puts quotes around the value:
 https://github.com/apache/hadoop-common/blob/6b11bff94ebf7d99b3a9e513edd813cb82538400/hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/server/AuthenticationFilter.java#L437-L439
 So httpclient's parsing appears to be kosher.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-10911) hadoop.auth cookie after HADOOP-10710 still not proper according to RFC2109

2014-07-30 Thread Gregory Chanan (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10911?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14080313#comment-14080313
 ] 

Gregory Chanan commented on HADOOP-10911:
-

bq. on the quotes, we had them, they got removed, that broke things, we added 
them again. They don't do any harm if they are there.

It's a little more complicated -- HADOOP-10379 made multiple changes like 
removing the quotes and the Version field.  So it was the combination of 
changes that broke things, not specifically removing the quotes.

bq. On Max-Age & Expires, I don't think we want to break old browsers. It seems 
to me an HttpClient bug that uses the presence of Expires to go back to the old 
cookie format; the presence of Version=1 should trump. Can you dig on the 
HttpClient side?

Seems reasonable, I'll dig.

 hadoop.auth cookie after HADOOP-10710 still not proper according to RFC2109
 ---

 Key: HADOOP-10911
 URL: https://issues.apache.org/jira/browse/HADOOP-10911
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Affects Versions: 2.5.0
Reporter: Gregory Chanan
 Attachments: HADOOP-10911.patch


 I'm seeing the same problem reported in HADOOP-10710 (that is, httpclient is 
 unable to authenticate with servers running the authentication filter), even 
 with HADOOP-10710 applied.
 From my reading of the spec, the problem is as follows:
 Expires is not a valid directive according to the RFC, though it is mentioned 
 for backwards compatibility with netscape draft spec.  When httpclient sees 
 Expires, it parses according to the netscape draft spec, but note from 
 RFC2109:
 {code}
 Note that the Expires date format contains embedded spaces, and that old 
 cookies did not have quotes around values. 
 {code}
 and note that AuthenticationFilter puts quotes around the value:
 https://github.com/apache/hadoop-common/blob/6b11bff94ebf7d99b3a9e513edd813cb82538400/hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/server/AuthenticationFilter.java#L437-L439
 So httpclient's parsing appears to be kosher.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-10710) hadoop.auth cookie is not properly constructed according to RFC2109

2014-06-19 Thread Gregory Chanan (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10710?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14037899#comment-14037899
 ] 

Gregory Chanan commented on HADOOP-10710:
-

Does this not work?  I haven't looked closely: 
http://docs.oracle.com/javaee/6/api/javax/servlet/http/Cookie.html#setHttpOnly(boolean)
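
i.e., something like this (a sketch, assuming the Servlet 3.0 API is on the 
classpath; signedToken and isHttps are illustrative variables):
{code}
// the Servlet 3.0 Cookie API covers both flags without hand-building the header
Cookie cookie = new Cookie("hadoop.auth", signedToken);
cookie.setSecure(isHttps);
cookie.setHttpOnly(true);  // requires Servlet 3.0+
response.addCookie(cookie);
{code}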

 hadoop.auth cookie is not properly constructed according to RFC2109
 ---

 Key: HADOOP-10710
 URL: https://issues.apache.org/jira/browse/HADOOP-10710
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Affects Versions: 2.4.0
Reporter: Alejandro Abdelnur
Assignee: Juan Yu
 Attachments: HADOOP-10710.001.patch, HADOOP-10710.002.patch


 It seems that HADOOP-10379 introduced a bug on how hadoop.auth cookies are 
 being constructed.
 Before HADOOP-10379, cookies were constructed using Servlet's {{Cookie}} 
 class and corresponding {{HttpServletResponse}} methods. This was taking care 
 of setting attributes like 'Version=1' and double-quoting the cookie value if 
 necessary.
 HADOOP-10379 changed the Cookie creation to use a {{StringBuillder}} and 
 setting values and attributes by hand. This is not taking care of setting 
 required attributes like Version and escaping the cookie value.
 While this is not breaking HadoopAuth {{AuthenticatedURL}} access, it is 
 breaking access done using {{HtttpClient}}. I.e. Solr uses HttpClient and its 
 access is broken since this change.
 It seems that HADOOP-10379 main objective was to set the 'secure' attribute. 
 Note this can be done using the {{Cookie}} API.
 We should revert the cookie creation logic to use the {{Cookie}} API and take 
 care of the security flag via {{setSecure(boolean)}}.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HADOOP-10193) hadoop-auth's PseudoAuthenticationHandler can consume getInputStream

2013-12-30 Thread Gregory Chanan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10193?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gregory Chanan updated HADOOP-10193:


Attachment: HADOOP-10193v2.patch

Good idea, here's a new patch.

 hadoop-auth's PseudoAuthenticationHandler can consume getInputStream
 

 Key: HADOOP-10193
 URL: https://issues.apache.org/jira/browse/HADOOP-10193
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Reporter: Gregory Chanan
Assignee: Gregory Chanan
Priority: Minor
 Fix For: 3.0.0

 Attachments: HADOOP-10193.patch, HADOOP-10193v2.patch


 I'm trying to use the AuthenticationFilter in front of Apache Solr.  The 
 issue I'm running into is that the PseudoAuthenticationHandler calls 
 ServletRequest.getParameter which affects future calls to 
 ServletRequest.getInputStream.  I.e. from the javadoc:
 {code}
 If the parameter data was sent in the request body, such as occurs with an 
 HTTP POST request, then reading the body directly via getInputStream() or 
 getReader() can interfere with the execution of this method. 
 {code}
 Solr calls getInputStream after the filter and errors result.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Created] (HADOOP-10193) hadoop-auth's PseudoAuthenticationHandler can consume getInputStream

2013-12-27 Thread Gregory Chanan (JIRA)
Gregory Chanan created HADOOP-10193:
---

 Summary: hadoop-auth's PseudoAuthenticationHandler can consume 
getInputStream
 Key: HADOOP-10193
 URL: https://issues.apache.org/jira/browse/HADOOP-10193
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Reporter: Gregory Chanan
Assignee: Gregory Chanan
Priority: Minor
 Fix For: 3.0.0


I'm trying to use the AuthenticationFilter in front of Apache Solr.  The issue 
I'm running into is that the PseudoAuthenticationHandler calls 
ServletRequest.getParameter which affects future calls to 
ServletRequest.getInputStream.  I.e. from the javadoc:

{code}
If the parameter data was sent in the request body, such as occurs with an HTTP 
POST request, then reading the body directly via getInputStream() or 
getReader() can interfere with the execution of this method. 
{code}

Solr calls getInputStream after the filter and errors result.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (HADOOP-10193) hadoop-auth's PseudoAuthenticationHandler can consume getInputStream

2013-12-27 Thread Gregory Chanan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10193?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gregory Chanan updated HADOOP-10193:


Attachment: HADOOP-10193.patch

Here's a patch that parses the query string instead of calling getParameter.  I 
used org.apache.http.client.utils.URLEncodedUtils to parse the query string.
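
The core of the approach looks like this (a condensed sketch, not the exact 
patch code; NameValuePair and URLEncodedUtils are from httpclient, imports 
omitted):
{code}
// parse user.name out of the raw query string so the request body and
// getInputStream() are never touched
String queryString = request.getQueryString();
List<NameValuePair> params = URLEncodedUtils.parse(
    queryString == null ? "" : queryString, Charset.forName("UTF-8"));
String userName = null;
for (NameValuePair p : params) {
  if ("user.name".equals(p.getName())) {
    userName = p.getValue();
    break;
  }
}
{code}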

 hadoop-auth's PseudoAuthenticationHandler can consume getInputStream
 

 Key: HADOOP-10193
 URL: https://issues.apache.org/jira/browse/HADOOP-10193
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Reporter: Gregory Chanan
Assignee: Gregory Chanan
Priority: Minor
 Fix For: 3.0.0

 Attachments: HADOOP-10193.patch


 I'm trying to use the AuthenticationFilter in front of Apache Solr.  The 
 issue I'm running into is that the PseudoAuthenticationHandler calls 
 ServletRequest.getParameter which affects future calls to 
 ServletRequest.getInputStream.  I.e. from the javadoc:
 {code}
 If the parameter data was sent in the request body, such as occurs with an 
 HTTP POST request, then reading the body directly via getInputStream() or 
 getReader() can interfere with the execution of this method. 
 {code}
 Solr calls getInputStream after the filter and errors result.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (HADOOP-10193) hadoop-auth's PseudoAuthenticationHandler can consume getInputStream

2013-12-27 Thread Gregory Chanan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10193?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gregory Chanan updated HADOOP-10193:


Status: Patch Available  (was: Open)

 hadoop-auth's PseudoAuthenticationHandler can consume getInputStream
 

 Key: HADOOP-10193
 URL: https://issues.apache.org/jira/browse/HADOOP-10193
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Reporter: Gregory Chanan
Assignee: Gregory Chanan
Priority: Minor
 Fix For: 3.0.0

 Attachments: HADOOP-10193.patch


 I'm trying to use the AuthenticationFilter in front of Apache Solr.  The 
 issue I'm running into is that the PseudoAuthenticationHandler calls 
 ServletRequest.getParameter which affects future calls to 
 ServletRequest.getInputStream.  I.e. from the javadoc:
 {code}
 If the parameter data was sent in the request body, such as occurs with an 
 HTTP POST request, then reading the body directly via getInputStream() or 
 getReader() can interfere with the execution of this method. 
 {code}
 Solr calls getInputStream after the filter and errors result.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)