[jira] [Resolved] (HADOOP-18365) Updated addresses are still accessed using the old IP address

2022-08-22 Thread Michael Stack (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18365?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Stack resolved HADOOP-18365.

Hadoop Flags: Reviewed
  Resolution: Fixed

PR merged to branch-3.3 and to trunk. Resolving. Thanks for the contribution, 
[~svaughan].

> Updated addresses are still accessed using the old IP address
> -
>
> Key: HADOOP-18365
> URL: https://issues.apache.org/jira/browse/HADOOP-18365
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: common
>Affects Versions: 3.4.0, 3.3.9
> Environment: Demonstrated in a Kubernetes environment running Java 11.
>Reporter: Steve Vaughan
>Assignee: Steve Vaughan
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0, 3.3.9
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> When the IPC Client recognizes that an IP address has changed, it updates the 
> server field and logs a message:
> Address change detected. Old: 
> journalnode-1.journalnode.hdfs.svc.cluster.local/10.1.0.178:8485 New: 
> journalnode-1.journalnode.hdfs.svc.cluster.local/10.1.0.182:8485
> Although the change is detected, the client will continue to connect to the 
> old IP address, resulting in repeated log messages.  This is seen in managed 
> environments when JournalNode syncing is enabled and a JournalNode is 
> restarted, with the remaining nodes in the set repeatedly logging this 
> message when syncing to the restarted JournalNode.
> The source of the problem is that the remoteId.address is not updated.
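The fix amounts to refreshing the cached address once a change is detected. A minimal sketch of the idea follows; the class and method names are illustrative stand-ins for the IPC client's remoteId handling, not the actual patch:

```java
import java.net.InetSocketAddress;

// Hypothetical stand-in for the IPC client's remoteId: the key point is that
// on an address change the cached InetSocketAddress must be replaced, not
// merely logged.
public class ConnectionId {
    private InetSocketAddress address;

    public ConnectionId(InetSocketAddress address) {
        this.address = address;
    }

    public InetSocketAddress getAddress() {
        return address;
    }

    /** Replace the cached address if the freshly resolved one differs. */
    public boolean updateAddressIfChanged(InetSocketAddress fresh) {
        if (!fresh.equals(address)) {
            System.out.println("Address change detected. Old: " + address
                + " New: " + fresh);
            address = fresh; // the step the bug report says was missing
            return true;
        }
        return false;
    }

    public static void main(String[] args) {
        ConnectionId id = new ConnectionId(new InetSocketAddress("10.1.0.178", 8485));
        // First call sees the new IP and updates; the second is a no-op.
        System.out.println(id.updateAddressIfChanged(new InetSocketAddress("10.1.0.182", 8485)));
        System.out.println(id.updateAddressIfChanged(new InetSocketAddress("10.1.0.182", 8485)));
    }
}
```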



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Resolved] (HADOOP-18228) Update hadoop-vote to use HADOOP_RC_VERSION dir

2022-05-16 Thread Michael Stack (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18228?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Stack resolved HADOOP-18228.

Fix Version/s: 3.4.0
 Hadoop Flags: Reviewed
   Resolution: Fixed

> Update hadoop-vote to use HADOOP_RC_VERSION dir
> ---
>
> Key: HADOOP-18228
> URL: https://issues.apache.org/jira/browse/HADOOP-18228
> Project: Hadoop Common
>  Issue Type: Task
>Reporter: Viraj Jasani
>Assignee: Viraj Jasani
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 3.4.0
>
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> Recent changes in the release script require a minor change in hadoop-vote 
> to use the Hadoop RC version dir before verifying the signature and checksum 
> of .tar.gz files.



--
This message was sent by Atlassian Jira
(v8.20.7#820007)




[jira] [Updated] (HADOOP-17725) Improve error message for token providers in ABFS

2022-03-02 Thread Michael Stack (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17725?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Stack updated HADOOP-17725:
---
Fix Version/s: 3.3.3
   (was: 3.3.2)

> Improve error message for token providers in ABFS
> -
>
> Key: HADOOP-17725
> URL: https://issues.apache.org/jira/browse/HADOOP-17725
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/azure, hadoop-thirdparty
>Affects Versions: 3.3.0
>Reporter: Ivan Sadikov
>Assignee: Viraj Jasani
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.3.3
>
>  Time Spent: 7.5h
>  Remaining Estimate: 0h
>
> It would be good to improve error messages for token providers in ABFS. 
> Currently, when a configuration key is not found or mistyped, the error is 
> not very clear on what went wrong. It would be good to indicate that the key 
> was required but not found in Hadoop configuration when creating a token 
> provider.
> For example, when running the following code:
> {code:java}
> import org.apache.hadoop.conf._
> import org.apache.hadoop.fs._
> val conf = new Configuration()
> conf.set("fs.azure.account.auth.type", "OAuth")
> conf.set("fs.azure.account.oauth.provider.type", 
> "org.apache.hadoop.fs.azurebfs.oauth2.ClientCredsTokenProvider")
> conf.set("fs.azure.account.oauth2.client.id", "my-client-id")
> // 
> conf.set("fs.azure.account.oauth2.client.secret.my-account.dfs.core.windows.net",
>  "my-secret")
> conf.set("fs.azure.account.oauth2.client.endpoint", "my-endpoint")
> val path = new Path("abfss://contai...@my-account.dfs.core.windows.net/")
> val fs = path.getFileSystem(conf)
> fs.getFileStatus(path){code}
> The following exception is thrown:
> {code:java}
> TokenAccessProviderException: Unable to load OAuth token provider class.
> ...
> Caused by: UncheckedExecutionException: java.lang.NullPointerException: 
> clientSecret
> ...
> Caused by: NullPointerException: clientSecret {code}
> which does not indicate which configuration key failed to load.
>  
> IMHO, it would be good if the exception was something like this:
> {code:java}
> TokenAccessProviderException: Unable to load OAuth token provider class.
> ...
> Caused by: ConfigurationPropertyNotFoundException: Configuration property 
> fs.azure.account.oauth2.client.secret not found. {code}
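The improvement described above boils down to naming the missing key when the lookup fails. A rough sketch under assumed names (ConfigurationPropertyNotFoundException is modelled here as a plain RuntimeException subclass; this is not the ABFS code itself):

```java
import java.util.HashMap;
import java.util.Map;

// Sketch: fail fast with an error message that names the missing key.
public class RequiredConfig {

    static class ConfigurationPropertyNotFoundException extends RuntimeException {
        ConfigurationPropertyNotFoundException(String key) {
            super("Configuration property " + key + " not found.");
        }
    }

    /** Return the value for key, or throw an exception naming the absent key. */
    static String getRequired(Map<String, String> conf, String key) {
        String value = conf.get(key);
        if (value == null) {
            throw new ConfigurationPropertyNotFoundException(key);
        }
        return value;
    }

    public static void main(String[] args) {
        Map<String, String> conf = new HashMap<>();
        conf.put("fs.azure.account.oauth2.client.id", "my-client-id");
        try {
            getRequired(conf, "fs.azure.account.oauth2.client.secret");
        } catch (ConfigurationPropertyNotFoundException e) {
            // Prints: Configuration property fs.azure.account.oauth2.client.secret not found.
            System.out.println(e.getMessage());
        }
    }
}
```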



--
This message was sent by Atlassian Jira
(v8.20.1#820001)




[jira] [Commented] (HADOOP-17665) Ignore missing keystore configuration in reloading mechanism

2021-05-17 Thread Michael Stack (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17665?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17345908#comment-17345908
 ] 

Michael Stack commented on HADOOP-17665:


Thank you [~weichiu]

> Ignore missing keystore configuration in reloading mechanism 
> -
>
> Key: HADOOP-17665
> URL: https://issues.apache.org/jira/browse/HADOOP-17665
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Borislav Iordanov
>Assignee: Borislav Iordanov
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.3.1, 3.4.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> When there is no configuration of keystore/truststore location, the reload 
> mechanism should be disabled.
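The guard this sub-task describes can be sketched as a simple check before starting the reload thread; the names below are illustrative, not the actual HttpServer2 code:

```java
// Sketch: only enable the reloading mechanism when a store location is configured.
public class ReloaderGuard {

    /** True when at least one store location is configured and worth watching. */
    static boolean shouldEnableReloader(String keystoreLocation, String truststoreLocation) {
        return (keystoreLocation != null && !keystoreLocation.isEmpty())
            || (truststoreLocation != null && !truststoreLocation.isEmpty());
    }

    public static void main(String[] args) {
        // No configuration at all: the reload mechanism stays disabled.
        System.out.println(shouldEnableReloader(null, null));                  // false
        System.out.println(shouldEnableReloader("/etc/ssl/server.jks", null)); // true
    }
}
```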



--
This message was sent by Atlassian Jira
(v8.3.4#803005)




[jira] [Commented] (HADOOP-16524) Automatic keystore reloading for HttpServer2

2021-05-10 Thread Michael Stack (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16524?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17342149#comment-17342149
 ] 

Michael Stack commented on HADOOP-16524:


I merged the sub-task PR to trunk and branch-3.3. Where else should it go, do 
you know, [~ayushtkn]? Thanks.

> Automatic keystore reloading for HttpServer2
> 
>
> Key: HADOOP-16524
> URL: https://issues.apache.org/jira/browse/HADOOP-16524
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Kihwal Lee
>Assignee: Borislav Iordanov
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.3.1, 3.4.0
>
> Attachments: HADOOP-16524.patch
>
>  Time Spent: 5h 50m
>  Remaining Estimate: 0h
>
> Jetty 9 simplified reloading of keystore.   This allows hadoop daemon's SSL 
> cert to be updated in place without having to restart the service.
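The reload mechanism rests on noticing that the keystore file changed on disk. Below is a standalone sketch of just the detection step; the real change wires reloading into Jetty's SSL machinery, and KeystoreWatcher is a made-up name:

```java
import java.io.File;

// Sketch: remember the keystore's last-modified time and report when it moves,
// which is when the SSL context should be reloaded in place.
public class KeystoreWatcher {
    private final File keystore;
    private long lastSeen;

    public KeystoreWatcher(File keystore) {
        this.keystore = keystore;
        this.lastSeen = keystore.lastModified();
    }

    /** True when the keystore file changed since the last check. */
    public boolean needsReload() {
        long now = keystore.lastModified();
        if (now != lastSeen) {
            lastSeen = now;
            return true;
        }
        return false;
    }

    public static void main(String[] args) throws Exception {
        File ks = File.createTempFile("keystore", ".jks");
        ks.deleteOnExit();
        KeystoreWatcher watcher = new KeystoreWatcher(ks);
        System.out.println(watcher.needsReload()); // false: nothing changed yet
        ks.setLastModified(ks.lastModified() + 5000); // simulate a cert rotation
        System.out.println(watcher.needsReload()); // true: time to reload
    }
}
```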



--
This message was sent by Atlassian Jira
(v8.3.4#803005)




[jira] [Commented] (HADOOP-17665) Ignore missing keystore configuration in reloading mechanism

2021-05-10 Thread Michael Stack (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17665?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17342141#comment-17342141
 ] 

Michael Stack commented on HADOOP-17665:


Merged to trunk and branch-3.3. It doesn't apply to branch-3. Leaving open 
until I figure out whether that is expected.

> Ignore missing keystore configuration in reloading mechanism 
> -
>
> Key: HADOOP-17665
> URL: https://issues.apache.org/jira/browse/HADOOP-17665
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Borislav Iordanov
>Assignee: Borislav Iordanov
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> When there is no configuration of keystore/truststore location, the reload 
> mechanism should be disabled.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)




[jira] [Commented] (HADOOP-16524) Automatic keystore reloading for HttpServer2

2021-05-04 Thread Michael Stack (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16524?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17339100#comment-17339100
 ] 

Michael Stack commented on HADOOP-16524:


Thanks, [~borislav.iordanov]. Is that OK by you, [~ayushtkn]? Otherwise I'll 
revert. Thanks.

> Automatic keystore reloading for HttpServer2
> 
>
> Key: HADOOP-16524
> URL: https://issues.apache.org/jira/browse/HADOOP-16524
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Kihwal Lee
>Assignee: Borislav Iordanov
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.3.1, 3.4.0
>
> Attachments: HADOOP-16524.patch
>
>  Time Spent: 5h 50m
>  Remaining Estimate: 0h
>
> Jetty 9 simplified reloading of keystore.   This allows hadoop daemon's SSL 
> cert to be updated in place without having to restart the service.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)




[jira] [Commented] (HADOOP-16524) Automatic keystore reloading for HttpServer2

2021-05-04 Thread Michael Stack (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16524?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17339022#comment-17339022
 ] 

Michael Stack commented on HADOOP-16524:


Thanks for following up, [~ayushtkn]. Let's see if Boris has any comeback; 
otherwise I'll revert.

> Automatic keystore reloading for HttpServer2
> 
>
> Key: HADOOP-16524
> URL: https://issues.apache.org/jira/browse/HADOOP-16524
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Kihwal Lee
>Assignee: Borislav Iordanov
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.3.1, 3.4.0
>
> Attachments: HADOOP-16524.patch
>
>  Time Spent: 5h 50m
>  Remaining Estimate: 0h
>
> Jetty 9 simplified reloading of keystore.   This allows hadoop daemon's SSL 
> cert to be updated in place without having to restart the service.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)




[jira] [Commented] (HADOOP-16524) Automatic keystore reloading for HttpServer2

2021-04-21 Thread Michael Stack (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16524?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17326692#comment-17326692
 ] 

Michael Stack commented on HADOOP-16524:


bq. What I could do is create a PR with a more helpful error that says "No 
keystore configured for HTTPS" or some such?

Make a sub-issue here, [~borislav.iordanov]?

> Automatic keystore reloading for HttpServer2
> 
>
> Key: HADOOP-16524
> URL: https://issues.apache.org/jira/browse/HADOOP-16524
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Kihwal Lee
>Assignee: Borislav Iordanov
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.3.1, 3.4.0
>
> Attachments: HADOOP-16524.patch
>
>  Time Spent: 5h 50m
>  Remaining Estimate: 0h
>
> Jetty 9 simplified reloading of keystore.   This allows hadoop daemon's SSL 
> cert to be updated in place without having to restart the service.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)




[jira] [Commented] (HADOOP-16524) Automatic keystore reloading for HttpServer2

2021-04-13 Thread Michael Stack (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16524?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17320580#comment-17320580
 ] 

Michael Stack commented on HADOOP-16524:


ACK, [~ayushtkn]. Thanks for the ping. Looking...

> Automatic keystore reloading for HttpServer2
> 
>
> Key: HADOOP-16524
> URL: https://issues.apache.org/jira/browse/HADOOP-16524
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Kihwal Lee
>Assignee: Borislav Iordanov
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.3.1, 3.4.0
>
> Attachments: HADOOP-16524.patch
>
>  Time Spent: 5h 50m
>  Remaining Estimate: 0h
>
> Jetty 9 simplified reloading of keystore.   This allows hadoop daemon's SSL 
> cert to be updated in place without having to restart the service.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)




[jira] [Resolved] (HADOOP-16524) Automatic keystore reloading for HttpServer2

2021-03-31 Thread Michael Stack (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16524?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Stack resolved HADOOP-16524.

Resolution: Fixed

Resolving again. Thanks for the feature contribution, [~borislav.iordanov].

> Automatic keystore reloading for HttpServer2
> 
>
> Key: HADOOP-16524
> URL: https://issues.apache.org/jira/browse/HADOOP-16524
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Kihwal Lee
>Assignee: Borislav Iordanov
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.3.1, 3.4.0
>
> Attachments: HADOOP-16524.patch
>
>  Time Spent: 5h 50m
>  Remaining Estimate: 0h
>
> Jetty 9 simplified reloading of keystore.   This allows hadoop daemon's SSL 
> cert to be updated in place without having to restart the service.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)




[jira] [Commented] (HADOOP-16524) Automatic keystore reloading for HttpServer2

2021-03-31 Thread Michael Stack (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16524?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17312594#comment-17312594
 ] 

Michael Stack commented on HADOOP-16524:


Pushed the new PR that fixes the YARN issue to branch-3.3 and trunk (it took a 
few attempts to get the commit message format right). Ran the PR a few times and 
got different flaky test failures each time; none seemed related. Please shout 
if we broke anything.

> Automatic keystore reloading for HttpServer2
> 
>
> Key: HADOOP-16524
> URL: https://issues.apache.org/jira/browse/HADOOP-16524
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Kihwal Lee
>Assignee: Borislav Iordanov
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.3.1, 3.4.0
>
> Attachments: HADOOP-16524.patch
>
>  Time Spent: 5h 50m
>  Remaining Estimate: 0h
>
> Jetty 9 simplified reloading of keystore.   This allows hadoop daemon's SSL 
> cert to be updated in place without having to restart the service.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)




[jira] [Reopened] (HADOOP-16524) Automatic keystore reloading for HttpServer2

2021-01-11 Thread Michael Stack (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16524?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Stack reopened HADOOP-16524:


Reopening to revert the change until the failing YARN test is fixed.

> Automatic keystore reloading for HttpServer2
> 
>
> Key: HADOOP-16524
> URL: https://issues.apache.org/jira/browse/HADOOP-16524
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Kihwal Lee
>Assignee: Borislav Iordanov
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.3.1, 3.4.0
>
> Attachments: HADOOP-16524.patch
>
>  Time Spent: 5h 50m
>  Remaining Estimate: 0h
>
> Jetty 9 simplified reloading of keystore.   This allows hadoop daemon's SSL 
> cert to be updated in place without having to restart the service.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)




[jira] [Commented] (HADOOP-16524) Automatic keystore reloading for HttpServer2

2021-01-11 Thread Michael Stack (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16524?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17262785#comment-17262785
 ] 

Michael Stack commented on HADOOP-16524:


Reverted from trunk and branch-3.3.

> Automatic keystore reloading for HttpServer2
> 
>
> Key: HADOOP-16524
> URL: https://issues.apache.org/jira/browse/HADOOP-16524
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Kihwal Lee
>Assignee: Borislav Iordanov
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.3.1, 3.4.0
>
> Attachments: HADOOP-16524.patch
>
>  Time Spent: 5h 50m
>  Remaining Estimate: 0h
>
> Jetty 9 simplified reloading of keystore.   This allows hadoop daemon's SSL 
> cert to be updated in place without having to restart the service.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)




[jira] [Commented] (HADOOP-16524) Automatic keystore reloading for HttpServer2

2021-01-11 Thread Michael Stack (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16524?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17262777#comment-17262777
 ] 

Michael Stack commented on HADOOP-16524:


Sorry about that. Thanks for the ping. I applied it because 'all' tests passed, 
but it looks like we need the YARN tests to run too.

> Automatic keystore reloading for HttpServer2
> 
>
> Key: HADOOP-16524
> URL: https://issues.apache.org/jira/browse/HADOOP-16524
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Kihwal Lee
>Assignee: Borislav Iordanov
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.3.1, 3.4.0
>
> Attachments: HADOOP-16524.patch
>
>  Time Spent: 5h 50m
>  Remaining Estimate: 0h
>
> Jetty 9 simplified reloading of keystore.   This allows hadoop daemon's SSL 
> cert to be updated in place without having to restart the service.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)




[jira] [Updated] (HADOOP-16524) Automatic keystore reloading for HttpServer2

2021-01-08 Thread Michael Stack (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16524?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Stack updated HADOOP-16524:
---
Fix Version/s: 3.4.0
   3.3.1
 Hadoop Flags: Reviewed
 Release Note: 
Adds automatic reloading of the keystore.

Adds the new config below (default 10 seconds):

 ssl.{0}.stores.reload.interval

The refresh interval used to check whether the truststore or keystore 
certificate file has changed.
 Assignee: Borislav Iordanov  (was: Kihwal Lee)
   Resolution: Fixed
   Status: Resolved  (was: Patch Available)

Merged to trunk and branch-3.3. Thanks for the patch, [~borislav.iordanov] (I 
added you as a contributor and assigned you this issue).
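Assuming the {0} placeholder in the release note above is the store prefix used elsewhere in Hadoop's SSL configuration (e.g. "server" in ssl-server.xml), a configured interval might look like the fragment below. The substituted property name and the millisecond unit are assumptions for illustration, not taken from the patch:

```xml
<!-- Illustrative ssl-server.xml entry; the name substitution and units are assumed. -->
<property>
  <name>ssl.server.stores.reload.interval</name>
  <value>10000</value> <!-- check every 10 seconds, the stated default -->
</property>
```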

> Automatic keystore reloading for HttpServer2
> 
>
> Key: HADOOP-16524
> URL: https://issues.apache.org/jira/browse/HADOOP-16524
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Kihwal Lee
>Assignee: Borislav Iordanov
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.3.1, 3.4.0
>
> Attachments: HADOOP-16524.patch
>
>  Time Spent: 4h 40m
>  Remaining Estimate: 0h
>
> Jetty 9 simplified reloading of keystore.   This allows hadoop daemon's SSL 
> cert to be updated in place without having to restart the service.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)




[jira] [Commented] (HADOOP-16524) Automatic keystore reloading for HttpServer2

2021-01-08 Thread Michael Stack (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16524?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17261467#comment-17261467
 ] 

Michael Stack commented on HADOOP-16524:


Merged to trunk. Put up #2609 backport to branch-3.3.

> Automatic keystore reloading for HttpServer2
> 
>
> Key: HADOOP-16524
> URL: https://issues.apache.org/jira/browse/HADOOP-16524
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Kihwal Lee
>Assignee: Kihwal Lee
>Priority: Major
>  Labels: pull-request-available
> Attachments: HADOOP-16524.patch
>
>  Time Spent: 4h 20m
>  Remaining Estimate: 0h
>
> Jetty 9 simplified reloading of keystore.   This allows hadoop daemon's SSL 
> cert to be updated in place without having to restart the service.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)




[jira] [Commented] (HADOOP-16524) Automatic keystore reloading for HttpServer2

2020-12-09 Thread Michael Stack (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16524?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17246968#comment-17246968
 ] 

Michael Stack commented on HADOOP-16524:


The attached PR LGTM. Would appreciate someone else taking a look if they have a 
chance. I was looking to merge in a few days. Thanks.

> Automatic keystore reloading for HttpServer2
> 
>
> Key: HADOOP-16524
> URL: https://issues.apache.org/jira/browse/HADOOP-16524
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Kihwal Lee
>Assignee: Kihwal Lee
>Priority: Major
>  Labels: pull-request-available
> Attachments: HADOOP-16524.patch
>
>  Time Spent: 3.5h
>  Remaining Estimate: 0h
>
> Jetty 9 simplified reloading of keystore.   This allows hadoop daemon's SSL 
> cert to be updated in place without having to restart the service.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)




[jira] [Commented] (HADOOP-17288) Use shaded guava from thirdparty

2020-11-13 Thread Michael Stack (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17288?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17231687#comment-17231687
 ] 

Michael Stack commented on HADOOP-17288:


+1 on the backport. Will ease the move to 3.3.x\+

> Use shaded guava from thirdparty
> 
>
> Key: HADOOP-17288
> URL: https://issues.apache.org/jira/browse/HADOOP-17288
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Ayush Saxena
>Assignee: Ayush Saxena
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0
>
>  Time Spent: 3.5h
>  Remaining Estimate: 0h
>
> Use the shaded version of guava in hadoop-thirdparty



--
This message was sent by Atlassian Jira
(v8.3.4#803005)




[jira] [Commented] (HADOOP-17288) Use shaded guava from thirdparty

2020-10-07 Thread Michael Stack (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17288?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17209740#comment-17209740
 ] 

Michael Stack commented on HADOOP-17288:


[~ayushtkn] thinking out loud: 3.2.1 and 3.3.0 shipped with guava 27, right? 
This patch is for 3.4.0 and would revert the guava included by Hadoop to guava 
11 from 27 (though Hadoop will not use guava 11 itself). A regression on a 
library version is unexpected. The argument for the revert is that it will make 
it easier for folks running Hadoop versions older than 3.2.1/3.3.0 to migrate, 
especially for downstreamers like Hive where guava 11 is deeply embedded. Do I 
have that right? If so, I agree with this rationale. I suggest that the revert 
needs to be broadcast (on the dev list?) since it is such an unusual move. Thanks.

> Use shaded guava from thirdparty
> 
>
> Key: HADOOP-17288
> URL: https://issues.apache.org/jira/browse/HADOOP-17288
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Ayush Saxena
>Assignee: Ayush Saxena
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> Use the shaded version of guava in hadoop-thirdparty



--
This message was sent by Atlassian Jira
(v8.3.4#803005)




[jira] [Commented] (HADOOP-17288) Use shaded guava from thirdparty

2020-10-05 Thread Michael Stack (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17288?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17208255#comment-17208255
 ] 

Michael Stack commented on HADOOP-17288:


Looking good, [~ayushtkn]. Are you thinking that this patch would include the 
rollback to guava 11 in branch-3? Thanks.

> Use shaded guava from thirdparty
> 
>
> Key: HADOOP-17288
> URL: https://issues.apache.org/jira/browse/HADOOP-17288
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Ayush Saxena
>Assignee: Ayush Saxena
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Use the shaded version of guava in hadoop-thirdparty



--
This message was sent by Atlassian Jira
(v8.3.4#803005)




[jira] [Commented] (HADOOP-17288) Use shaded guava from thirdparty

2020-10-01 Thread Michael Stack (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17288?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17205644#comment-17205644
 ] 

Michael Stack commented on HADOOP-17288:


{quote}Ideally if the downstream has to upgrade guava, then this patch has no 
meaning.
{quote}
+1
{quote}Then we might need to shade them as well? May be {{curator}} can be one 
of those.
{quote}
Yes. Unfortunately the tangles start to compound fast when a dependency's 
dependency is also a Hadoop dependency (and the versions don't align). One thing 
to consider is removing problem dependencies (like curator) if they are not 
heavily used. Thanks, [~ayushtkn]

> Use shaded guava from thirdparty
> 
>
> Key: HADOOP-17288
> URL: https://issues.apache.org/jira/browse/HADOOP-17288
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Ayush Saxena
>Assignee: Ayush Saxena
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Use the shaded version of guava in hadoop-thirdparty



--
This message was sent by Atlassian Jira
(v8.3.4#803005)




[jira] [Updated] (HADOOP-17288) Use shaded guava from thirdparty

2020-10-01 Thread Michael Stack (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17288?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Stack updated HADOOP-17288:
---
Fix Version/s: 3.4.0

> Use shaded guava from thirdparty
> 
>
> Key: HADOOP-17288
> URL: https://issues.apache.org/jira/browse/HADOOP-17288
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Ayush Saxena
>Assignee: Ayush Saxena
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Use the shaded version of guava in hadoop-thirdparty



--
This message was sent by Atlassian Jira
(v8.3.4#803005)




[jira] [Commented] (HADOOP-17288) Use shaded guava from thirdparty

2020-10-01 Thread Michael Stack (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17288?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17205329#comment-17205329
 ] 

Michael Stack commented on HADOOP-17288:


{quote}but guava as of now would be still packaged as it is part of several 
transitive dependencies.
{quote}
Can you say more on the above? Is guava transitively included by Hadoop because 
Hadoop dependencies pull it in, or are you talking about downstreamers that 
expect Hadoop to provide guava to them (transitively)?

I'm wondering about the downstreamers whose apps use guava 11 because that's 
what Hadoop used until 3.3.0/3.2.1. They want to upgrade to 3.4. They'll have 
to do the work to upgrade to guava 27 because that is what 3.2.1/3.3.0 have, 
even though you've done all this work here. Seems a shame?

(I set the fix version to 3.4.0; thanks).

> Use shaded guava from thirdparty
> 
>
> Key: HADOOP-17288
> URL: https://issues.apache.org/jira/browse/HADOOP-17288
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Ayush Saxena
>Assignee: Ayush Saxena
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Use the shaded version of guava in hadoop-thirdparty



--
This message was sent by Atlassian Jira
(v8.3.4#803005)




[jira] [Commented] (HADOOP-17288) Use shaded guava from thirdparty

2020-09-30 Thread Michael Stack (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17288?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17205259#comment-17205259
 ] 

Michael Stack commented on HADOOP-17288:


[~ayushtkn] What is the approach here (big patch; being lazy)? Guava is in 
thirdparty, and now you are going to move all guava references in all of Hadoop 
to use the thirdparty version and remove the currently included guava 
completely? If so, sounds good to me (big, ugly, simple patch). What are you 
thinking of targeting as the fix version? 3.4.0? Thanks.
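Concretely, the big patch described above rewrites guava imports to point at the shaded copy. A non-runnable fragment showing the shape of the change; the org.apache.hadoop.thirdparty relocation prefix follows the hadoop-thirdparty convention, so verify it against the actual artifact:

```java
// Before: direct dependency on Guava
import com.google.common.base.Preconditions;

// After: shaded Guava relocated into hadoop-thirdparty
import org.apache.hadoop.thirdparty.com.google.common.base.Preconditions;
```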

> Use shaded guava from thirdparty
> 
>
> Key: HADOOP-17288
> URL: https://issues.apache.org/jira/browse/HADOOP-17288
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Ayush Saxena
>Assignee: Ayush Saxena
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Use the shaded version of guava in hadoop-thirdparty



--
This message was sent by Atlassian Jira
(v8.3.4#803005)




[jira] [Commented] (HADOOP-17254) Upgrade hbase to 1.2.6.1 on branch-2.10

2020-09-09 Thread Michael Stack (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17254?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17193302#comment-17193302
 ] 

Michael Stack commented on HADOOP-17254:


Yeah, upgrade if you can... 1.2.x, 1.3.x are EOL'd.

> Upgrade hbase to 1.2.6.1 on branch-2.10
> ---
>
> Key: HADOOP-17254
> URL: https://issues.apache.org/jira/browse/HADOOP-17254
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Masatake Iwasaki
>Assignee: Masatake Iwasaki
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 20m
>  Remaining Estimate: 0h
>







[jira] [Resolved] (HADOOP-16598) Backport "HADOOP-16558 [COMMON+HDFS] use protobuf-maven-plugin to generate protobuf classes" to all active branches

2019-11-01 Thread Michael Stack (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16598?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Stack resolved HADOOP-16598.

Resolution: Fixed

Reclosing after fixing the branch-2 and branch-2.9 commits (a revert followed by 
applying the appropriate patch, rather than a cherry-pick).

> Backport "HADOOP-16558 [COMMON+HDFS] use protobuf-maven-plugin to generate 
> protobuf classes" to all active branches
> ---
>
> Key: HADOOP-16598
> URL: https://issues.apache.org/jira/browse/HADOOP-16598
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: common
>Reporter: Duo Zhang
>Assignee: Duo Zhang
>Priority: Major
> Fix For: 3.0.4, 2.9.3, 3.1.4, 3.2.2, 2.10.1, 2.11.0
>
> Attachments: HADOOP-16598-branch-2-v1.patch, 
> HADOOP-16598-branch-2-v1.patch, HADOOP-16598-branch-2-v2.patch, 
> HADOOP-16598-branch-2.9-v1.patch, HADOOP-16598-branch-2.9-v1.patch, 
> HADOOP-16598-branch-2.9.patch, HADOOP-16598-branch-2.patch, 
> HADOOP-16598-branch-3.1.patch, HADOOP-16598-branch-3.2.patch
>
>







[jira] [Reopened] (HADOOP-16598) Backport "HADOOP-16558 [COMMON+HDFS] use protobuf-maven-plugin to generate protobuf classes" to all active branches

2019-11-01 Thread Michael Stack (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16598?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Stack reopened HADOOP-16598:


Thanks for checking. Reopening while I fix.

> Backport "HADOOP-16558 [COMMON+HDFS] use protobuf-maven-plugin to generate 
> protobuf classes" to all active branches
> ---
>
> Key: HADOOP-16598
> URL: https://issues.apache.org/jira/browse/HADOOP-16598
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: common
>Reporter: Duo Zhang
>Assignee: Duo Zhang
>Priority: Major
> Fix For: 3.0.4, 2.9.3, 3.1.4, 3.2.2, 2.10.1, 2.11.0
>
> Attachments: HADOOP-16598-branch-2-v1.patch, 
> HADOOP-16598-branch-2-v1.patch, HADOOP-16598-branch-2-v2.patch, 
> HADOOP-16598-branch-2.9-v1.patch, HADOOP-16598-branch-2.9-v1.patch, 
> HADOOP-16598-branch-2.9.patch, HADOOP-16598-branch-2.patch, 
> HADOOP-16598-branch-3.1.patch, HADOOP-16598-branch-3.2.patch
>
>







[jira] [Updated] (HADOOP-16598) Backport "HADOOP-16558 [COMMON+HDFS] use protobuf-maven-plugin to generate protobuf classes" to all active branches

2019-10-31 Thread Michael Stack (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16598?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Stack updated HADOOP-16598:
---
Fix Version/s: 2.11.0
   2.10.1
   3.2.2
   3.1.4
   2.9.3
   3.0.4
 Hadoop Flags: Reviewed
   Resolution: Fixed
   Status: Resolved  (was: Patch Available)

Pushed to branch-2.9 and up. Shout if I missed any. Thanks for the patch, [~zhangduo].

> Backport "HADOOP-16558 [COMMON+HDFS] use protobuf-maven-plugin to generate 
> protobuf classes" to all active branches
> ---
>
> Key: HADOOP-16598
> URL: https://issues.apache.org/jira/browse/HADOOP-16598
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: common
>Reporter: Duo Zhang
>Assignee: Duo Zhang
>Priority: Major
> Fix For: 3.0.4, 2.9.3, 3.1.4, 3.2.2, 2.10.1, 2.11.0
>
> Attachments: HADOOP-16598-branch-2-v1.patch, 
> HADOOP-16598-branch-2-v1.patch, HADOOP-16598-branch-2-v2.patch, 
> HADOOP-16598-branch-2.9-v1.patch, HADOOP-16598-branch-2.9-v1.patch, 
> HADOOP-16598-branch-2.9.patch, HADOOP-16598-branch-2.patch, 
> HADOOP-16598-branch-3.1.patch, HADOOP-16598-branch-3.2.patch
>
>







[jira] [Commented] (HADOOP-16598) Backport "HADOOP-16558 [COMMON+HDFS] use protobuf-maven-plugin to generate protobuf classes" to all active branches

2019-10-17 Thread Michael Stack (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16598?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16954173#comment-16954173
 ] 

Michael Stack commented on HADOOP-16598:


I tried the 3.2 patch by applying it, cleaning the world, moving protoc out of 
my PATH and then building. It got this far before it started looking for protoc:

{code}
[INFO] --- hadoop-maven-plugins:3.2.2-SNAPSHOT:protoc (compile-protoc) @ 
hadoop-yarn-api ---
[WARNING] [protoc, --version] failed: java.io.IOException: Cannot run program 
"protoc": error=2, No such file or directory
[ERROR] stdout
...
{code}

Does hadoop-yarn-api need the pom fix?

> Backport "HADOOP-16558 [COMMON+HDFS] use protobuf-maven-plugin to generate 
> protobuf classes" to all active branches
> ---
>
> Key: HADOOP-16598
> URL: https://issues.apache.org/jira/browse/HADOOP-16598
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: common
>Reporter: Duo Zhang
>Assignee: Duo Zhang
>Priority: Major
> Attachments: HADOOP-16598-branch-2-v1.patch, 
> HADOOP-16598-branch-2-v1.patch, HADOOP-16598-branch-2-v2.patch, 
> HADOOP-16598-branch-2.9-v1.patch, HADOOP-16598-branch-2.9-v1.patch, 
> HADOOP-16598-branch-2.9.patch, HADOOP-16598-branch-2.patch, 
> HADOOP-16598-branch-3.1.patch, HADOOP-16598-branch-3.2.patch
>
>







[jira] [Updated] (HADOOP-16600) StagingTestBase uses methods not available in Mockito 1.8.5 in branch-3.1

2019-10-17 Thread Michael Stack (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16600?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Stack updated HADOOP-16600:
---
Hadoop Flags: Reviewed
  Resolution: Fixed
  Status: Resolved  (was: Patch Available)

Pushed to branch-3.1.

Resolving. Thanks for the patch, [~zhangduo], and to the reviewers.

> StagingTestBase uses methods not available in Mockito 1.8.5 in branch-3.1
> -
>
> Key: HADOOP-16600
> URL: https://issues.apache.org/jira/browse/HADOOP-16600
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.1.0, 3.1.1, 3.1.2
>Reporter: Lisheng Sun
>Assignee: Duo Zhang
>Priority: Major
> Fix For: 3.1.4
>
> Attachments: HADOOP-16600-branch-3.1-v1.patch, 
> HADOOP-16600.branch-3.1.v1.patch
>
>
> details see HADOOP-15398
> Problem: hadoop trunk compilation is failing
> Root Cause:
> compilation error is coming from 
> org.apache.hadoop.fs.s3a.commit.staging.StagingTestBase. Compilation error is 
> "The method getArgumentAt(int, Class) is undefined for the 
> type InvocationOnMock".
> StagingTestBase is using getArgumentAt(int, Class) method 
> which is not available in mockito-all 1.8.5 version. getArgumentAt(int, 
> Class) method is available only from version 2.0.0-beta
> as follow code:
> {code:java}
> InitiateMultipartUploadRequest req = invocation.getArgumentAt(
> 0, InitiateMultipartUploadRequest.class);
> {code}
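
For reference, in Mockito 2.x the accessor was renamed: {{getArgumentAt(int, Class)}} was replaced by the generic {{getArgument(int)}}, so the snippet above would become something like the fragment below (a sketch only, assuming Mockito 2.x and the AWS SDK's {{InitiateMultipartUploadRequest}} on the classpath):

{code:java}
// Mockito 2.x: getArgumentAt(int, Class) is gone; use the generic getArgument(int).
// Fragment only -- assumes Mockito 2.x and the AWS SDK on the classpath.
InitiateMultipartUploadRequest req = invocation.getArgument(0);
{code}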






[jira] [Commented] (HADOOP-16600) StagingTestBase uses methods not available in Mockito 1.8.5 in branch-3.1

2019-10-12 Thread Michael Stack (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16600?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16950207#comment-16950207
 ] 

Michael Stack commented on HADOOP-16600:


+1

> StagingTestBase uses methods not available in Mockito 1.8.5 in branch-3.1
> -
>
> Key: HADOOP-16600
> URL: https://issues.apache.org/jira/browse/HADOOP-16600
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.1.0, 3.1.1, 3.1.2
>Reporter: Lisheng Sun
>Assignee: Duo Zhang
>Priority: Major
> Fix For: 3.1.4
>
> Attachments: HADOOP-16600-branch-3.1-v1.patch, 
> HADOOP-16600.branch-3.1.v1.patch
>
>
> details see HADOOP-15398
> Problem: hadoop trunk compilation is failing
> Root Cause:
> compilation error is coming from 
> org.apache.hadoop.fs.s3a.commit.staging.StagingTestBase. Compilation error is 
> "The method getArgumentAt(int, Class) is undefined for the 
> type InvocationOnMock".
> StagingTestBase is using getArgumentAt(int, Class) method 
> which is not available in mockito-all 1.8.5 version. getArgumentAt(int, 
> Class) method is available only from version 2.0.0-beta
> as follow code:
> {code:java}
> InitiateMultipartUploadRequest req = invocation.getArgumentAt(
> 0, InitiateMultipartUploadRequest.class);
> {code}






[jira] [Updated] (HADOOP-16598) Backport "HADOOP-16558 [COMMON+HDFS] use protobuf-maven-plugin to generate protobuf classes" to all active branches

2019-10-03 Thread Michael Stack (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16598?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Stack updated HADOOP-16598:
---
Attachment: HADOOP-16598-branch-2.9-v1.patch

> Backport "HADOOP-16558 [COMMON+HDFS] use protobuf-maven-plugin to generate 
> protobuf classes" to all active branches
> ---
>
> Key: HADOOP-16598
> URL: https://issues.apache.org/jira/browse/HADOOP-16598
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: common
>Reporter: Duo Zhang
>Assignee: Duo Zhang
>Priority: Major
> Attachments: HADOOP-16598-branch-2-v1.patch, 
> HADOOP-16598-branch-2-v1.patch, HADOOP-16598-branch-2.9-v1.patch, 
> HADOOP-16598-branch-2.9-v1.patch, HADOOP-16598-branch-2.9.patch, 
> HADOOP-16598-branch-2.patch, HADOOP-16598-branch-3.1.patch, 
> HADOOP-16598-branch-3.2.patch
>
>







[jira] [Commented] (HADOOP-16598) Backport "HADOOP-16558 [COMMON+HDFS] use protobuf-maven-plugin to generate protobuf classes" to all active branches

2019-10-03 Thread Michael Stack (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16598?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16944166#comment-16944166
 ] 

Michael Stack commented on HADOOP-16598:


Retry. I was going to commit this tomorrow unless there is an objection.

> Backport "HADOOP-16558 [COMMON+HDFS] use protobuf-maven-plugin to generate 
> protobuf classes" to all active branches
> ---
>
> Key: HADOOP-16598
> URL: https://issues.apache.org/jira/browse/HADOOP-16598
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: common
>Reporter: Duo Zhang
>Assignee: Duo Zhang
>Priority: Major
> Attachments: HADOOP-16598-branch-2-v1.patch, 
> HADOOP-16598-branch-2-v1.patch, HADOOP-16598-branch-2.9-v1.patch, 
> HADOOP-16598-branch-2.9.patch, HADOOP-16598-branch-2.patch, 
> HADOOP-16598-branch-3.1.patch, HADOOP-16598-branch-3.2.patch
>
>







[jira] [Updated] (HADOOP-16598) Backport "HADOOP-16558 [COMMON+HDFS] use protobuf-maven-plugin to generate protobuf classes" to all active branches

2019-10-03 Thread Michael Stack (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16598?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Stack updated HADOOP-16598:
---
Attachment: HADOOP-16598-branch-2-v1.patch

> Backport "HADOOP-16558 [COMMON+HDFS] use protobuf-maven-plugin to generate 
> protobuf classes" to all active branches
> ---
>
> Key: HADOOP-16598
> URL: https://issues.apache.org/jira/browse/HADOOP-16598
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: common
>Reporter: Duo Zhang
>Assignee: Duo Zhang
>Priority: Major
> Attachments: HADOOP-16598-branch-2-v1.patch, 
> HADOOP-16598-branch-2-v1.patch, HADOOP-16598-branch-2.9-v1.patch, 
> HADOOP-16598-branch-2.9.patch, HADOOP-16598-branch-2.patch, 
> HADOOP-16598-branch-3.1.patch, HADOOP-16598-branch-3.2.patch
>
>







[jira] [Commented] (HADOOP-13363) Upgrade protobuf from 2.5.0 to something newer

2019-09-30 Thread Michael Stack (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-13363?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16941228#comment-16941228
 ] 

Michael Stack commented on HADOOP-13363:


bq. If we are passing protobuf types around, either as inputs or outputs. Then 
we cannot shade the artifacts. And so: we cannot update protobuf 
"transparently".
bq. but it is probably a symptom of a bigger problem -protobuf types in 
public APIs

If it helps, in hbase we've been working to remove all mention of pb from the public 
API. A few instances still remain (though they'll be gone soon). For this small set of 
remainders, we use a different (unshaded) version of pb from the one we use 
internally; the internal usage relies on the relocated/shaded protobuf. In a 
few cases we even have conversion code that moves data/invocation descriptions from 
the old pb to the new (shaded) pb.
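
For illustration, relocation of this sort is typically done with the maven-shade-plugin; a minimal sketch (the {{org.example.thirdparty}} package name is a placeholder, not the actual hbase/hadoop-thirdparty configuration):

{code:xml}
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-shade-plugin</artifactId>
  <executions>
    <execution>
      <phase>package</phase>
      <goals><goal>shade</goal></goals>
      <configuration>
        <relocations>
          <relocation>
            <!-- Move protobuf under a project-private package so it cannot
                 clash with a downstreamer's own protobuf version. -->
            <pattern>com.google.protobuf</pattern>
            <shadedPattern>org.example.thirdparty.com.google.protobuf</shadedPattern>
          </relocation>
        </relocations>
      </configuration>
    </execution>
  </executions>
</plugin>
{code}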

> Upgrade protobuf from 2.5.0 to something newer
> --
>
> Key: HADOOP-13363
> URL: https://issues.apache.org/jira/browse/HADOOP-13363
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 3.0.0-alpha1, 3.0.0-alpha2
>Reporter: Anu Engineer
>Assignee: Vinayakumar B
>Priority: Major
>  Labels: security
> Attachments: HADOOP-13363.001.patch, HADOOP-13363.002.patch, 
> HADOOP-13363.003.patch, HADOOP-13363.004.patch, HADOOP-13363.005.patch
>
>
> Standard protobuf 2.5.0 does not work properly on many platforms.  (See, for 
> example, https://gist.github.com/BennettSmith/7111094 ).  In order for us to 
> avoid crazy work arounds in the build environment and the fact that 2.5.0 is 
> starting to slowly disappear as a standard install-able package for even 
> Linux/x86, we need to either upgrade or self bundle or something else.






[jira] [Commented] (HADOOP-13363) Upgrade protobuf from 2.5.0 to something newer

2019-09-12 Thread stack (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-13363?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16928795#comment-16928795
 ] 

stack commented on HADOOP-13363:


One thought: just use the hbase-thirdparty jar? It ships shaded protobuf, netty, gson, 
and a few others. 

> Upgrade protobuf from 2.5.0 to something newer
> --
>
> Key: HADOOP-13363
> URL: https://issues.apache.org/jira/browse/HADOOP-13363
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 3.0.0-alpha1, 3.0.0-alpha2
>Reporter: Allen Wittenauer
>Assignee: Vinayakumar B
>Priority: Major
>  Labels: security
> Attachments: HADOOP-13363.001.patch, HADOOP-13363.002.patch, 
> HADOOP-13363.003.patch, HADOOP-13363.004.patch, HADOOP-13363.005.patch
>
>
> Standard protobuf 2.5.0 does not work properly on many platforms.  (See, for 
> example, https://gist.github.com/BennettSmith/7111094 ).  In order for us to 
> avoid crazy work arounds in the build environment and the fact that 2.5.0 is 
> starting to slowly disappear as a standard install-able package for even 
> Linux/x86, we need to either upgrade or self bundle or something else.






[jira] [Commented] (HADOOP-13363) Upgrade protobuf from 2.5.0 to something newer

2019-09-12 Thread stack (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-13363?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16928784#comment-16928784
 ] 

stack commented on HADOOP-13363:


On your #1 and #2 choices above, #1 works for us. The cost is negligible (caveat: 
the initial setup). The separate repo is forgotten until it comes time to spin up a new 
release. On #2, the submodule would be hard to 'explain' sitting inline w/ the 
hadoop checkout, and there is too much stuff in the hadoop repo as it is.

I'd suggest you broaden the scope of #1 to include other finicky 
dependencies beyond protobuf that might benefit from being hidden from 
downstreamers. That could be done in another issue, but be careful you don't 
fence off the possibility (perhaps hadoop-thirdparty rather than 
hadoop-shaded-thirdparty as the repo name?).

bq. Release process: can it be issued by the ASF?

Why not? I'd suggest treating it as an artifact like any other shipped by this PMC: 
you'd generate an RC and vote on it (this is how the hbase PMC does it).

bq. There are many javac warnings due to new protobuf-3.6.1 dependency due to 
deprecated APIs usage.

Isn't there a flag to turn these off? (IIRC there is.)

> Upgrade protobuf from 2.5.0 to something newer
> --
>
> Key: HADOOP-13363
> URL: https://issues.apache.org/jira/browse/HADOOP-13363
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 3.0.0-alpha1, 3.0.0-alpha2
>Reporter: Allen Wittenauer
>Assignee: Vinayakumar B
>Priority: Major
>  Labels: security
> Attachments: HADOOP-13363.001.patch, HADOOP-13363.002.patch, 
> HADOOP-13363.003.patch, HADOOP-13363.004.patch, HADOOP-13363.005.patch
>
>
> Standard protobuf 2.5.0 does not work properly on many platforms.  (See, for 
> example, https://gist.github.com/BennettSmith/7111094 ).  In order for us to 
> avoid crazy work arounds in the build environment and the fact that 2.5.0 is 
> starting to slowly disappear as a standard install-able package for even 
> Linux/x86, we need to either upgrade or self bundle or something else.






[jira] [Commented] (HADOOP-13363) Upgrade protobuf from 2.5.0 to something newer

2019-09-06 Thread stack (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-13363?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16924511#comment-16924511
 ] 

stack commented on HADOOP-13363:


Perhaps this pom helps? 
https://github.com/apache/hbase/blob/master/hbase-protocol-shaded/pom.xml It's 
the hackery hbase uses to generate its shaded pb module inline w/ the main build. PB 
files are generated on the fly using the godsend protobuf-maven-plugin, 
which pulls the appropriate protoc at build time (so there is no need to set up protoc 
or a protoc path). The replacer plugin then rewrites the generated pbs so they are shaded and in 
place for downstream modules at build time. I remember getting the ordering and 
shading correct was a pain, but I've unfortunately forgotten the details.
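
As a sketch, the protoc-fetching setup described above usually looks like the fragment below with the xolstice protobuf-maven-plugin plus the os-maven-plugin build extension (the version numbers here are illustrative placeholders):

{code:xml}
<build>
  <extensions>
    <!-- Supplies ${os.detected.classifier} so the right protoc binary is fetched. -->
    <extension>
      <groupId>kr.motd.maven</groupId>
      <artifactId>os-maven-plugin</artifactId>
      <version>1.6.2</version>
    </extension>
  </extensions>
  <plugins>
    <plugin>
      <groupId>org.xolstice.maven.plugins</groupId>
      <artifactId>protobuf-maven-plugin</artifactId>
      <version>0.6.1</version>
      <configuration>
        <!-- Downloads a matching protoc at build time; no local install needed. -->
        <protocArtifact>com.google.protobuf:protoc:3.7.1:exe:${os.detected.classifier}</protocArtifact>
      </configuration>
      <executions>
        <execution>
          <goals><goal>compile</goal></goals>
        </execution>
      </executions>
    </plugin>
  </plugins>
</build>
{code}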

> Upgrade protobuf from 2.5.0 to something newer
> --
>
> Key: HADOOP-13363
> URL: https://issues.apache.org/jira/browse/HADOOP-13363
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 3.0.0-alpha1, 3.0.0-alpha2
>Reporter: Allen Wittenauer
>Assignee: Vinayakumar B
>Priority: Major
>  Labels: security
> Attachments: HADOOP-13363.001.patch, HADOOP-13363.002.patch, 
> HADOOP-13363.003.patch, HADOOP-13363.004.patch, HADOOP-13363.005.patch
>
>
> Standard protobuf 2.5.0 does not work properly on many platforms.  (See, for 
> example, https://gist.github.com/BennettSmith/7111094 ).  In order for us to 
> avoid crazy work arounds in the build environment and the fact that 2.5.0 is 
> starting to slowly disappear as a standard install-able package for even 
> Linux/x86, we need to either upgrade or self bundle or something else.






[jira] [Commented] (HADOOP-15566) Support Opentracing

2019-05-07 Thread stack (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15566?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16835080#comment-16835080
 ] 

stack commented on HADOOP-15566:


Thanks for the pointer [~bogdandrutu]

> Support Opentracing
> ---
>
> Key: HADOOP-15566
> URL: https://issues.apache.org/jira/browse/HADOOP-15566
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: metrics, tracing
>Affects Versions: 3.1.0
>Reporter: Todd Lipcon
>Assignee: Siyao Meng
>Priority: Major
>  Labels: security
> Attachments: Screen Shot 2018-06-29 at 11.59.16 AM.png, 
> ss-trace-s3a.png
>
>
> The HTrace incubator project has voted to retire itself and won't be making 
> further releases. The Hadoop project currently has various hooks with HTrace. 
> It seems in some cases (eg HDFS-13702) these hooks have had measurable 
> performance overhead. Given these two factors, I think we should consider 
> removing the HTrace integration. If there is someone willing to do the work, 
> replacing it with OpenTracing might be a better choice since there is an 
> active community.






[jira] [Commented] (HADOOP-15566) Remove HTrace support

2018-08-15 Thread stack (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15566?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16581273#comment-16581273
 ] 

stack commented on HADOOP-15566:


[~elek] Thanks. Or we could just strip htrace; that would remove any friction 
caused by its injection and address the issue title and the bulk of the 
description. 

> Remove HTrace support
> -
>
> Key: HADOOP-15566
> URL: https://issues.apache.org/jira/browse/HADOOP-15566
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: metrics
>Affects Versions: 3.1.0
>Reporter: Todd Lipcon
>Priority: Major
>  Labels: security
> Attachments: Screen Shot 2018-06-29 at 11.59.16 AM.png, 
> ss-trace-s3a.png
>
>
> The HTrace incubator project has voted to retire itself and won't be making 
> further releases. The Hadoop project currently has various hooks with HTrace. 
> It seems in some cases (eg HDFS-13702) these hooks have had measurable 
> performance overhead. Given these two factors, I think we should consider 
> removing the HTrace integration. If there is someone willing to do the work, 
> replacing it with OpenTracing might be a better choice since there is an 
> active community.






[jira] [Commented] (HADOOP-15566) Remove HTrace support

2018-07-30 Thread stack (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15566?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16562619#comment-16562619
 ] 

stack commented on HADOOP-15566:


bq. I just think we're going to be hard-pressed to make an informed decision 
without pairings of trace visualizations (ideally in many tracing systems to 
illustrate portability) and the respective instrumentation code to illustrate 
non-bloat / maintainability ... stack, you were suggesting we try this on dev – 
any pointers to a non-HDFS / non-HBase expert for a place to focus on for such 
an exercise?

Yeah. I just started a DISCUSS thread that points here up on dev-common. 
Hopefully, we'll attract doers/volunteers.

What are you thinking, [~bensigelman]? Are you (or your company) up for running a 
comparison of the libs -- OT/OC/hacked HTrace -- for a neutral party/volunteer to evaluate?

bq. I wonder if it's would be worth evaluating writing a 
htrace-api->opentracing-java or htace-api->census or htrace-api->zipkin...

I just did a refresher, and unfortunately it'd be a bit of awkward work to do, 
[~michaelsembwever]. HTrace core entities -- probably the source of the friction 
(we'd have to check; we could for sure do some fixup around the case where no trace 
is enabled) -- are classes rather than interfaces, and they do work passing Spans around 
even when no trace is enabled. The other awkward fact is that there are currently two htrace APIs 
afloat in Hadoop: an htrace3 in older Hadoops and an htrace4 (in different packages).

Getting traces into zipkin, though, should be easy enough: htrace dumps to 
SpanReceiver implementations, and these are easy to write and plug in.
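
To give a feel for what such a receiver involves, here is a minimal sketch, assuming the htrace4 {{org.apache.htrace.core}} API (a {{SpanReceiver}} subclass with {{receiveSpan}}/{{close}} as the contract, constructed reflectively with the htrace configuration); it is illustrative only, not a production sink:

{code:java}
import org.apache.htrace.core.HTraceConfiguration;
import org.apache.htrace.core.Span;
import org.apache.htrace.core.SpanReceiver;

// Sketch only: forwards each finished span to some sink (stdout here;
// a real implementation would write to zipkin or similar).
public class LoggingSpanReceiver extends SpanReceiver {
  public LoggingSpanReceiver(HTraceConfiguration conf) {
    // Receivers are constructed reflectively with the htrace configuration.
  }

  @Override
  public void receiveSpan(Span span) {
    // Replace with a real sink; toJson() gives a serializable form of the span.
    System.out.println(span.toJson());
  }

  @Override
  public void close() {
    // Flush/close the sink here.
  }
}
{code}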

[~bogdandrutu] Thanks boss for the OC input. The local-view (z-pages) makes 
sense. Nice instrumentation example over in the hbase client for talking to 
(cloud) bigtable too (smile) -- 
https://github.com/GoogleCloudPlatform/cloud-bigtable-client.







> Remove HTrace support
> -
>
> Key: HADOOP-15566
> URL: https://issues.apache.org/jira/browse/HADOOP-15566
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: metrics
>Affects Versions: 3.1.0
>Reporter: Todd Lipcon
>Priority: Major
>  Labels: security
> Attachments: Screen Shot 2018-06-29 at 11.59.16 AM.png, 
> ss-trace-s3a.png
>
>
> The HTrace incubator project has voted to retire itself and won't be making 
> further releases. The Hadoop project currently has various hooks with HTrace. 
> It seems in some cases (eg HDFS-13702) these hooks have had measurable 
> performance overhead. Given these two factors, I think we should consider 
> removing the HTrace integration. If there is someone willing to do the work, 
> replacing it with OpenTracing might be a better choice since there is an 
> active community.






[jira] [Commented] (HADOOP-15566) Remove HTrace support

2018-07-28 Thread stack (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15566?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16561007#comment-16561007
 ] 

stack commented on HADOOP-15566:


Thanks for the input [~michaelsembwever].

bq.  as the effort is more in adding the instrumentation code in the first 
place, and not so much writing the abstraction layer.

Agree

bq. With Cassandra ...of maintaining the existing tracing code as the 
abstraction layer, and allowing plugins to it.

That's this stuff: 
https://github.com/apache/cassandra/tree/trunk/src/java/org/apache/cassandra/tracing
 ?

We could try re-emitting the existing (h)traces to zipkin -- it used to work -- or to 
whatever sink. We would also need to fix it so trace inserts are friction-free 
when disabled (currently they drag).



> Remove HTrace support
> -
>
> Key: HADOOP-15566
> URL: https://issues.apache.org/jira/browse/HADOOP-15566
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: metrics
>Affects Versions: 3.1.0
>Reporter: Todd Lipcon
>Priority: Major
>  Labels: security
> Attachments: Screen Shot 2018-06-29 at 11.59.16 AM.png, 
> ss-trace-s3a.png
>
>
> The HTrace incubator project has voted to retire itself and won't be making 
> further releases. The Hadoop project currently has various hooks with HTrace. 
> It seems in some cases (eg HDFS-13702) these hooks have had measurable 
> performance overhead. Given these two factors, I think we should consider 
> removing the HTrace integration. If there is someone willing to do the work, 
> replacing it with OpenTracing might be a better choice since there is an 
> active community.






[jira] [Commented] (HADOOP-15566) Remove HTrace support

2018-07-23 Thread stack (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15566?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16553144#comment-16553144
 ] 

stack commented on HADOOP-15566:


For me, the hard part is not which tracing lib to use -- if it's a tracing-lib 
discussion, let's have it out on dev, and we should invite others to it -- but rather the 
discussion around resourcing:

* Ensuring traces tell a good narrative across the different code paths and 
across processes, and that trace paths remain intact through code churn; they are 
brittle and easily broken/disconnected as development goes on.
* Instrumentation/coverage -- inserting trace points is time-consuming work whose 
value is only realized down the road by an operator/dev trying to figure out a 
slowdown (so https://github.com/opentracing-contrib/java-tracerresolver 
looks interesting).
* Tooling to enable tracing and visualize traces needs to be easy to deploy and use, 
or it will all go to rot (some orgs trace every transaction, with a simple switch 
for dumping to a visualizer that is always up and available).
* Ensuring traces are friction-free, or they'll be removed or never taken on in 
the first place.
* Evangelizing and pushing tracing across Hadoop components; the more components 
instrumented, the more we all benefit.

Thanks.

> Remove HTrace support
> -
>
> Key: HADOOP-15566
> URL: https://issues.apache.org/jira/browse/HADOOP-15566
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: metrics
>Affects Versions: 3.1.0
>Reporter: Todd Lipcon
>Priority: Major
> Attachments: Screen Shot 2018-06-29 at 11.59.16 AM.png, 
> ss-trace-s3a.png
>
>
> The HTrace incubator project has voted to retire itself and won't be making 
> further releases. The Hadoop project currently has various hooks with HTrace. 
> It seems in some cases (eg HDFS-13702) these hooks have had measurable 
> performance overhead. Given these two factors, I think we should consider 
> removing the HTrace integration. If there is someone willing to do the work, 
> replacing it with OpenTracing might be a better choice since there is an 
> active community.






[jira] [Commented] (HADOOP-13866) Upgrade netty-all to 4.1.1.Final

2017-09-23 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13866?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16177943#comment-16177943
 ] 

stack commented on HADOOP-13866:


This issue is no longer relevant. HBase shades its netty dependency. IMO this 
can be closed.

> Upgrade netty-all to 4.1.1.Final
> 
>
> Key: HADOOP-13866
> URL: https://issues.apache.org/jira/browse/HADOOP-13866
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 3.0.0-alpha1
>Reporter: Ted Yu
>Assignee: Ted Yu
> Attachments: HADOOP-13866.v1.patch, HADOOP-13866.v2.patch, 
> HADOOP-13866.v3.patch, HADOOP-13866.v4.patch, HADOOP-13866.v6.patch, 
> HADOOP-13866.v7.patch, HADOOP-13866.v8.patch, HADOOP-13866.v8.patch, 
> HADOOP-13866.v8.patch, HADOOP-13866.v9.patch
>
>
> netty-all 4.1.1.Final is stable release which we should upgrade to.
> See bottom of HADOOP-12927 for related discussion.
> This issue was discovered since hbase 2.0 uses 4.1.1.Final of netty.
> When launching mapreduce job from hbase, /grid/0/hadoop/yarn/local/  
> usercache/hbase/appcache/application_1479850535804_0008/container_e01_1479850535804_0008_01_05/mr-framework/hadoop/share/hadoop/hdfs/lib/netty-all-4.0.23.Final.jar
>  (from hdfs) is ahead of 4.1.1.Final jar (from hbase) on the classpath.
> Resulting in the following exception:
> {code}
> 2016-12-01 20:17:26,678 WARN [Default-IPC-NioEventLoopGroup-1-1] 
> io.netty.util.concurrent.DefaultPromise: An exception was thrown by 
> org.apache.hadoop.hbase.ipc.NettyRpcConnection$3.operationComplete()
> java.lang.NoSuchMethodError: 
> io.netty.buffer.ByteBuf.retainedDuplicate()Lio/netty/buffer/ByteBuf;
> at 
> org.apache.hadoop.hbase.ipc.NettyRpcConnection$3.operationComplete(NettyRpcConnection.java:272)
> at 
> org.apache.hadoop.hbase.ipc.NettyRpcConnection$3.operationComplete(NettyRpcConnection.java:262)
> at 
> io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:680)
> at 
> io.netty.util.concurrent.DefaultPromise.notifyListeners0(DefaultPromise.java:603)
> at 
> io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:563)
> at 
> io.netty.util.concurrent.DefaultPromise.trySuccess(DefaultPromise.java:406)
> {code}






[jira] [Commented] (HADOOP-13363) Upgrade protobuf from 2.5.0 to something newer

2017-09-14 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13363?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16167304#comment-16167304
 ] 

stack commented on HADOOP-13363:


I'd presume we upgrade but run in pb2 mode, the default. That has been wire-compatible 
in my limited experience. We could do a bit of testing, I suppose.

> Upgrade protobuf from 2.5.0 to something newer
> --
>
> Key: HADOOP-13363
> URL: https://issues.apache.org/jira/browse/HADOOP-13363
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 3.0.0-alpha1, 3.0.0-alpha2
>Reporter: Allen Wittenauer
>Assignee: Tsuyoshi Ozawa
> Attachments: HADOOP-13363.001.patch, HADOOP-13363.002.patch, 
> HADOOP-13363.003.patch, HADOOP-13363.004.patch, HADOOP-13363.005.patch
>
>
> Standard protobuf 2.5.0 does not work properly on many platforms.  (See, for 
> example, https://gist.github.com/BennettSmith/7111094 ).  In order for us to 
> avoid crazy work arounds in the build environment and the fact that 2.5.0 is 
> starting to slowly disappear as a standard install-able package for even 
> Linux/x86, we need to either upgrade or self bundle or something else.






[jira] [Commented] (HADOOP-14284) Shade Guava everywhere

2017-08-16 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14284?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16129662#comment-16129662
 ] 

stack commented on HADOOP-14284:


The particular example you cite goes away if hadoop embeds hbase2, since hbase 
will have relocated the problematic libs. But yeah, until then, embedding 
hbase has this problem, as will other downstreamers.

> Shade Guava everywhere
> --
>
> Key: HADOOP-14284
> URL: https://issues.apache.org/jira/browse/HADOOP-14284
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 3.0.0-alpha4
>Reporter: Andrew Wang
>Assignee: Tsuyoshi Ozawa
>Priority: Blocker
> Attachments: HADOOP-14238.pre001.patch, HADOOP-14284.002.patch, 
> HADOOP-14284.004.patch, HADOOP-14284.007.patch, HADOOP-14284.010.patch, 
> HADOOP-14284.012.patch
>
>
> HADOOP-10101 upgraded the guava version for 3.x to 21.
> Guava is broadly used by Java projects that consume our artifacts. 
> Unfortunately, these projects also consume our private artifacts like 
> {{hadoop-hdfs}}. They also are unlikely on the new shaded client introduced 
> by HADOOP-11804, currently only available in 3.0.0-alpha2.
> We should shade Guava everywhere to proactively avoid breaking downstreams. 
> This isn't a requirement for all dependency upgrades, but it's necessary for 
> known-bad dependencies like Guava.






[jira] [Commented] (HADOOP-14284) Shade Guava everywhere

2017-08-14 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14284?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16126355#comment-16126355
 ] 

stack commented on HADOOP-14284:


bq. Does HBase shade across all code base, instead of just in client modules?

The whole code base.

We offer a shaded client jar for those confined to our public-facing client 
API. We've done a bad job evangelizing it up to this point, but intend to go at it 
with gusto from our next major release on out. This project should work for 
those confined to our client API, but as with hadoop, hbase has facets other 
than the client API. This is where the story gets messy; i.e. plugins, or 
clients reading/writing hbase files apart from an hbase instance. Shading our 
internals helps as we avoid the possibility of clashes. On our backend we are 
also too tightly tied to our upstream, a coupling we are working to undo, but we 
are not there yet. The internal shading helps here too.

Using enforcer has been suggested over in our project. Will pass on our 
experience if anything noteworthy.

> Shade Guava everywhere
> --
>
> Key: HADOOP-14284
> URL: https://issues.apache.org/jira/browse/HADOOP-14284
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 3.0.0-alpha4
>Reporter: Andrew Wang
>Assignee: Tsuyoshi Ozawa
>Priority: Blocker
> Attachments: HADOOP-14238.pre001.patch, HADOOP-14284.002.patch, 
> HADOOP-14284.004.patch, HADOOP-14284.007.patch, HADOOP-14284.010.patch, 
> HADOOP-14284.012.patch
>
>
> HADOOP-10101 upgraded the guava version for 3.x to 21.
> Guava is broadly used by Java projects that consume our artifacts. 
> Unfortunately, these projects also consume our private artifacts like 
> {{hadoop-hdfs}}. They also are unlikely on the new shaded client introduced 
> by HADOOP-11804, currently only available in 3.0.0-alpha2.
> We should shade Guava everywhere to proactively avoid breaking downstreams. 
> This isn't a requirement for all dependency upgrades, but it's necessary for 
> known-bad dependencies like Guava.






[jira] [Commented] (HADOOP-14284) Shade Guava everywhere

2017-08-14 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14284?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16126187#comment-16126187
 ] 

stack commented on HADOOP-14284:


If it helps, here is what was done over in hbase to make it so we could upgrade 
guava, netty, protobuf, etc., w/o damage to downstreamers or having to use 
whatever hadoop et al. happened to have on the CLASSPATH:

 * We made a little project hbase-thirdparty. Its only charge is providing 
mainline hbase with relocated popular libs such as guava and netty. The project 
comprises nought but poms (caveat some hacky patching of protobuf our project 
requires): https://github.com/apache/hbase-thirdparty The pull and relocation 
is moved out of the mainline hbase build.
 * We changed mainline hbase to use relocated versions of popular libs. This 
was mostly a case of changing imports from, for example, 
com.google.protobuf.Message to 
org.apache.hadoop.hbase.shaded.com.google.protobuf.Message (an unfortunate 
decision a while back saddled us w/ the extra-long relocation prefix).
 * As part of the mainline build, we run com.google.code.maven-replacer-plugin 
to rewrite third-party references in generated code to instead refer to our 
relocated versions.
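For anyone following along, the relocation step the bullets above describe is the maven-shade-plugin's `relocation` feature. A minimal, illustrative sketch -- the relocation prefix mirrors the one quoted above, but this is not the actual hbase-thirdparty pom:

```xml
<!-- Illustrative maven-shade-plugin configuration. The shadedPattern prefix
     follows the hbase convention mentioned above; adjust for your project. -->
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-shade-plugin</artifactId>
  <executions>
    <execution>
      <phase>package</phase>
      <goals><goal>shade</goal></goals>
      <configuration>
        <relocations>
          <!-- Rewrite guava references to the relocated package. -->
          <relocation>
            <pattern>com.google.common</pattern>
            <shadedPattern>org.apache.hadoop.hbase.shaded.com.google.common</shadedPattern>
          </relocation>
          <!-- Rewrite protobuf references likewise. -->
          <relocation>
            <pattern>com.google.protobuf</pattern>
            <shadedPattern>org.apache.hadoop.hbase.shaded.com.google.protobuf</shadedPattern>
          </relocation>
        </relocations>
      </configuration>
    </execution>
  </executions>
</plugin>
```

The plugin rewrites both the bundled .class files and their bytecode references, which is why consuming code must import the relocated package names.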

Upside is we can update core libs whenever we wish. Should a lib turn 
problematic, we can add it to the relocated set. Downside is having to be sure 
we always refer to the relocated versions in code.

While the pattern is straight-forward, the above project took a good while to 
implement, mostly because infra is a bit shaky and our test suite has a host of 
flakies in it; verifying a test was failing because it was flaky, and not 
because of the relocation, took most of the time.

If you want to do similar project in hadoop, I'd be game to help out.

> Shade Guava everywhere
> --
>
> Key: HADOOP-14284
> URL: https://issues.apache.org/jira/browse/HADOOP-14284
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 3.0.0-alpha4
>Reporter: Andrew Wang
>Assignee: Tsuyoshi Ozawa
>Priority: Blocker
> Attachments: HADOOP-14238.pre001.patch, HADOOP-14284.002.patch, 
> HADOOP-14284.004.patch, HADOOP-14284.007.patch, HADOOP-14284.010.patch, 
> HADOOP-14284.012.patch
>
>
> HADOOP-10101 upgraded the guava version for 3.x to 21.
> Guava is broadly used by Java projects that consume our artifacts. 
> Unfortunately, these projects also consume our private artifacts like 
> {{hadoop-hdfs}}. They also are unlikely on the new shaded client introduced 
> by HADOOP-11804, currently only available in 3.0.0-alpha2.
> We should shade Guava everywhere to proactively avoid breaking downstreams. 
> This isn't a requirement for all dependency upgrades, but it's necessary for 
> known-bad dependencies like Guava.






[jira] [Commented] (HADOOP-14284) Shade Guava everywhere

2017-05-08 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14284?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16001566#comment-16001566
 ] 

stack commented on HADOOP-14284:


bq. ...get rid of it

This doesn't seem right either, mighty [~djp]. Guava is a high-quality lib. It 
is well tested, with particular attention paid to perf. I'd think we'd want to 
double down on libs of this type rather than move to purge them. Would 
netty/protobuf/etc. be next in line, sir?

[~ozawa] Have you brought up the project in an IDE with your patch applied? My 
expectation is that there will be reams of complaints that the relocated guava 
can't be found.

> Shade Guava everywhere
> --
>
> Key: HADOOP-14284
> URL: https://issues.apache.org/jira/browse/HADOOP-14284
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 3.0.0-alpha3
>Reporter: Andrew Wang
>Assignee: Tsuyoshi Ozawa
>Priority: Blocker
> Attachments: HADOOP-14238.pre001.patch, HADOOP-14284.002.patch, 
> HADOOP-14284.004.patch, HADOOP-14284.007.patch, HADOOP-14284.010.patch, 
> HADOOP-14284.012.patch
>
>
> HADOOP-10101 upgraded the guava version for 3.x to 21.
> Guava is broadly used by Java projects that consume our artifacts. 
> Unfortunately, these projects also consume our private artifacts like 
> {{hadoop-hdfs}}. They also are unlikely on the new shaded client introduced 
> by HADOOP-11804, currently only available in 3.0.0-alpha2.
> We should shade Guava everywhere to proactively avoid breaking downstreams. 
> This isn't a requirement for all dependency upgrades, but it's necessary for 
> known-bad dependencies like Guava.






[jira] [Commented] (HADOOP-14284) Shade Guava everywhere

2017-05-04 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14284?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15996980#comment-15996980
 ] 

stack commented on HADOOP-14284:


bq. However, I found one problem in this approach: shaded artifacts(shaded 
Guava and Curator) in hadoop-shaded-thirdparty is NOT in classpath, if I 
understand correctly. 

Tell us more please. Shading bundles the relocated .class files of guava and 
curator; they are included in the thirdparty jar... and the thirdparty jar is 
on the classpath, no?

Perhaps you are referring to the downsides listed in the comment 'stack added a 
comment - 06/Apr/17 00:57' over in HADOOP-13363 where IDEs will not be able to 
find the shaded imports? For this reason hbase-protocol-shaded includes 
relocated src (it does a build because we patch protobuf3).

> Shade Guava everywhere
> --
>
> Key: HADOOP-14284
> URL: https://issues.apache.org/jira/browse/HADOOP-14284
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 3.0.0-alpha3
>Reporter: Andrew Wang
>Assignee: Tsuyoshi Ozawa
>Priority: Blocker
> Attachments: HADOOP-14238.pre001.patch, HADOOP-14284.002.patch, 
> HADOOP-14284.004.patch, HADOOP-14284.007.patch, HADOOP-14284.010.patch
>
>
> HADOOP-10101 upgraded the guava version for 3.x to 21.
> Guava is broadly used by Java projects that consume our artifacts. 
> Unfortunately, these projects also consume our private artifacts like 
> {{hadoop-hdfs}}. They also are unlikely on the new shaded client introduced 
> by HADOOP-11804, currently only available in 3.0.0-alpha2.
> We should shade Guava everywhere to proactively avoid breaking downstreams. 
> This isn't a requirement for all dependency upgrades, but it's necessary for 
> known-bad dependencies like Guava.






[jira] [Commented] (HADOOP-14284) Shade Guava everywhere

2017-04-14 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14284?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15969481#comment-15969481
 ] 

stack commented on HADOOP-14284:


HBase cheats. It has an unofficial pre-build step, the product of which gets 
checked in so it is available at build time (pre-build does stuff like generate 
class files from protos, and shade and patch protobufs so we can run with pb3; 
changes requiring a rerun of pre-build are rare). This is messy. We are 
discussing formalizing pre-build by starting an ancillary project run by the 
HBase PMC. We'd freight this hbase-3rdparty w/ all of our unofficial pre-build 
malarkey. We also have the 'guava-problem' (and the netty-problem, etc.) and 
need a soln. Current intent is that hbase-3rdparty includes shaded versions of 
critical libs (guava, netty, protobuf). Mainline hbase then just includes the 
hbase-3rdparty artifact. This is a WIP.

That referenced Curator TN is an interesting read. Curator made an unfortunate 
mistake (been there). Propagating their incomplete fix here is unfortunate (can 
we depend on a Curator that has the complete 'fix' in hadoop3, or just kick out 
Curator?).

bq. Shading Guava inside hadoop-client-modules to shade all Guava unlike 
hadoop-shaded-thirdparty.

Does this mean we'd have guava in hadoop-client-modules and in 
hadoop-shaded-thirdparty? What you thinking [~ozawa]? Thanks.
 

> Shade Guava everywhere
> --
>
> Key: HADOOP-14284
> URL: https://issues.apache.org/jira/browse/HADOOP-14284
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 3.0.0-alpha3
>Reporter: Andrew Wang
>Assignee: Tsuyoshi Ozawa
>Priority: Blocker
> Attachments: HADOOP-14238.pre001.patch, HADOOP-14284.002.patch, 
> HADOOP-14284.004.patch, HADOOP-14284.007.patch, HADOOP-14284.010.patch
>
>
> HADOOP-10101 upgraded the guava version for 3.x to 21.
> Guava is broadly used by Java projects that consume our artifacts. 
> Unfortunately, these projects also consume our private artifacts like 
> {{hadoop-hdfs}}. They also are unlikely on the new shaded client introduced 
> by HADOOP-11804, currently only available in 3.0.0-alpha2.
> We should shade Guava everywhere to proactively avoid breaking downstreams. 
> This isn't a requirement for all dependency upgrades, but it's necessary for 
> known-bad dependencies like Guava.






[jira] [Commented] (HADOOP-13363) Upgrade protobuf from 2.5.0 to something newer

2017-04-06 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13363?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15958358#comment-15958358
 ] 

stack commented on HADOOP-13363:


You can ask shading to bundle only the required classes, which should cut down 
on some of the duplicates. And while I know we're not averse to carrying around 
a bit of fat in these parts, it does seem like we should try and avoid repeating 
the pb classes seven times (counting the above listed artifacts), at least.
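For reference, one way to ask the maven-shade-plugin to bundle only required classes is its `minimizeJar` flag, which drops classes the project does not transitively reference. A sketch under that assumption, not a drop-in hadoop pom fragment:

```xml
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-shade-plugin</artifactId>
  <configuration>
    <!-- Remove classes from bundled dependencies that the project
         does not transitively reference. -->
    <minimizeJar>true</minimizeJar>
  </configuration>
</plugin>
```

Note that minimization can strip classes reached only via reflection, so filters are often needed alongside it.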

How you thinking of shading guava?

bq. stack could you turn over this issue?

I'm a bit stuck at the mo for spare cycles.

> Upgrade protobuf from 2.5.0 to something newer
> --
>
> Key: HADOOP-13363
> URL: https://issues.apache.org/jira/browse/HADOOP-13363
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 3.0.0-alpha1, 3.0.0-alpha2
>Reporter: Allen Wittenauer
>Priority: Blocker
> Attachments: HADOOP-13363.001.patch, HADOOP-13363.002.patch, 
> HADOOP-13363.003.patch, HADOOP-13363.004.patch, HADOOP-13363.005.patch
>
>
> Standard protobuf 2.5.0 does not work properly on many platforms.  (See, for 
> example, https://gist.github.com/BennettSmith/7111094 ).  In order for us to 
> avoid crazy work arounds in the build environment and the fact that 2.5.0 is 
> starting to slowly disappear as a standard install-able package for even 
> Linux/x86, we need to either upgrade or self bundle or something else.






[jira] [Commented] (HADOOP-13363) Upgrade protobuf from 2.5.0 to something newer

2017-04-05 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13363?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15958253#comment-15958253
 ] 

stack commented on HADOOP-13363:


[~kihwal] Good point.

What we thinking for a shading approach?

Shading runs post package and pulls in the relocated classes into the jar that 
references them.

I see hadoop common, hdfs, hdfs-client, mr-client, yarn-common, yarn-client, and 
yarn-server all making reference to pb (I didn't look at src/test). Are we 
thinking we'd dupe instances of the pb jars up into each of these artifacts?

Would it make sense having a shaded-3rdparty-libs jar that had relocated 
protobuf and other libs we want to shade? Downside is all pb references would 
have to be changed to reference the relocated classes 
(org.apache.hadoop.com.google.protobuf.*).

Would hadoop3 ship a pb2.5 at all?

> Upgrade protobuf from 2.5.0 to something newer
> --
>
> Key: HADOOP-13363
> URL: https://issues.apache.org/jira/browse/HADOOP-13363
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 3.0.0-alpha1, 3.0.0-alpha2
>Reporter: Allen Wittenauer
>Priority: Blocker
> Attachments: HADOOP-13363.001.patch, HADOOP-13363.002.patch, 
> HADOOP-13363.003.patch, HADOOP-13363.004.patch, HADOOP-13363.005.patch
>
>
> Standard protobuf 2.5.0 does not work properly on many platforms.  (See, for 
> example, https://gist.github.com/BennettSmith/7111094 ).  In order for us to 
> avoid crazy work arounds in the build environment and the fact that 2.5.0 is 
> starting to slowly disappear as a standard install-able package for even 
> Linux/x86, we need to either upgrade or self bundle or something else.






[jira] [Commented] (HADOOP-13363) Upgrade protobuf from 2.5.0 to something newer

2017-03-28 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13363?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15946364#comment-15946364
 ] 

stack commented on HADOOP-13363:


[~ozawa] Yes sir. Default mode is proto2. You have to explicitly ask for 
proto3. When protoc runs, it emits a WARNING that the .proto does not have an 
explicit version designation, but it is just a WARNING that can be quelled by 
explicitly stating proto2 (we did not bother doing this in our project -- not 
yet at least).
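For the record, the explicit version designation is a syntax declaration at the top of the .proto file. A minimal illustrative example (the package and message names here are made up, not from any Hadoop .proto):

```proto
// An explicit syntax line quells protoc3's "defaulting to proto2" warning.
syntax = "proto2";

package example;

message Ping {
  optional string host = 1;  // proto2-style optional field
}
```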

> Upgrade protobuf from 2.5.0 to something newer
> --
>
> Key: HADOOP-13363
> URL: https://issues.apache.org/jira/browse/HADOOP-13363
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 3.0.0-alpha1, 3.0.0-alpha2
>Reporter: Allen Wittenauer
> Attachments: HADOOP-13363.001.patch, HADOOP-13363.002.patch, 
> HADOOP-13363.003.patch, HADOOP-13363.004.patch, HADOOP-13363.005.patch
>
>
> Standard protobuf 2.5.0 does not work properly on many platforms.  (See, for 
> example, https://gist.github.com/BennettSmith/7111094 ).  In order for us to 
> avoid crazy work arounds in the build environment and the fact that 2.5.0 is 
> starting to slowly disappear as a standard install-able package for even 
> Linux/x86, we need to either upgrade or self bundle or something else.






[jira] [Commented] (HADOOP-13363) Upgrade protobuf from 2.5.0 to something newer

2017-03-27 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13363?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15944328#comment-15944328
 ] 

stack commented on HADOOP-13363:


HBase uses pb3 internally because you can avoid having to copy all bytes when 
making a Message ("zero-copy serialization"), at least on output (working on the 
same for input), and because it has support for ByteBuffers. pb3 in pb2 mode -- 
the default -- is wire compatible in our tests.

> Upgrade protobuf from 2.5.0 to something newer
> --
>
> Key: HADOOP-13363
> URL: https://issues.apache.org/jira/browse/HADOOP-13363
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 3.0.0-alpha1, 3.0.0-alpha2
>Reporter: Allen Wittenauer
> Attachments: HADOOP-13363.001.patch, HADOOP-13363.002.patch, 
> HADOOP-13363.003.patch, HADOOP-13363.004.patch, HADOOP-13363.005.patch
>
>
> Standard protobuf 2.5.0 does not work properly on many platforms.  (See, for 
> example, https://gist.github.com/BennettSmith/7111094 ).  In order for us to 
> avoid crazy work arounds in the build environment and the fact that 2.5.0 is 
> starting to slowly disappear as a standard install-able package for even 
> Linux/x86, we need to either upgrade or self bundle or something else.






[jira] [Commented] (HADOOP-13866) Upgrade netty-all to 4.1.1.Final

2017-02-08 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13866?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15859123#comment-15859123
 ] 

stack commented on HADOOP-13866:


+1 for master branch then (after update to latest 4.1.x). We'll need to figure 
out what to do for branch-2... Thanks.

> Upgrade netty-all to 4.1.1.Final
> 
>
> Key: HADOOP-13866
> URL: https://issues.apache.org/jira/browse/HADOOP-13866
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 3.0.0-alpha1
>Reporter: Ted Yu
>Assignee: Ted Yu
>Priority: Blocker
> Attachments: HADOOP-13866.v1.patch, HADOOP-13866.v2.patch, 
> HADOOP-13866.v3.patch, HADOOP-13866.v4.patch, HADOOP-13866.v6.patch, 
> HADOOP-13866.v7.patch, HADOOP-13866.v8.patch, HADOOP-13866.v8.patch, 
> HADOOP-13866.v8.patch
>
>
> netty-all 4.1.1.Final is stable release which we should upgrade to.
> See bottom of HADOOP-12927 for related discussion.
> This issue was discovered since hbase 2.0 uses 4.1.1.Final of netty.
> When launching mapreduce job from hbase, /grid/0/hadoop/yarn/local/  
> usercache/hbase/appcache/application_1479850535804_0008/container_e01_1479850535804_0008_01_05/mr-framework/hadoop/share/hadoop/hdfs/lib/netty-all-4.0.23.Final.jar
>  (from hdfs) is ahead of 4.1.1.Final jar (from hbase) on the classpath.
> Resulting in the following exception:
> {code}
> 2016-12-01 20:17:26,678 WARN [Default-IPC-NioEventLoopGroup-1-1] 
> io.netty.util.concurrent.DefaultPromise: An exception was thrown by 
> org.apache.hadoop.hbase.ipc.NettyRpcConnection$3.operationComplete()
> java.lang.NoSuchMethodError: 
> io.netty.buffer.ByteBuf.retainedDuplicate()Lio/netty/buffer/ByteBuf;
> at 
> org.apache.hadoop.hbase.ipc.NettyRpcConnection$3.operationComplete(NettyRpcConnection.java:272)
> at 
> org.apache.hadoop.hbase.ipc.NettyRpcConnection$3.operationComplete(NettyRpcConnection.java:262)
> at 
> io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:680)
> at 
> io.netty.util.concurrent.DefaultPromise.notifyListeners0(DefaultPromise.java:603)
> at 
> io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:563)
> at 
> io.netty.util.concurrent.DefaultPromise.trySuccess(DefaultPromise.java:406)
> {code}






[jira] [Commented] (HADOOP-13866) Upgrade netty-all to 4.1.1.Final

2017-02-07 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13866?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15856940#comment-15856940
 ] 

stack commented on HADOOP-13866:


Latest netty release on 4.1 is 4.1.8, not 4.1.1. Might want to go to it.

bq. From stack and Andrew's comments above, it seems we want this in branch-2, 
but that should better go with HADOOP-14043. 

Maybe [~andrew.wang] can offer clarity. He seems to be saying it needs to be shaded 
everywhere -- branch-2 through branch-3 through trunk. Maybe I'm misreading...

> Upgrade netty-all to 4.1.1.Final
> 
>
> Key: HADOOP-13866
> URL: https://issues.apache.org/jira/browse/HADOOP-13866
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 3.0.0-alpha1
>Reporter: Ted Yu
>Assignee: Ted Yu
>Priority: Blocker
> Attachments: HADOOP-13866.v1.patch, HADOOP-13866.v2.patch, 
> HADOOP-13866.v3.patch, HADOOP-13866.v4.patch, HADOOP-13866.v6.patch, 
> HADOOP-13866.v7.patch, HADOOP-13866.v8.patch, HADOOP-13866.v8.patch, 
> HADOOP-13866.v8.patch
>
>
> netty-all 4.1.1.Final is stable release which we should upgrade to.
> See bottom of HADOOP-12927 for related discussion.
> This issue was discovered since hbase 2.0 uses 4.1.1.Final of netty.
> When launching mapreduce job from hbase, /grid/0/hadoop/yarn/local/  
> usercache/hbase/appcache/application_1479850535804_0008/container_e01_1479850535804_0008_01_05/mr-framework/hadoop/share/hadoop/hdfs/lib/netty-all-4.0.23.Final.jar
>  (from hdfs) is ahead of 4.1.1.Final jar (from hbase) on the classpath.
> Resulting in the following exception:
> {code}
> 2016-12-01 20:17:26,678 WARN [Default-IPC-NioEventLoopGroup-1-1] 
> io.netty.util.concurrent.DefaultPromise: An exception was thrown by 
> org.apache.hadoop.hbase.ipc.NettyRpcConnection$3.operationComplete()
> java.lang.NoSuchMethodError: 
> io.netty.buffer.ByteBuf.retainedDuplicate()Lio/netty/buffer/ByteBuf;
> at 
> org.apache.hadoop.hbase.ipc.NettyRpcConnection$3.operationComplete(NettyRpcConnection.java:272)
> at 
> org.apache.hadoop.hbase.ipc.NettyRpcConnection$3.operationComplete(NettyRpcConnection.java:262)
> at 
> io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:680)
> at 
> io.netty.util.concurrent.DefaultPromise.notifyListeners0(DefaultPromise.java:603)
> at 
> io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:563)
> at 
> io.netty.util.concurrent.DefaultPromise.trySuccess(DefaultPromise.java:406)
> {code}






[jira] [Commented] (HADOOP-13866) Upgrade netty-all to 4.1.1.Final

2017-02-03 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13866?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15851878#comment-15851878
 ] 

stack commented on HADOOP-13866:


I tried the patch here against the hadoop 2.7 branch and it fixed my netty 
issue running hbase master.

+1 on commit to trunk and hadoop3 branch.

Can we get it in 2.8 branch too? How can I help?

It would be obnoxious asking for it to be committed to 2.7. As is, hbase-2 won't 
work with anything without this patch (unless hbase shades its netty). I'll ask 
anyways.

Here is some background on how the incompatibility came in, in case it helps:

The clashing io.netty 4 (4.0.23.Final) gets added to hadoop with the commit 
below, which landed before the release of hadoop-2.7.0:
commit bbdd990864d677e99b8fc73bdf720d66e2187d2c
Author: Haohui Mai 
Date: Tue Oct 28 16:53:53 2014 -0700
HDFS-7280. Use netty 4 in WebImageViewer. Contributed by Haohui Mai.

The hbase commit that moves us from a compatible 4.0.30 to an incompatible 
4.1.1 is:
commit bd45cf34762332a3a51f605798a3e050e7a1e62e
Author: Jurriaan Mous 
Date: Fri Jun 10 17:57:42 2016 +0200
HBASE-16004 Update to Netty 4.1.1
Signed-off-by: stack 

Maybe we could go back, but we will be leaning heavily on netty by the time 2.0 
ships (it is the basis of the new async client, it is used in our async dfsclient 
implementation -- hopefully moved back up to hdfs at some point, but for now 
living in hbase -- and it is looking like netty will be our new rpc chassis too 
by the time 2.0 ships).

hadoop-2.6.x doesn't have the HDFS-7280 commit, so hbase-2.0.0 should work with 
it, but not with later versions of hadoop.

I can test other combos if that will help get stuff committed. Just say. Thanks.

> Upgrade netty-all to 4.1.1.Final
> 
>
> Key: HADOOP-13866
> URL: https://issues.apache.org/jira/browse/HADOOP-13866
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 3.0.0-alpha1
>Reporter: Ted Yu
>Assignee: Ted Yu
>Priority: Blocker
> Attachments: HADOOP-13866.v1.patch, HADOOP-13866.v2.patch, 
> HADOOP-13866.v3.patch, HADOOP-13866.v4.patch, HADOOP-13866.v6.patch, 
> HADOOP-13866.v7.patch, HADOOP-13866.v8.patch, HADOOP-13866.v8.patch, 
> HADOOP-13866.v8.patch
>
>
> netty-all 4.1.1.Final is stable release which we should upgrade to.
> See bottom of HADOOP-12927 for related discussion.
> This issue was discovered since hbase 2.0 uses 4.1.1.Final of netty.
> When launching mapreduce job from hbase, /grid/0/hadoop/yarn/local/  
> usercache/hbase/appcache/application_1479850535804_0008/container_e01_1479850535804_0008_01_05/mr-framework/hadoop/share/hadoop/hdfs/lib/netty-all-4.0.23.Final.jar
>  (from hdfs) is ahead of 4.1.1.Final jar (from hbase) on the classpath.
> Resulting in the following exception:
> {code}
> 2016-12-01 20:17:26,678 WARN [Default-IPC-NioEventLoopGroup-1-1] 
> io.netty.util.concurrent.DefaultPromise: An exception was thrown by 
> org.apache.hadoop.hbase.ipc.NettyRpcConnection$3.operationComplete()
> java.lang.NoSuchMethodError: 
> io.netty.buffer.ByteBuf.retainedDuplicate()Lio/netty/buffer/ByteBuf;
> at 
> org.apache.hadoop.hbase.ipc.NettyRpcConnection$3.operationComplete(NettyRpcConnection.java:272)
> at 
> org.apache.hadoop.hbase.ipc.NettyRpcConnection$3.operationComplete(NettyRpcConnection.java:262)
> at 
> io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:680)
> at 
> io.netty.util.concurrent.DefaultPromise.notifyListeners0(DefaultPromise.java:603)
> at 
> io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:563)
> at 
> io.netty.util.concurrent.DefaultPromise.trySuccess(DefaultPromise.java:406)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13433) Race in UGI.reloginFromKeytab

2017-01-26 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13433?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15840614#comment-15840614
 ] 

stack commented on HADOOP-13433:


Thanks [~steve_l] for taking care of the commit.

> Race in UGI.reloginFromKeytab
> -
>
> Key: HADOOP-13433
> URL: https://issues.apache.org/jira/browse/HADOOP-13433
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.8.0, 2.7.3, 2.6.5, 3.0.0-alpha1
>Reporter: Duo Zhang
>Assignee: Duo Zhang
> Fix For: 2.9.0, 3.0.0-alpha3
>
> Attachments: HADOOP-13433-branch-2.patch, HADOOP-13433.patch, 
> HADOOP-13433-v1.patch, HADOOP-13433-v2.patch, HADOOP-13433-v4.patch, 
> HADOOP-13433-v5.patch, HADOOP-13433-v6.patch, HBASE-13433-testcase-v3.patch
>
>
> This is a problem that has troubled us for several years. For our HBase 
> cluster, sometimes the RS will be stuck due to
> {noformat}
> 2016-06-20,03:44:12,936 INFO org.apache.hadoop.ipc.SecureClient: Exception 
> encountered while connecting to the server :
> javax.security.sasl.SaslException: GSS initiate failed [Caused by 
> GSSException: No valid credentials provided (Mechanism level: The ticket 
> isn't for us (35) - BAD TGS SERVER NAME)]
> at 
> com.sun.security.sasl.gsskerb.GssKrb5Client.evaluateChallenge(GssKrb5Client.java:194)
> at 
> org.apache.hadoop.hbase.security.HBaseSaslRpcClient.saslConnect(HBaseSaslRpcClient.java:140)
> at 
> org.apache.hadoop.hbase.ipc.SecureClient$SecureConnection.setupSaslConnection(SecureClient.java:187)
> at 
> org.apache.hadoop.hbase.ipc.SecureClient$SecureConnection.access$700(SecureClient.java:95)
> at 
> org.apache.hadoop.hbase.ipc.SecureClient$SecureConnection$2.run(SecureClient.java:325)
> at 
> org.apache.hadoop.hbase.ipc.SecureClient$SecureConnection$2.run(SecureClient.java:322)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:396)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1781)
> at sun.reflect.GeneratedMethodAccessor23.invoke(Unknown Source)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
> at java.lang.reflect.Method.invoke(Method.java:597)
> at org.apache.hadoop.hbase.util.Methods.call(Methods.java:37)
> at org.apache.hadoop.hbase.security.User.call(User.java:607)
> at org.apache.hadoop.hbase.security.User.access$700(User.java:51)
> at 
> org.apache.hadoop.hbase.security.User$SecureHadoopUser.runAs(User.java:461)
> at 
> org.apache.hadoop.hbase.ipc.SecureClient$SecureConnection.setupIOstreams(SecureClient.java:321)
> at 
> org.apache.hadoop.hbase.ipc.HBaseClient.getConnection(HBaseClient.java:1164)
> at org.apache.hadoop.hbase.ipc.HBaseClient.call(HBaseClient.java:1004)
> at 
> org.apache.hadoop.hbase.ipc.SecureRpcEngine$Invoker.invoke(SecureRpcEngine.java:107)
> at $Proxy24.replicateLogEntries(Unknown Source)
> at 
> org.apache.hadoop.hbase.replication.regionserver.ReplicationSource.shipEdits(ReplicationSource.java:962)
> at 
> org.apache.hadoop.hbase.replication.regionserver.ReplicationSource.runLoop(ReplicationSource.java:466)
> at 
> org.apache.hadoop.hbase.replication.regionserver.ReplicationSource.run(ReplicationSource.java:515)
> Caused by: GSSException: No valid credentials provided (Mechanism level: The 
> ticket isn't for us (35) - BAD TGS SERVER NAME)
> at 
> sun.security.jgss.krb5.Krb5Context.initSecContext(Krb5Context.java:663)
> at 
> sun.security.jgss.GSSContextImpl.initSecContext(GSSContextImpl.java:248)
> at 
> sun.security.jgss.GSSContextImpl.initSecContext(GSSContextImpl.java:180)
> at 
> com.sun.security.sasl.gsskerb.GssKrb5Client.evaluateChallenge(GssKrb5Client.java:175)
> ... 23 more
> Caused by: KrbException: The ticket isn't for us (35) - BAD TGS SERVER NAME
> at sun.security.krb5.KrbTgsRep.<init>(KrbTgsRep.java:64)
> at sun.security.krb5.KrbTgsReq.getReply(KrbTgsReq.java:185)
> at 
> sun.security.krb5.internal.CredentialsUtil.serviceCreds(CredentialsUtil.java:294)
> at 
> sun.security.krb5.internal.CredentialsUtil.acquireServiceCreds(CredentialsUtil.java:106)
> at 
> sun.security.krb5.Credentials.acquireServiceCreds(Credentials.java:557)
> at 
> sun.security.jgss.krb5.Krb5Context.initSecContext(Krb5Context.java:594)
> ... 26 more
> Caused by: KrbException: Identifier doesn't match expected value (906)
> at sun.security.krb5.internal.KDCRep.init(KDCRep.java:133)
> at sun.security.krb5.internal.TGSRep.init(TGSRep.java:58)
> at 

[jira] [Commented] (HADOOP-13433) Race in UGI.reloginFromKeytab

2017-01-24 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13433?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15836380#comment-15836380
 ] 

stack commented on HADOOP-13433:


So, the patch is only suited to branch-3 [~Apache9]? Worth a version w/ UTs that
will work for branch-2? What do you think of the failed UT above? Is it related?
Thanks.

> Race in UGI.reloginFromKeytab
> -
>
> Key: HADOOP-13433
> URL: https://issues.apache.org/jira/browse/HADOOP-13433
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.8.0, 2.7.3, 2.6.5, 3.0.0-alpha1
>Reporter: Duo Zhang
>Assignee: Duo Zhang
> Fix For: 2.8.0, 2.7.4, 3.0.0-alpha2, 2.6.6
>
> Attachments: HADOOP-13433.patch, HADOOP-13433-v1.patch, 
> HADOOP-13433-v2.patch, HADOOP-13433-v4.patch, HADOOP-13433-v5.patch, 
> HBASE-13433-testcase-v3.patch
>
>
> This is a problem that has troubled us for several years. For our HBase 
> cluster, sometimes the RS will be stuck due to
> {noformat}
> 2016-06-20,03:44:12,936 INFO org.apache.hadoop.ipc.SecureClient: Exception 
> encountered while connecting to the server :
> javax.security.sasl.SaslException: GSS initiate failed [Caused by 
> GSSException: No valid credentials provided (Mechanism level: The ticket 
> isn't for us (35) - BAD TGS SERVER NAME)]
> at 
> com.sun.security.sasl.gsskerb.GssKrb5Client.evaluateChallenge(GssKrb5Client.java:194)
> at 
> org.apache.hadoop.hbase.security.HBaseSaslRpcClient.saslConnect(HBaseSaslRpcClient.java:140)
> at 
> org.apache.hadoop.hbase.ipc.SecureClient$SecureConnection.setupSaslConnection(SecureClient.java:187)
> at 
> org.apache.hadoop.hbase.ipc.SecureClient$SecureConnection.access$700(SecureClient.java:95)
> at 
> org.apache.hadoop.hbase.ipc.SecureClient$SecureConnection$2.run(SecureClient.java:325)
> at 
> org.apache.hadoop.hbase.ipc.SecureClient$SecureConnection$2.run(SecureClient.java:322)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:396)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1781)
> at sun.reflect.GeneratedMethodAccessor23.invoke(Unknown Source)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
> at java.lang.reflect.Method.invoke(Method.java:597)
> at org.apache.hadoop.hbase.util.Methods.call(Methods.java:37)
> at org.apache.hadoop.hbase.security.User.call(User.java:607)
> at org.apache.hadoop.hbase.security.User.access$700(User.java:51)
> at 
> org.apache.hadoop.hbase.security.User$SecureHadoopUser.runAs(User.java:461)
> at 
> org.apache.hadoop.hbase.ipc.SecureClient$SecureConnection.setupIOstreams(SecureClient.java:321)
> at 
> org.apache.hadoop.hbase.ipc.HBaseClient.getConnection(HBaseClient.java:1164)
> at org.apache.hadoop.hbase.ipc.HBaseClient.call(HBaseClient.java:1004)
> at 
> org.apache.hadoop.hbase.ipc.SecureRpcEngine$Invoker.invoke(SecureRpcEngine.java:107)
> at $Proxy24.replicateLogEntries(Unknown Source)
> at 
> org.apache.hadoop.hbase.replication.regionserver.ReplicationSource.shipEdits(ReplicationSource.java:962)
> at 
> org.apache.hadoop.hbase.replication.regionserver.ReplicationSource.runLoop(ReplicationSource.java:466)
> at 
> org.apache.hadoop.hbase.replication.regionserver.ReplicationSource.run(ReplicationSource.java:515)
> Caused by: GSSException: No valid credentials provided (Mechanism level: The 
> ticket isn't for us (35) - BAD TGS SERVER NAME)
> at 
> sun.security.jgss.krb5.Krb5Context.initSecContext(Krb5Context.java:663)
> at 
> sun.security.jgss.GSSContextImpl.initSecContext(GSSContextImpl.java:248)
> at 
> sun.security.jgss.GSSContextImpl.initSecContext(GSSContextImpl.java:180)
> at 
> com.sun.security.sasl.gsskerb.GssKrb5Client.evaluateChallenge(GssKrb5Client.java:175)
> ... 23 more
> Caused by: KrbException: The ticket isn't for us (35) - BAD TGS SERVER NAME
> at sun.security.krb5.KrbTgsRep.<init>(KrbTgsRep.java:64)
> at sun.security.krb5.KrbTgsReq.getReply(KrbTgsReq.java:185)
> at 
> sun.security.krb5.internal.CredentialsUtil.serviceCreds(CredentialsUtil.java:294)
> at 
> sun.security.krb5.internal.CredentialsUtil.acquireServiceCreds(CredentialsUtil.java:106)
> at 
> sun.security.krb5.Credentials.acquireServiceCreds(Credentials.java:557)
> at 
> sun.security.jgss.krb5.Krb5Context.initSecContext(Krb5Context.java:594)
> ... 26 more
> Caused by: KrbException: Identifier doesn't match expected value (906)
> at sun.security.krb5.internal.KDCRep.init(KDCRep.java:133)
> at 

[jira] [Commented] (HADOOP-13433) Race in UGI.reloginFromKeytab

2017-01-09 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13433?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15812310#comment-15812310
 ] 

stack commented on HADOOP-13433:


Should the test case be integrated into the patch [~Apache9]?

Have you deployed this fix on your clusters?

Patch LGTM. 

Any opinion mighty [~steve_l]?

> Race in UGI.reloginFromKeytab
> -
>
> Key: HADOOP-13433
> URL: https://issues.apache.org/jira/browse/HADOOP-13433
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Reporter: Duo Zhang
>Assignee: Duo Zhang
> Attachments: HADOOP-13433-v1.patch, HADOOP-13433-v2.patch, 
> HADOOP-13433.patch, HBASE-13433-testcase-v3.patch
>
>
> This is a problem that has troubled us for several years. For our HBase 
> cluster, sometimes the RS will be stuck due to
> {noformat}
> 2016-06-20,03:44:12,936 INFO org.apache.hadoop.ipc.SecureClient: Exception 
> encountered while connecting to the server :
> javax.security.sasl.SaslException: GSS initiate failed [Caused by 
> GSSException: No valid credentials provided (Mechanism level: The ticket 
> isn't for us (35) - BAD TGS SERVER NAME)]
> at 
> com.sun.security.sasl.gsskerb.GssKrb5Client.evaluateChallenge(GssKrb5Client.java:194)
> at 
> org.apache.hadoop.hbase.security.HBaseSaslRpcClient.saslConnect(HBaseSaslRpcClient.java:140)
> at 
> org.apache.hadoop.hbase.ipc.SecureClient$SecureConnection.setupSaslConnection(SecureClient.java:187)
> at 
> org.apache.hadoop.hbase.ipc.SecureClient$SecureConnection.access$700(SecureClient.java:95)
> at 
> org.apache.hadoop.hbase.ipc.SecureClient$SecureConnection$2.run(SecureClient.java:325)
> at 
> org.apache.hadoop.hbase.ipc.SecureClient$SecureConnection$2.run(SecureClient.java:322)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:396)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1781)
> at sun.reflect.GeneratedMethodAccessor23.invoke(Unknown Source)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
> at java.lang.reflect.Method.invoke(Method.java:597)
> at org.apache.hadoop.hbase.util.Methods.call(Methods.java:37)
> at org.apache.hadoop.hbase.security.User.call(User.java:607)
> at org.apache.hadoop.hbase.security.User.access$700(User.java:51)
> at 
> org.apache.hadoop.hbase.security.User$SecureHadoopUser.runAs(User.java:461)
> at 
> org.apache.hadoop.hbase.ipc.SecureClient$SecureConnection.setupIOstreams(SecureClient.java:321)
> at 
> org.apache.hadoop.hbase.ipc.HBaseClient.getConnection(HBaseClient.java:1164)
> at org.apache.hadoop.hbase.ipc.HBaseClient.call(HBaseClient.java:1004)
> at 
> org.apache.hadoop.hbase.ipc.SecureRpcEngine$Invoker.invoke(SecureRpcEngine.java:107)
> at $Proxy24.replicateLogEntries(Unknown Source)
> at 
> org.apache.hadoop.hbase.replication.regionserver.ReplicationSource.shipEdits(ReplicationSource.java:962)
> at 
> org.apache.hadoop.hbase.replication.regionserver.ReplicationSource.runLoop(ReplicationSource.java:466)
> at 
> org.apache.hadoop.hbase.replication.regionserver.ReplicationSource.run(ReplicationSource.java:515)
> Caused by: GSSException: No valid credentials provided (Mechanism level: The 
> ticket isn't for us (35) - BAD TGS SERVER NAME)
> at 
> sun.security.jgss.krb5.Krb5Context.initSecContext(Krb5Context.java:663)
> at 
> sun.security.jgss.GSSContextImpl.initSecContext(GSSContextImpl.java:248)
> at 
> sun.security.jgss.GSSContextImpl.initSecContext(GSSContextImpl.java:180)
> at 
> com.sun.security.sasl.gsskerb.GssKrb5Client.evaluateChallenge(GssKrb5Client.java:175)
> ... 23 more
> Caused by: KrbException: The ticket isn't for us (35) - BAD TGS SERVER NAME
> at sun.security.krb5.KrbTgsRep.<init>(KrbTgsRep.java:64)
> at sun.security.krb5.KrbTgsReq.getReply(KrbTgsReq.java:185)
> at 
> sun.security.krb5.internal.CredentialsUtil.serviceCreds(CredentialsUtil.java:294)
> at 
> sun.security.krb5.internal.CredentialsUtil.acquireServiceCreds(CredentialsUtil.java:106)
> at 
> sun.security.krb5.Credentials.acquireServiceCreds(Credentials.java:557)
> at 
> sun.security.jgss.krb5.Krb5Context.initSecContext(Krb5Context.java:594)
> ... 26 more
> Caused by: KrbException: Identifier doesn't match expected value (906)
> at sun.security.krb5.internal.KDCRep.init(KDCRep.java:133)
> at sun.security.krb5.internal.TGSRep.init(TGSRep.java:58)
> at sun.security.krb5.internal.TGSRep.<init>(TGSRep.java:53)
> at sun.security.krb5.KrbTgsRep.<init>(KrbTgsRep.java:46)
> 

[jira] [Commented] (HADOOP-13363) Upgrade protobuf from 2.5.0 to something newer

2016-10-18 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13363?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15587093#comment-15587093
 ] 

stack commented on HADOOP-13363:


Did a ML discussion happen?

pb3.1.0 is out. It runs in a 2.5.0 compatibility mode by default. Has some 
facility for saving on data copying that might be of interest in the NN. If 
upgrading, you need to run the newer protoc. Newer lib can't read the protos 
made by older protoc (IIRC). Newer protoc, in my experience, has no problem 
digesting pb 2.5.0 .proto files. The generated files are a little different, 
not consumable by the old protobuf lib.

Would this be a problem? Old clients can talk to the new servers because the
wire format is compatible. Is anyone consuming hadoop protos directly other
than hadoop? Are hadoop proto files considered InterfaceAudience.Private or
InterfaceAudience.Public? If the former, I could work on a patch for 3.0.0
(it'd be big but boring). Does Hadoop have Protobuf in its API anywhere? (I can
take a look, but I'm being lazy and asking here first.)

> Upgrade protobuf from 2.5.0 to something newer
> --
>
> Key: HADOOP-13363
> URL: https://issues.apache.org/jira/browse/HADOOP-13363
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 3.0.0-alpha1
>Reporter: Allen Wittenauer
> Attachments: HADOOP-13363.001.patch
>
>
> Standard protobuf 2.5.0 does not work properly on many platforms.  (See, for 
> example, https://gist.github.com/BennettSmith/7111094 ).  In order for us to 
> avoid crazy work arounds in the build environment and the fact that 2.5.0 is 
> starting to slowly disappear as a standard install-able package for even 
> Linux/x86, we need to either upgrade or self bundle or something else.






[jira] [Commented] (HADOOP-12910) Add new FileSystem API to support asynchronous method calls

2016-06-09 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12910?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15323884#comment-15323884
 ] 

stack commented on HADOOP-12910:


bq. Chaining/callbacks are nonessential in the sense that they can possibly be 
provided by other library but async is not. 

I think we have different understandings of what an async API is. IMO returning 
a Future is not enough to call an API async; callbacks are table stakes.

bq. I want to support chaining/callbacks but may not be necessarily in the 
first step. 

Nod. My understanding/experience is that downstreamers would have one way of 
consuming an API that returned futures only and then another one altogether of 
a different form to make use of callbacks. Would rather do the Async HDFS 
hookup one time only.

bq. ...but ListenableFuture using the same approach was developed recently.

Its license says 2007 (though the commit seems to be 2009
https://github.com/google/guava/commit/dc5915eb1072c61ff2c3c704af4ae36b25f97b6c#diff-79a52d8ade7341792c046e9c3a5715e0)
 so it has a bit of age on it, it seems.

Nothing 'wrong' with registering a listener/observer; all
promises/futures/deferreds do some form of this. The common complaint is that
callback handling fast becomes unwieldy... but if registering a callable is all
we have, we'd deal (hopefully without resorting to extracurricular libraries).

bq. The "rocket launching" was a bad joke. Sorry.

No worries. I like jokes.

Netty Future is in the CLASSPATH already but would have the same issues as 
ListenableFuture I suppose. It ain't that pretty either.
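The Future-vs-callback distinction argued above can be made concrete with a
small sketch. This is illustrative code, not anything from Hadoop: it contrasts
a poll-style `Future` API (the caller must park a thread in `get()`) with a
callback-capable `CompletableFuture` API for a hypothetical async rename. Note
`CompletableFuture` requires Java 8, which was the branch-2 sticking point.

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class AsyncStyles {
    private static final ExecutorService POOL = Executors.newSingleThreadExecutor();

    // Future-only API: the caller learns the outcome by blocking on get().
    static Future<Boolean> renameFuture(String src, String dst) {
        return POOL.submit(() -> true); // pretend the rename succeeded
    }

    // Callback-capable API: the caller chains work to run on completion.
    static CompletableFuture<Boolean> renameAsync(String src, String dst) {
        return CompletableFuture.supplyAsync(() -> true, POOL);
    }

    public static void main(String[] args) throws Exception {
        // Poll style: a thread is parked in get() until the result arrives.
        boolean ok = renameFuture("/a", "/b").get();
        System.out.println("blocking result: " + ok);

        // Callback style: the continuation runs when the result arrives;
        // no caller thread is parked waiting on it.
        renameAsync("/a", "/b")
            .thenApply(r -> "callback saw: " + r)
            .thenAccept(System.out::println)
            .join(); // join only so this demo does not exit early

        POOL.shutdown();
    }
}
```

A downstreamer coding against the Future-only flavor has no choice but the
blocking pattern in `main`; that is the sense in which "callbacks are table
stakes" for an event-driven consumer.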





> Add new FileSystem API to support asynchronous method calls
> ---
>
> Key: HADOOP-12910
> URL: https://issues.apache.org/jira/browse/HADOOP-12910
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs
>Reporter: Tsz Wo Nicholas Sze
>Assignee: Xiaobing Zhou
> Attachments: HADOOP-12910-HDFS-9924.000.patch, 
> HADOOP-12910-HDFS-9924.001.patch, HADOOP-12910-HDFS-9924.002.patch
>
>
> Add a new API, namely FutureFileSystem (or AsynchronousFileSystem, if it is a 
> better name).  All the APIs in FutureFileSystem are the same as FileSystem 
> except that the return type is wrapped by Future, e.g.
> {code}
>   //FileSystem
>   public boolean rename(Path src, Path dst) throws IOException;
>   //FutureFileSystem
>   public Future<Boolean> rename(Path src, Path dst) throws IOException;
> {code}
> Note that FutureFileSystem does not extend FileSystem.






[jira] [Commented] (HADOOP-12910) Add new FileSystem API to support asynchronous method calls

2016-06-09 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12910?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15323118#comment-15323118
 ] 

stack commented on HADOOP-12910:


bq. No, they are the same API. Branch-2 is a simplified version of trunk.

So they are not the same then, one is a 'simplified version'.

I think I can see where you are coming from with your thinking that 
chaining/callbacks are nonessential and with your fixation on 90s-era AWT-style 
APIs registering listeners (that launches rockets). I was just hoping we could 
learn from the past rather than repeat it. I was also hoping that downstreamers 
didn't have to perform contortions or rely on a 'library' to fill in missing, 
essential pieces.

On the latter suggestions, IMO, the pro for ListenableFuture -- that we'd have 
same API in branch-2 and in branch-3 -- outweighs all other concerns. Same for 
Deferred which seems nicer but w/ same cons. CompletableFuture is a large 
undertaking and going by your diggings, seems hard to do w/ our jdk8. Thanks.



> Add new FileSystem API to support asynchronous method calls
> ---
>
> Key: HADOOP-12910
> URL: https://issues.apache.org/jira/browse/HADOOP-12910
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs
>Reporter: Tsz Wo Nicholas Sze
>Assignee: Xiaobing Zhou
> Attachments: HADOOP-12910-HDFS-9924.000.patch, 
> HADOOP-12910-HDFS-9924.001.patch, HADOOP-12910-HDFS-9924.002.patch
>
>
> Add a new API, namely FutureFileSystem (or AsynchronousFileSystem, if it is a 
> better name).  All the APIs in FutureFileSystem are the same as FileSystem 
> except that the return type is wrapped by Future, e.g.
> {code}
>   //FileSystem
>   public boolean rename(Path src, Path dst) throws IOException;
>   //FutureFileSystem
>   public Future<Boolean> rename(Path src, Path dst) throws IOException;
> {code}
> Note that FutureFileSystem does not extend FileSystem.






[jira] [Commented] (HADOOP-12910) Add new FileSystem API to support asynchronous method calls

2016-06-07 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12910?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15318747#comment-15318747
 ] 

stack commented on HADOOP-12910:


Why the insistence on doing the async twice? Once for branch-2 and then with a 
totally different API in branch-3? Wouldn't doing it once be better all around 
given it is tricky at the best of times getting async correct and performant?

Why do the work in branch-2 and then keep it private, 'if it gets
complicated...'? Where does that leave willing contributors/users like
[~Apache9] (see his note above)?

Why invent an API (based on AWT experience with mouse-moved listeners (?)) 
rather than take on a proven one whose author is trying to help here and whose 
API surface is considerably less than the CompletableFuture kitchen-sink?

> Add new FileSystem API to support asynchronous method calls
> ---
>
> Key: HADOOP-12910
> URL: https://issues.apache.org/jira/browse/HADOOP-12910
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs
>Reporter: Tsz Wo Nicholas Sze
>Assignee: Xiaobing Zhou
> Attachments: HADOOP-12910-HDFS-9924.000.patch, 
> HADOOP-12910-HDFS-9924.001.patch, HADOOP-12910-HDFS-9924.002.patch
>
>
> Add a new API, namely FutureFileSystem (or AsynchronousFileSystem, if it is a 
> better name).  All the APIs in FutureFileSystem are the same as FileSystem 
> except that the return type is wrapped by Future, e.g.
> {code}
>   //FileSystem
>   public boolean rename(Path src, Path dst) throws IOException;
>   //FutureFileSystem
>   public Future<Boolean> rename(Path src, Path dst) throws IOException;
> {code}
> Note that FutureFileSystem does not extend FileSystem.






[jira] [Commented] (HADOOP-12910) Add new FileSystem API to support asynchronous method calls

2016-06-06 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12910?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15317848#comment-15317848
 ] 

stack commented on HADOOP-12910:


bq. For branch-2, There are two possible ways to use Deferred (or 
ListenableFuture) :

IMO, it is user-abuse if there is one way to consume the HDFS API 
asynchronously in hadoop2 and another manner in hadoop3. Users and 
downstreamers have better things to do w/ their time than build a lattice of 
probing, brittle reflection and alternate code paths to match a wandering HDFS 
API.

bq. Any comments?

IMO, there is nothing undesirable about your choices #1 or #2 above. If you 
don't want a dependency, do #2. Regards #2, it doesn't matter that the Hadoop 
Deferred no longer matches 'other projects'. It doesn't have to. It is the 
proven API with a provenance and the clean documentation on how to use and what 
the contract is that we are after.

As to your FutureWithCallback, where does this come from? Have you built any 
event-driven apps with it? At first blush, it is lacking in vocabulary at least 
when put against Deferred or CompletableFuture. Thanks.

> Add new FileSystem API to support asynchronous method calls
> ---
>
> Key: HADOOP-12910
> URL: https://issues.apache.org/jira/browse/HADOOP-12910
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs
>Reporter: Tsz Wo Nicholas Sze
>Assignee: Xiaobing Zhou
> Attachments: HADOOP-12910-HDFS-9924.000.patch, 
> HADOOP-12910-HDFS-9924.001.patch, HADOOP-12910-HDFS-9924.002.patch
>
>
> Add a new API, namely FutureFileSystem (or AsynchronousFileSystem, if it is a 
> better name).  All the APIs in FutureFileSystem are the same as FileSystem 
> except that the return type is wrapped by Future, e.g.
> {code}
>   //FileSystem
>   public boolean rename(Path src, Path dst) throws IOException;
>   //FutureFileSystem
>   public Future<Boolean> rename(Path src, Path dst) throws IOException;
> {code}
> Note that FutureFileSystem does not extend FileSystem.






[jira] [Commented] (HADOOP-12910) Add new FileSystem API to support asynchronous method calls

2016-06-03 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12910?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15314599#comment-15314599
 ] 

stack commented on HADOOP-12910:


Back now [~cnauroth]

bq. ...whereas the scope of this issue has focused on asynchronous NameNode 
metadata operations

The discussion in here is more broad than just NN metadata operations. The 
summary and description would seem to encourage how we will add async to 
FileSystem generally. It seems like a good thing to nail given async is coming 
up in a couple of areas ([~Apache9]'s file ops and these NN calls). They should 
all align on their approach I'd say.

Regards a writeup, [~Apache9] has revived HDFS-916 and added doc on what is 
wanted doing async file ops. High-level, we want to be able to consume an HDFS 
API async, in an event-driven way. A radical experiment that totally replaces 
dfsclient with a simplified, bare-bones implementation that does the minimal 
subset necessary for writing HBase WALs (HBASE-14790) allows us to write much 
faster while using fewer resources. The implementation also does fan-out rather 
than pipeline. This put together with it being barebones -- e.g. we do not want 
to trigger pipeline recovery, it takes too long, if it works -- muddies the 
compare but the general drift is plain.

> Add new FileSystem API to support asynchronous method calls
> ---
>
> Key: HADOOP-12910
> URL: https://issues.apache.org/jira/browse/HADOOP-12910
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs
>Reporter: Tsz Wo Nicholas Sze
>Assignee: Xiaobing Zhou
> Attachments: HADOOP-12910-HDFS-9924.000.patch, 
> HADOOP-12910-HDFS-9924.001.patch, HADOOP-12910-HDFS-9924.002.patch
>
>
> Add a new API, namely FutureFileSystem (or AsynchronousFileSystem, if it is a 
> better name).  All the APIs in FutureFileSystem are the same as FileSystem 
> except that the return type is wrapped by Future, e.g.
> {code}
>   //FileSystem
>   public boolean rename(Path src, Path dst) throws IOException;
>   //FutureFileSystem
>   public Future<Boolean> rename(Path src, Path dst) throws IOException;
> {code}
> Note that FutureFileSystem does not extend FileSystem.






[jira] [Commented] (HADOOP-12910) Add new FileSystem API to support asynchronous method calls

2016-05-29 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12910?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15306068#comment-15306068
 ] 

stack commented on HADOOP-12910:


OpenTSDB is LGPL but Deferred is not. It looks to be BSD. A jar that has 
Deferred only is available up on mvnrepository here: 
http://mvnrepository.com/artifact/com.stumbleupon/async/1.4.1 so don't have to 
copy. If interested, we can ping the author to get further clarity on license.

> Add new FileSystem API to support asynchronous method calls
> ---
>
> Key: HADOOP-12910
> URL: https://issues.apache.org/jira/browse/HADOOP-12910
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs
>Reporter: Tsz Wo Nicholas Sze
>Assignee: Xiaobing Zhou
> Attachments: HADOOP-12910-HDFS-9924.000.patch, 
> HADOOP-12910-HDFS-9924.001.patch, HADOOP-12910-HDFS-9924.002.patch
>
>
> Add a new API, namely FutureFileSystem (or AsynchronousFileSystem, if it is a 
> better name).  All the APIs in FutureFileSystem are the same as FileSystem 
> except that the return type is wrapped by Future, e.g.
> {code}
>   //FileSystem
>   public boolean rename(Path src, Path dst) throws IOException;
>   //FutureFileSystem
>   public Future<Boolean> rename(Path src, Path dst) throws IOException;
> {code}
> Note that FutureFileSystem does not extend FileSystem.






[jira] [Commented] (HADOOP-12910) Add new FileSystem API to support asynchronous method calls

2016-05-27 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12910?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15304159#comment-15304159
 ] 

stack commented on HADOOP-12910:


Copy/paste of Deferred would work. It does callbacks, has a respectable
ancestry (a copy of the Twisted Python pattern), a long, proven track record
across a few projects, it is well-documented, and a narrower API than
CompletableFuture so less to implement. Downside (minor) is it is not like
CompletableFuture.

bq. but if Future is sufficient for the current set of usecases, then let's 
go with this plan.

A Future alone is not enough for HBase (and Kudu?); we need a callback. We don't want 
to have to consume the async API one way when going against H2 and then in 
another manner when on top of H3.
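To illustrate the point being argued -- a plain Future can only be polled or blocked on, while CompletableFuture lets the caller register a completion callback up front, Deferred-style -- here is a minimal JDK-only sketch (the class and method names are invented for illustration):

```java
import java.util.concurrent.CompletableFuture;

public class CallbackSketch {
    // Kick off an async "rename" and register a callback immediately,
    // instead of parking a thread on Future#get() until the result arrives.
    static CompletableFuture<Integer> renameThenMeasure(String result) {
        return CompletableFuture
            .supplyAsync(() -> result)      // the async operation
            .thenApply(String::length);     // completion callback, runs when done
    }

    public static void main(String[] args) {
        System.out.println(renameThenMeasure("renamed").join()); // prints 7
    }
}
```

With a bare Future, the `thenApply` step would instead be a thread blocked in `get()`.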







[jira] [Commented] (HADOOP-12910) Add new FileSystem API to support asynchronous method calls

2016-05-26 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12910?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15303503#comment-15303503
 ] 

stack commented on HADOOP-12910:


bq. Why making the entire async feature unavailable in branch 2? 

I'd think a new API on HDFS with a new semantic showing up late in the life of 
H2 would come as a surprise to most and seems like a natural H3 differentiator. 
But no matter. Sounds like you are targeting H2.

bq. I suggest returning Future (or a sub-interface to support callbacks) in 
branch 2 and CompletableFuture (or our own implementation of CompletableFuture) 
in trunk. In this way, trunk is backward compatible to branch 2 since 
CompletableFuture implements Future.

I suggest it should be the same in branch 2 and branch 3, given how much work the spec 
and implementation will be. You can't consume the API in a 
non-blocking/asynchronous way if you can't register a callback. So a Future 
alone is not sufficient; to put it another way, if H2 returns a Future 
only, and H3 allows registering callbacks, then the implementation and consumers 
will have to change going from H2 to H3.









[jira] [Commented] (HADOOP-12910) Add new FileSystem API to support asynchronous method calls

2016-05-26 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12910?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15302854#comment-15302854
 ] 

stack commented on HADOOP-12910:


Good suggestion. We'll be back.







[jira] [Commented] (HADOOP-12910) Add new FileSystem API to support asynchronous method calls

2016-05-25 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12910?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15301094#comment-15301094
 ] 

stack commented on HADOOP-12910:


The patch here converts one method only? Is the intent to do all methods (along 
with the spec suggested by [~steve_l])?

Is this issue for an AsyncFileSystem or for an async rename only? Are we 
targeting H3 only or is there some thought that this could get pulled back into 
H2?

bq. Futures are a good match for the use case where the consumer wants to kick 
of a multitude of async requests and wait until they are all done to make 
progress, but we've found that there are also compelling use cases where you 
want a small amount of logic and further async I/O in a completion handler, so 
I might recommend supporting both Future-based results as well as 
callback-based results.

A few of us (mainly [~Apache9]) are looking at being able to go async against 
HDFS. There is already a stripped-down async subset of DFSClient, done by 
[~Apache9], that we are using to write our WALs; it uses far fewer resources while 
going much faster (see HBASE-14790). As Duo says, we want to push this up into 
HDFS, and given our good experience with this effort, we want to convert over 
more of our HDFS connection to be async. Parking a resource waiting on a Future 
to complete, or keeping some list/queue of Futures which we check periodically to 
see if they are 'done', is much less attractive (and less performant) than being 
instead notified on completion -- a callback (as [~bobhansen] suggests above in 
the comment repeated here). Ideally we'd like to move our interaction with 
HDFS to be event-driven (ultimately backing this up all the way into the guts 
of the regionserver, but that is another story).

Is it OK if we put up a suggested patch that, say, presumes jdk8/H3 only and, instead of 
returning Future, returns a jdk8 CompletableFuture? Chatting yesterday, we think 
we could consume/feed HDFS in a non-blocking way if we got back a 
CompletableFuture (or we could add a callback handler as a parameter on a method, if 
folks preferred that). We'd put up a sketch patch and, if it is amenable, we could 
start up a sympathetic spec doc as a subtask so that code and spec arrive at the 
same time?

Thanks.
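The consumption style proposed above -- further async I/O chained in a completion handler rather than a thread parked on Future#get() -- can be sketched as follows, assuming a hypothetical CompletableFuture-returning rename (not the real HDFS API; all names here are illustrative):

```java
import java.util.concurrent.CompletableFuture;

public class ChainedAsyncSketch {
    // Hypothetical CompletableFuture-returning rename (not the HDFS client).
    static CompletableFuture<Boolean> asyncRename(String src, String dst) {
        return CompletableFuture.supplyAsync(() -> !src.equals(dst));
    }

    // Event-driven consumption: the next async step is chained in the
    // completion handler, so no thread blocks waiting for the rename.
    static CompletableFuture<String> renameThenSync(String src, String dst) {
        return asyncRename(src, dst).thenCompose(ok -> ok
            ? CompletableFuture.supplyAsync(() -> "synced")
            : CompletableFuture.completedFuture("failed"));
    }

    public static void main(String[] args) {
        System.out.println(renameThenSync("/wal.0", "/wal.1").join()); // prints synced
    }
}
```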







[jira] [Commented] (HADOOP-12447) Clean up some htrace integration issues

2015-09-29 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12447?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14935080#comment-14935080
 ] 

stack commented on HADOOP-12447:


+1 from me. Cleans up the doc so it refers to the 4.0.1 htrace API. Fixes configs, 
removing no-longer-referenced keys.

> Clean up some htrace integration issues
> ---
>
> Key: HADOOP-12447
> URL: https://issues.apache.org/jira/browse/HADOOP-12447
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: tracing
>Affects Versions: 2.8.0
>Reporter: Colin Patrick McCabe
>Assignee: Colin Patrick McCabe
> Attachments: HADOOP-12447.002.patch
>
>
> Clean up some htrace integration issues





[jira] [Commented] (HADOOP-12403) Enable multiple writes in flight for HBase WAL writing backed by WASB

2015-09-11 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12403?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14741083#comment-14741083
 ] 

stack commented on HADOOP-12403:


bq. The latest HBase WAL write model (HBASE-8755) uses multiple AsyncSyncer 
threads to sync data to HDFS.

It would be preferable if we did not have to do this against the HDFS client. A 
single thread doing syncs back-to-back would be ideal, but experiment showed 
that 5 threads each running a sync seem to be optimal (throughput-wise) for 
setting up a syncing pipeline. We need to dig in as to why 5, and why this is 
needed at all. Just FYI.

> Enable multiple writes in flight for HBase WAL writing backed by WASB
> -
>
> Key: HADOOP-12403
> URL: https://issues.apache.org/jira/browse/HADOOP-12403
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: azure
>Reporter: Duo Xu
>Assignee: Duo Xu
> Attachments: HADOOP-12403.01.patch, HADOOP-12403.02.patch, 
> HADOOP-12403.03.patch
>
>
> Azure HDI HBase clusters use Azure blob storage as the file system. We found that 
> the bottleneck was writing to the write-ahead log (WAL). The latest HBase 
> WAL write model (HBASE-8755) uses multiple AsyncSyncer threads to sync data 
> to HDFS. However, our WASB driver is still based on a single-thread model. 
> Thus, when the sync threads call into the WASB layer, only one thread at a time 
> is allowed to send data to Azure storage. This jira introduces a new 
> write model in the WASB layer to allow multiple writes in parallel.
> 1. Since We use page blob for WAL, this will cause "holes" in the page blob 
> as every write starts on a new page. We use the first two bytes of every page 
> to record the actual data size of the current page.
> 2. When reading WAL, we need to know the actual size of the WAL. This should 
> be the sum of the number represented by the first two bytes of every page. 
> However looping over every page to get the size will be very slow, 
> considering normal WAL size is 128MB and each page is 512 bytes. So during 
> writing, every time a write succeeds, a metadata of the blob called 
> "total_data_uploaded" will be updated.
> 3. Although we allow multiple writes in flight, we need to make sure the sync 
> threads which call into WASB layers return in order. Reading HBase source 
> code FSHLog.java, we find that every sync request is associated with a 
> transaction id. If the sync succeeds, all the transactions prior to this 
> transaction id are assumed to be in Azure Storage. We use a queue to store 
> the sync requests and make sure they return to HBase layer in order.
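The page layout described in point 1 above can be illustrated as follows. This is a sketch for illustration, not the WASB driver code, and the names are invented: each 512-byte page starts with a 2-byte header recording the actual data size, so a reader can sum headers rather than scan payloads.

```java
import java.nio.ByteBuffer;

// Illustrative sketch of the described page-blob layout (not WASB code):
// a fixed 512-byte page whose first two bytes hold the payload length.
public class PageHeaderSketch {
    static final int PAGE_SIZE = 512;
    static final int HEADER_SIZE = 2;

    // Build one page: 2-byte length header followed by the payload,
    // zero-padded out to the full page size.
    static byte[] writePage(byte[] data) {
        if (data.length > PAGE_SIZE - HEADER_SIZE) {
            throw new IllegalArgumentException("payload too large for one page");
        }
        ByteBuffer page = ByteBuffer.allocate(PAGE_SIZE);
        page.putShort((short) data.length);  // the 2-byte data-size header
        page.put(data);                      // the payload; remainder stays zeroed
        return page.array();
    }

    // Read the actual data size back from a page's header.
    static int dataSize(byte[] page) {
        return ByteBuffer.wrap(page).getShort();
    }
}
```

Summing `dataSize` over all pages gives the WAL's actual length, which is why the description also caches that sum in the `total_data_uploaded` blob metadata to avoid the per-page loop.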





[jira] [Commented] (HADOOP-12201) Add tracing to FileSystem#createFileSystem and Globber#glob

2015-07-08 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12201?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14619310#comment-14619310
 ] 

stack commented on HADOOP-12201:


+1

I like the pretty picture [~cmccabe]

 Add tracing to FileSystem#createFileSystem and Globber#glob
 ---

 Key: HADOOP-12201
 URL: https://issues.apache.org/jira/browse/HADOOP-12201
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.6.0
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe
 Attachments: HADOOP-12201.001.patch, createfilesystem.png


 Add tracing to FileSystem#createFileSystem and Globber#glob





[jira] [Commented] (HADOOP-12171) Shorten overly-long htrace span names for server

2015-07-01 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12171?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14610828#comment-14610828
 ] 

stack commented on HADOOP-12171:


fullClassNameToTraceString looks like a utility that belongs in htrace rather 
than in the hadoop RPC util. We could add it here for now, deprecated, to be replaced 
by an htrace implementation.

Call it toTraceString or toTraceName or toTraceKey ... since what is passed in 
is not a class name, we do more than just shorten the passed String, and our 
output is used as the key for the trace.

Otherwise, LGTM [~cmccabe]
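A minimal sketch of the shortening under discussion (toTraceName is one of the candidate names suggested above; the implementation below is illustrative, not the actual Hadoop utility): keep only the simple class name and the method, joined with '#'.

```java
// Illustrative sketch: shorten "pkg.a.b.ClassName.method" to "ClassName#method".
public class TraceNameSketch {
    static String toTraceName(String fullMethodName) {
        int lastDot = fullMethodName.lastIndexOf('.');
        if (lastDot < 0) {
            return fullMethodName;  // no package/class prefix to strip
        }
        String method = fullMethodName.substring(lastDot + 1);
        // Find the dot before the simple class name (may be absent).
        int classDot = fullMethodName.lastIndexOf('.', lastDot - 1);
        String simpleClass = fullMethodName.substring(classDot + 1, lastDot);
        return simpleClass + "#" + method;
    }
}
```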







 Shorten overly-long htrace span names for server
 

 Key: HADOOP-12171
 URL: https://issues.apache.org/jira/browse/HADOOP-12171
 Project: Hadoop Common
  Issue Type: Bug
  Components: tracing
Affects Versions: 2.6.0
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe
 Attachments: HADOOP-12171.001.patch


 Shorten overly-long htrace span names for the server.  For example, 
 {{org.apache.hadoop.hdfs.protocol.ClientProtocol.create}} should be 
 {{ClientProtocol#create}} instead.





[jira] [Commented] (HADOOP-12171) Shorten overly-long htrace span names for server

2015-07-01 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12171?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14611192#comment-14611192
 ] 

stack commented on HADOOP-12171:


+1 on 002.

 Shorten overly-long htrace span names for server
 

 Key: HADOOP-12171
 URL: https://issues.apache.org/jira/browse/HADOOP-12171
 Project: Hadoop Common
  Issue Type: Bug
  Components: tracing
Affects Versions: 2.6.0
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe
 Attachments: HADOOP-12171.001.patch, HADOOP-12171.002.patch


 Shorten overly-long htrace span names for the server.  For example, 
 {{org.apache.hadoop.hdfs.protocol.ClientProtocol.create}} should be 
 {{ClientProtocol#create}} instead.





[jira] [Commented] (HADOOP-11656) Classpath isolation for downstream clients

2015-03-04 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11656?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14347234#comment-14347234
 ] 

stack commented on HADOOP-11656:


bq. To add, I think we can and should strive for doing this in a compatible 
manner, whatever the approach.

Sure. That sounds good, if possible at all, though it will be a load of work proving 
the changes are indeed compatible.

bq. Marking and calling it incompatible before we see proposal/patch seems 
premature to me.

I'd suggest you open a new issue to do classpath isolation in a 'compatible 
manner' rather than add this imposition here. In this issue, the reporter 
thinks it a breaking change ("At a minimum we'll break dependency compatibility 
and operational compatibility."). The two issues can move along independently of 
each other.

And to be clear, when we talk 'compatible manner', the expectation is that 
downstream apps, for example hbase, should be able to move from hadoop-2.X to 
hadoop-2.Y without breakage, right? That is, in spite of shading, new locations 
for dependencies, cleaned-up exposure of likely transitively included libs, 
etc., there will be no need for downstreamers to add new compensatory code, 
no need for us to release special versions to work with hadoop-2.Z, and 
no need for callouts in code or for us to educate our communities that if on 
hadoop-2.X do this... but if on hadoop-2.Y do that? Or are we talking about 
something else? (And "downstreamers, you are doing it wrong" is not allowed.)

 Classpath isolation for downstream clients
 --

 Key: HADOOP-11656
 URL: https://issues.apache.org/jira/browse/HADOOP-11656
 Project: Hadoop Common
  Issue Type: New Feature
Reporter: Sean Busbey
Assignee: Sean Busbey
  Labels: classloading, classpath, dependencies

 Currently, Hadoop exposes downstream clients to a variety of third party 
 libraries. As our code base grows and matures we increase the set of 
 libraries we rely on. At the same time, as our user base grows we increase 
 the likelihood that some downstream project will run into a conflict while 
 attempting to use a different version of some library we depend on. This has 
 already happened with i.e. Guava several times for HBase, Accumulo, and Spark 
 (and I'm sure others).
 While YARN-286 and MAPREDUCE-1700 provided an initial effort, they default to 
 off and they don't do anything to help dependency conflicts on the driver 
 side or for folks talking to HDFS directly. This should serve as an umbrella 
 for changes needed to do things thoroughly on the next major version.
 We should ensure that downstream clients
 1) can depend on a client artifact for each of HDFS, YARN, and MapReduce that 
 doesn't pull in any third party dependencies
 2) only see our public API classes (or as close to this as feasible) when 
 executing user provided code, whether client side in a launcher/driver or on 
 the cluster in a container or within MR.
 This provides us with a double benefit: users get less grief when they want 
 to run substantially ahead or behind the versions we need and the project is 
 freer to change our own dependency versions because they'll no longer be in 
 our compatibility promises.
 Project specific task jiras to follow after I get some justifying use cases 
 written in the comments.





[jira] [Commented] (HADOOP-11363) Hadoop maven surefire-plugin uses must set heap size

2014-12-08 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11363?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14238432#comment-14238432
 ] 

stack commented on HADOOP-11363:


lgtm

Do you want to just go for 4G rather than 2G, or are you thinking that if we OOME on 2G, 
we'll take a look at the dumped heaps to see what is going on?

Thanks [~ste...@apache.org]

 Hadoop maven surefire-plugin uses must set heap size
 

 Key: HADOOP-11363
 URL: https://issues.apache.org/jira/browse/HADOOP-11363
 Project: Hadoop Common
  Issue Type: Bug
  Components: build
Affects Versions: 2.7.0
 Environment: java 8
Reporter: Steve Loughran
Assignee: Steve Loughran
 Attachments: HADOOP-11363-001.patch


 Some of the hadoop tests (especially HBase) are running out of memory on Java 
 8, due to there not being enough heap for them
 The heap size of surefire test runs is *not* set in {{MAVEN_OPTS}}, it needs 
 to be explicitly set as an argument to the test run.
 I propose
 # {{hadoop-project/pom.xml}} defines the maximum heap size and test timeouts 
 for surefire builds as properties
 # modules which run tests use these values for their memory & timeout 
 settings.
 # these modules should also set the surefire version they want to use





[jira] [Commented] (HADOOP-11363) Hadoop maven surefire-plugin uses must set heap size

2014-12-08 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11363?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14238652#comment-14238652
 ] 

stack commented on HADOOP-11363:


+1

 Hadoop maven surefire-plugin uses must set heap size
 

 Key: HADOOP-11363
 URL: https://issues.apache.org/jira/browse/HADOOP-11363
 Project: Hadoop Common
  Issue Type: Bug
  Components: build
Affects Versions: 2.7.0
 Environment: java 8
Reporter: Steve Loughran
Assignee: Steve Loughran
 Attachments: HADOOP-11363-001.patch, HADOOP-11363-002.patch


 Some of the hadoop tests (especially HBase) are running out of memory on Java 
 8, due to there not being enough heap for them
 The heap size of surefire test runs is *not* set in {{MAVEN_OPTS}}, it needs 
 to be explicitly set as an argument to the test run.
 I propose
 # {{hadoop-project/pom.xml}} defines the maximum heap size and test timeouts 
 for surefire builds as properties
 # modules which run tests use these values for their memory & timeout 
 settings.
 # these modules should also set the surefire version they want to use





[jira] [Commented] (HADOOP-11301) [optionally] update jmx cache to drop old metrics

2014-12-02 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11301?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14231887#comment-14231887
 ] 

stack commented on HADOOP-11301:


Just to say that this patch also fixes an issue in hbase where, when a region is 
deleted (split or table removed), metrics would get stuck reporting the last 
state of the removed region; the metrics were showing 'ghosts'. Thanks 
[~maysamyabandeh]

 [optionally] update jmx cache to drop old metrics
 -

 Key: HADOOP-11301
 URL: https://issues.apache.org/jira/browse/HADOOP-11301
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Maysam Yabandeh
Assignee: Maysam Yabandeh
 Fix For: 2.7.0

 Attachments: HADOOP-11301.v01.patch, HADOOP-11301.v02.patch, 
 HADOOP-11301.v03.patch, HADOOP-11301.v04.patch


 MetricsSourceAdapter::updateJmxCache() skips updating the info cache if no 
 new metric is added since last time:
 {code}
   int oldCacheSize = attrCache.size();
   int newCacheSize = updateAttrCache();
   if (oldCacheSize < newCacheSize) {
 updateInfoCache();
   }
 {code}
 This behavior is not desirable in some applications. For example nntop 
 (HDFS-6982) reports the top users via jmx. The list is updated after each 
 report. The previously reported top users hence should be removed from the 
 cache upon each report request.
 In our production run of nntop we made a change to ignore the size check and 
 always perform updateInfoCache. I am planning to submit a patch including 
 this change. The feature can be enabled by a configuration parameter.
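The change described in the last paragraph can be sketched as follows. The field and method names here are illustrative, not the actual MetricsSourceAdapter code: with the new flag on, the info cache is rebuilt on every refresh instead of only when the attribute count grows, so removed metrics stop being reported.

```java
// Illustrative sketch of the proposed behavior change (not the real class).
public class JmxCacheSketch {
    private final boolean alwaysUpdateInfoCache;  // the new config parameter
    int infoCacheUpdates = 0;                     // counts updateInfoCache() calls
    private int attrCacheSize = 0;

    JmxCacheSketch(boolean alwaysUpdateInfoCache) {
        this.alwaysUpdateInfoCache = alwaysUpdateInfoCache;
    }

    void updateJmxCache(int newAttrCacheSize) {
        int oldCacheSize = attrCacheSize;
        attrCacheSize = newAttrCacheSize;
        // Legacy behavior refreshes only when the cache grew; the flag
        // forces a refresh every time, dropping stale entries.
        if (alwaysUpdateInfoCache || oldCacheSize < newAttrCacheSize) {
            infoCacheUpdates++;  // stands in for updateInfoCache()
        }
    }
}
```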





[jira] [Updated] (HADOOP-11301) [optionally] update jmx cache to drop old metrics

2014-12-01 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11301?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HADOOP-11301:
---
   Resolution: Fixed
Fix Version/s: 2.7.0
 Hadoop Flags: Reviewed
   Status: Resolved  (was: Patch Available)

Pushed to branch-2 and trunk (only took me three attempts). Thanks for the 
patch [~maysamyabandeh]






[jira] [Commented] (HADOOP-11301) [optionally] update jmx cache to drop old metrics

2014-12-01 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11301?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14231091#comment-14231091
 ] 

stack commented on HADOOP-11301:


Thanks [~ram_krish] Fixed.






[jira] [Commented] (HADOOP-11301) [optionally] update jmx cache to drop old metrics

2014-11-26 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11301?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14226768#comment-14226768
 ] 

stack commented on HADOOP-11301:


We cache jmx values for 10ms only. Too short. The value passed in here is the 
value of the overloaded PERIOD_KEY, which is DEFAULT_PERIOD: '10' (it says 10 
'seconds', but what is passed in here is '10'). TODO: a config explicitly about 
jmx cache time, not a reuse of the general PERIOD_KEY. As is, this patch has 
us doing a little more work -- each invocation of updateJmxCache beyond 10ms 
has us doing this: infoCache = infoBuilder.reset(lastRecs).get() (lastRecs was 
being recalculated regardless, even before this patch) -- but that's the point: 
jmx stats were stale. I'm +1 on the patch. Will commit in a day unless there is an 
objection.






[jira] [Commented] (HADOOP-11301) [optionally] update jmx cache to drop old metrics

2014-11-24 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11301?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14224164#comment-14224164
 ] 

stack commented on HADOOP-11301:


I tried the patch and it seems like it is doing as expected,  [~maysamyabandeh].

My test is basic. I just invoke the /jmx page in the UI.  Before the patch, 
with this simple logging inserted:

{code}
@@ -175,7 +175,9 @@ private void updateJmxCache() {
 synchronized(this) {
   int oldCacheSize = attrCache.size();
   int newCacheSize = updateAttrCache();
+  LOG.info("Called updateAttrCache");
   if (oldCacheSize < newCacheSize) {
+LOG.info("Called updateInfoCache");
 updateInfoCache();
   }
   jmxCacheTS = Time.now();
{code}

... I would see this output on each request in the unpatched case.

{code}
2014-11-24 22:44:41,688 INFO  [1851255134@qtp-1525844775-1] 
impl.MetricsSourceAdapter: Called updateAttrCache
2014-11-24 22:44:41,689 INFO  [1851255134@qtp-1525844775-1] 
impl.MetricsSourceAdapter: Called updateAttrCache
2014-11-24 22:44:41,693 INFO  [1851255134@qtp-1525844775-1] 
impl.MetricsSourceAdapter: Called updateAttrCache
2014-11-24 22:44:41,694 INFO  [1851255134@qtp-1525844775-1] 
impl.MetricsSourceAdapter: Called updateAttrCache
2014-11-24 22:44:41,694 INFO  [1851255134@qtp-1525844775-1] 
impl.MetricsSourceAdapter: Called updateAttrCache
2014-11-24 22:44:41,694 INFO  [1851255134@qtp-1525844775-1] 
impl.MetricsSourceAdapter: Called updateInfoCache
2014-11-24 22:44:41,696 INFO  [1851255134@qtp-1525844775-1] 
impl.MetricsSourceAdapter: Called updateAttrCache
2014-11-24 22:44:41,699 INFO  [1851255134@qtp-1525844775-1] 
impl.MetricsSourceAdapter: Called updateAttrCache
2014-11-24 22:44:41,699 INFO  [1851255134@qtp-1525844775-1] 
impl.MetricsSourceAdapter: Called updateInfoCache
2014-11-24 22:44:41,699 INFO  [1851255134@qtp-1525844775-1] 
impl.MetricsSourceAdapter: Called updateAttrCache
{code}

I then applied the patch plus logging and would get this output on each page 
invocation:

{code}
2014-11-24 23:20:49,145 INFO  [2113243119@qtp-1525844775-0] 
impl.MetricsSourceAdapter: Called updateAttrCache
2014-11-24 23:20:49,145 INFO  [2113243119@qtp-1525844775-0] 
impl.MetricsSourceAdapter: Called updateInfoCache
2014-11-24 23:20:49,146 INFO  [2113243119@qtp-1525844775-0] 
impl.MetricsSourceAdapter: Called updateAttrCache
2014-11-24 23:20:49,146 INFO  [2113243119@qtp-1525844775-0] 
impl.MetricsSourceAdapter: Called updateInfoCache
2014-11-24 23:20:49,154 INFO  [2113243119@qtp-1525844775-0] 
impl.MetricsSourceAdapter: Called updateAttrCache
2014-11-24 23:20:49,154 INFO  [2113243119@qtp-1525844775-0] 
impl.MetricsSourceAdapter: Called updateInfoCache
2014-11-24 23:20:49,155 INFO  [2113243119@qtp-1525844775-0] 
impl.MetricsSourceAdapter: Called updateAttrCache
2014-11-24 23:20:49,156 INFO  [2113243119@qtp-1525844775-0] 
impl.MetricsSourceAdapter: Called updateInfoCache
2014-11-24 23:20:49,157 INFO  [2113243119@qtp-1525844775-0] 
impl.MetricsSourceAdapter: Called updateAttrCache
2014-11-24 23:20:49,157 INFO  [2113243119@qtp-1525844775-0] 
impl.MetricsSourceAdapter: Called updateInfoCache
2014-11-24 23:20:49,193 INFO  [2113243119@qtp-1525844775-0] 
impl.MetricsSourceAdapter: Called updateAttrCache
2014-11-24 23:20:49,193 INFO  [2113243119@qtp-1525844775-0] 
impl.MetricsSourceAdapter: Called updateInfoCache
2014-11-24 23:20:49,203 INFO  [2113243119@qtp-1525844775-0] 
impl.MetricsSourceAdapter: Called updateAttrCache
2014-11-24 23:20:49,203 INFO  [2113243119@qtp-1525844775-0] 
impl.MetricsSourceAdapter: Called updateInfoCache
2014-11-24 23:20:49,203 INFO  [2113243119@qtp-1525844775-0] 
impl.MetricsSourceAdapter: Called updateAttrCache
2014-11-24 23:20:49,203 INFO  [2113243119@qtp-1525844775-0] 
impl.MetricsSourceAdapter: Called updateInfoCache
{code}

It's the same amount of updateAttrCache updates, but we are now doing a call to 
updateInfoCache for each updateAttrCache call. That's as expected, I believe.

I was thinking we were refreshing the attrs and the bean on each invocation, so 
I added this:

{code}
@@ -163,6 +163,7 @@ private void updateJmxCache() {
 }
   }
   else {
+LOG.info("Returned w/o updateAttrCache");
 return;
   }
 }
{code}

... and then saw this:

{code}
2014-11-24 23:45:22,002 INFO  [2113243119@qtp-1525844775-0] 
impl.MetricsSourceAdapter: Returned w/o updateAttrCache
2014-11-24 23:45:22,002 INFO  [2113243119@qtp-1525844775-0] 
impl.MetricsSourceAdapter: Returned w/o updateAttrCache
2014-11-24 23:45:22,002 INFO  [2113243119@qtp-1525844775-0] 
impl.MetricsSourceAdapter: Returned w/o updateAttrCache
2014-11-24 23:45:22,002 INFO  [2113243119@qtp-1525844775-0] 
impl.MetricsSourceAdapter: Returned w/o updateAttrCache
2014-11-24 23:45:22,002 INFO  [2113243119@qtp-1525844775-0] 
impl.MetricsSourceAdapter: Returned w/o updateAttrCache
2014-11-24 23:45:22,002 INFO  [2113243119@qtp-1525844775-0] 
{code}

[jira] [Commented] (HADOOP-11301) [optionally] update jmx cache to drop old metrics

2014-11-21 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11301?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14221485#comment-14221485
 ] 

stack commented on HADOOP-11301:


Looks much cleaner. Should these two statements be swapped?

{code}
176   updateAttrCache();
177   if (getAllMetrics) {
{code}

... so we only call updateAttrCache if getAllMetrics is true? 

 [optionally] update jmx cache to drop old metrics
 -

 Key: HADOOP-11301
 URL: https://issues.apache.org/jira/browse/HADOOP-11301
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Maysam Yabandeh
Assignee: Maysam Yabandeh
 Attachments: HADOOP-11301.v01.patch, HADOOP-11301.v02.patch


 MetricsSourceAdapter::updateJmxCache() skips updating the info cache if no 
 new metric is added since last time:
 {code}
   int oldCacheSize = attrCache.size();
   int newCacheSize = updateAttrCache();
   if (oldCacheSize < newCacheSize) {
 updateInfoCache();
   }
 {code}
 This behavior is not desirable in some applications. For example nntop 
 (HDFS-6982) reports the top users via jmx. The list is updated after each 
 report. The previously reported top users hence should be removed from the 
 cache upon each report request.
 In our production run of nntop we made a change to ignore the size check and 
 always perform updateInfoCache. I am planning to submit a patch including 
 this change. The feature can be enabled by a configuration parameter.
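The skip-on-size check described above can be sketched as a toy model. The class, field, and flag names below (JmxCacheSketch, alwaysUpdateInfoCache) are illustrative stand-ins for the real MetricsSourceAdapter internals and the proposed configuration parameter, not actual Hadoop identifiers:

```java
import java.util.HashSet;
import java.util.Set;

public class JmxCacheSketch {
    private final Set<String> attrCache = new HashSet<>();
    private final boolean alwaysUpdateInfoCache; // proposed config switch (illustrative)
    int infoCacheUpdates = 0;                    // counts updateInfoCache() calls

    JmxCacheSketch(boolean alwaysUpdateInfoCache) {
        this.alwaysUpdateInfoCache = alwaysUpdateInfoCache;
    }

    void updateJmxCache(Set<String> latestMetricNames) {
        int oldCacheSize = attrCache.size();
        attrCache.clear();                 // stands in for updateAttrCache()
        attrCache.addAll(latestMetricNames);
        int newCacheSize = attrCache.size();
        // Stock behavior: refresh MBean info only when the cache grew, so
        // dropped metrics (e.g. nntop's previous top users) linger in JMX.
        if (alwaysUpdateInfoCache || oldCacheSize < newCacheSize) {
            infoCacheUpdates++;            // stands in for updateInfoCache()
        }
    }

    public static void main(String[] args) {
        JmxCacheSketch stock = new JmxCacheSketch(false);
        stock.updateJmxCache(Set.of("m1", "m2"));
        stock.updateJmxCache(Set.of("m1"));      // m2 dropped: cache shrank
        System.out.println(stock.infoCacheUpdates);   // stays at 1

        JmxCacheSketch patched = new JmxCacheSketch(true);
        patched.updateJmxCache(Set.of("m1", "m2"));
        patched.updateJmxCache(Set.of("m1"));
        System.out.println(patched.infoCacheUpdates); // now 2
    }
}
```

Run as-is: the stock path never refreshes the info cache after a metric disappears, while the patched path refreshes on every update, which is exactly why nntop's stale top users would be dropped.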



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11301) [optionally] update jmx cache to drop old metrics

2014-11-21 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11301?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14221610#comment-14221610
 ] 

stack commented on HADOOP-11301:


Let's go with your conservative suggestion.  It will make the test pass, but in 
practice we will return early from updateJmxCache if jmxCacheTTL has 
not yet elapsed.  Is that how you read it [~maysamyabandeh]? Thanks.
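The TTL-gated early return described here can be sketched with a minimal toy. Field names loosely mirror MetricsSourceAdapter (jmxCacheTTL, jmxCacheTS) but are illustrative, not the real implementation:

```java
public class JmxTtlSketch {
    private final long jmxCacheTtlMillis;  // loosely mirrors jmxCacheTTL
    private long jmxCacheTs = 0;           // timestamp of the last refresh
    int refreshes = 0;

    JmxTtlSketch(long ttlMillis) {
        this.jmxCacheTtlMillis = ttlMillis;
    }

    // Passing "now" in explicitly keeps the sketch deterministic; the real
    // code reads the clock itself.
    void updateJmxCache(long nowMillis) {
        if (nowMillis - jmxCacheTs < jmxCacheTtlMillis) {
            return;            // TTL not yet elapsed: keep the cached beans
        }
        jmxCacheTs = nowMillis;
        refreshes++;           // stands in for the actual cache refresh
    }

    public static void main(String[] args) {
        JmxTtlSketch s = new JmxTtlSketch(10_000);
        s.updateJmxCache(10_000);  // TTL elapsed since t=0: refresh
        s.updateJmxCache(15_000);  // within TTL: early return
        s.updateJmxCache(22_000);  // TTL elapsed again: refresh
        System.out.println(s.refreshes); // prints 2
    }
}
```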

 [optionally] update jmx cache to drop old metrics
 -

 Key: HADOOP-11301
 URL: https://issues.apache.org/jira/browse/HADOOP-11301
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Maysam Yabandeh
Assignee: Maysam Yabandeh
 Attachments: HADOOP-11301.v01.patch, HADOOP-11301.v02.patch, 
 HADOOP-11301.v03.patch


 MetricsSourceAdapter::updateJmxCache() skips updating the info cache if no 
 new metric is added since last time:
 {code}
   int oldCacheSize = attrCache.size();
   int newCacheSize = updateAttrCache();
   if (oldCacheSize < newCacheSize) {
 updateInfoCache();
   }
 {code}
 This behavior is not desirable in some applications. For example nntop 
 (HDFS-6982) reports the top users via jmx. The list is updated after each 
 report. The previously reported top users hence should be removed from the 
 cache upon each report request.
 In our production run of nntop we made a change to ignore the size check and 
 always perform updateInfoCache. I am planning to submit a patch including 
 this change. The feature can be enabled by a configuration parameter.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11301) [optionally] update jmx cache to drop old metrics

2014-11-20 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11301?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14220356#comment-14220356
 ] 

stack commented on HADOOP-11301:


Agree with [~andrew.wang]. No need for a config.  Can you just update the info 
bean if getAllMetrics is true and otherwise leave the old bean in place?  Thanks.

 [optionally] update jmx cache to drop old metrics
 -

 Key: HADOOP-11301
 URL: https://issues.apache.org/jira/browse/HADOOP-11301
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Maysam Yabandeh
Assignee: Maysam Yabandeh
 Attachments: HADOOP-11301.v01.patch


 MetricsSourceAdapter::updateJmxCache() skips updating the info cache if no 
 new metric is added since last time:
 {code}
   int oldCacheSize = attrCache.size();
   int newCacheSize = updateAttrCache();
   if (oldCacheSize < newCacheSize) {
 updateInfoCache();
   }
 {code}
 This behavior is not desirable in some applications. For example nntop 
 (HDFS-6982) reports the top users via jmx. The list is updated after each 
 report. The previously reported top users hence should be removed from the 
 cache upon each report request.
 In our production run of nntop we made a change to ignore the size check and 
 always perform updateInfoCache. I am planning to submit a patch including 
 this change. The feature can be enabled by a configuration parameter.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-10938) Remove thread-safe description in PositionedReadable javadoc

2014-11-03 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10938?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14195829#comment-14195829
 ] 

stack commented on HADOOP-10938:


I think this issue is going in the wrong direction.  PositionedReadable should 
be thread-safe.  It has been claimed so since its beginnings, and the canonical 
implementation, HDFS, has always been thread-safe, if not very 'live'.  Rather, 
I'd suggest we mark implementations that are not thread-safe as at fault and fix 
or undo their PositionedReadable claims if they can't do positioned reads in a 
thread-safe way.

 Remove thread-safe description in PositionedReadable javadoc
 

 Key: HADOOP-10938
 URL: https://issues.apache.org/jira/browse/HADOOP-10938
 Project: Hadoop Common
  Issue Type: Bug
  Components: documentation
Affects Versions: 2.6.0
Reporter: Yi Liu
Assignee: Yi Liu
 Attachments: HADOOP-10938.001.patch


 According to discussion in HDFS-6813, we may need to remove thread-safe 
 description in PositionedReadable javadoc, since DFSInputStream, 
 WebhdfsFileSystem#inputStream, HarInputStream don't implement them with 
 thread-safe.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11186) documentation should talk about hadoop.htrace.spanreceiver.classes, not hadoop.trace.spanreceiver.classes

2014-10-29 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11186?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14188803#comment-14188803
 ] 

stack commented on HADOOP-11186:


+1 Small doc fix. Thanks [~cmccabe]

 documentation should talk about hadoop.htrace.spanreceiver.classes, not 
 hadoop.trace.spanreceiver.classes
 -

 Key: HADOOP-11186
 URL: https://issues.apache.org/jira/browse/HADOOP-11186
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.7.0
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe
Priority: Minor
 Attachments: 0001-HADOOP-11186.patch


 The documentation should talk about hadoop.htrace.spanreceiver.classes, not 
 hadoop.trace.spanreceiver.classes (note the H)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-10410) Support ioprio_set in NativeIO

2014-03-21 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10410?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13943307#comment-13943307
 ] 

stack commented on HADOOP-10410:


bq.  I hope we can just do technical discussion in the jira. 

We can.

[~xieliang007] I think [~wheat9] may not be up on how hbase uses hdfs.  Give 
him some slack.  Do you agree then that his suggestion of a QoS pool, 
HDFS-5727, is the right way to proceed?

Thanks.

 Support ioprio_set in NativeIO
 --

 Key: HADOOP-10410
 URL: https://issues.apache.org/jira/browse/HADOOP-10410
 Project: Hadoop Common
  Issue Type: New Feature
  Components: native
Affects Versions: 3.0.0, 2.4.0
Reporter: Liang Xie
Assignee: Liang Xie
 Attachments: HADOOP-10410.txt


 It would be better for the HBase application if the HDFS layer provided 
 fine-grained IO request priority. Most modern kernels should support the 
 ioprio_set system call now.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-10410) Support ioprio_set in NativeIO

2014-03-21 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10410?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13943894#comment-13943894
 ] 

stack commented on HADOOP-10410:


bq. Why not get some experimental results, and then decide whether to commit it?

[~xieliang007] Do you have a hacked-up hbase patch that makes use of these 
nativeio additions?  Good on you.

 Support ioprio_set in NativeIO
 --

 Key: HADOOP-10410
 URL: https://issues.apache.org/jira/browse/HADOOP-10410
 Project: Hadoop Common
  Issue Type: New Feature
  Components: native
Affects Versions: 3.0.0, 2.4.0
Reporter: Liang Xie
Assignee: Liang Xie
 Attachments: HADOOP-10410.txt


 It would be better for the HBase application if the HDFS layer provided 
 fine-grained IO request priority. Most modern kernels should support the 
 ioprio_set system call now.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-10015) UserGroupInformation prints out excessive ERROR warnings

2014-03-20 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10015?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13942738#comment-13942738
 ] 

stack commented on HADOOP-10015:


I'd be +1 on DEBUG (Opening a new issue to discuss what level a message should 
be logged at is OTT).

 UserGroupInformation prints out excessive ERROR warnings
 

 Key: HADOOP-10015
 URL: https://issues.apache.org/jira/browse/HADOOP-10015
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Haohui Mai
Assignee: Nicolas Liochon
 Attachments: 10015.v3.patch, 10015.v4.patch, HADOOP-10015.000.patch, 
 HADOOP-10015.001.patch, HADOOP-10015.002.patch


 In UserGroupInformation::doAs(), it prints out a log at ERROR level whenever 
 it catches an exception.
 However, it prints benign warnings in the following paradigm:
 {noformat}
 try {
   ugi.doAs(new PrivilegedExceptionAction<FileStatus>() {
     @Override
     public FileStatus run() throws Exception {
       return fs.getFileStatus(nonExist);
     }
   });
 } catch (FileNotFoundException e) {
 }
 {noformat}
 For example, FileSystem#exists() follows this paradigm. Distcp uses this 
 paradigm too. The exception is expected, so there should be no ERROR 
 logs printed in the namenode logs.
 Currently, the user quickly finds the namenode log filled with _benign_ 
 ERROR logs when he or she runs distcp in a secure setup. This 
 behavior confuses the operators.
 This jira proposes to move the log to DEBUG level.
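The paradigm can be shown with a JDK-only sketch. The doAs helper below is a stand-in that mimics how UserGroupInformation.doAs wraps checked exceptions in a PrivilegedActionException; it is not the Hadoop implementation itself:

```java
import java.io.FileNotFoundException;
import java.security.PrivilegedActionException;
import java.security.PrivilegedExceptionAction;

public class ExpectedExceptionSketch {
    // Stand-in for UserGroupInformation.doAs: wraps checked exceptions.
    static <T> T doAs(PrivilegedExceptionAction<T> action)
            throws PrivilegedActionException {
        try {
            return action.run();
        } catch (RuntimeException e) {
            throw e;                          // runtime errors pass through
        } catch (Exception e) {
            // Real UGI.doAs logs at ERROR here even when the caller expects
            // the exception -- the behavior this jira wants moved to DEBUG.
            throw new PrivilegedActionException(e);
        }
    }

    public static void main(String[] args) {
        try {
            doAs(() -> {
                // Simulates fs.getFileStatus(nonExist) on a missing path.
                throw new FileNotFoundException("nonExist");
            });
        } catch (PrivilegedActionException e) {
            // Benign: callers like FileSystem#exists() treat "not found" as a
            // normal outcome, so nothing should land in the namenode error log.
            System.out.println("expected: " + e.getCause().getMessage());
        }
    }
}
```

The point of the sketch: the exception travels to a caller that catches and discards it, so any ERROR-level logging inside the doAs layer is pure noise.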



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-10410) Support ioprio_set in NativeIO

2014-03-18 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10410?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13939500#comment-13939500
 ] 

stack commented on HADOOP-10410:


Yeah [~xieliang007]... offline Todd suggested trying this.

 Support ioprio_set in NativeIO
 --

 Key: HADOOP-10410
 URL: https://issues.apache.org/jira/browse/HADOOP-10410
 Project: Hadoop Common
  Issue Type: New Feature
  Components: native
Affects Versions: 3.0.0, 2.4.0
Reporter: Liang Xie
Assignee: Liang Xie

 It would be better for the HBase application if the HDFS layer provided 
 fine-grained IO request priority. Most modern kernels should support the 
 ioprio_set system call now.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-10410) Support ioprio_set in NativeIO

2014-03-18 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10410?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13939640#comment-13939640
 ] 

stack commented on HADOOP-10410:


[~xieliang007] It'd be cool to try.  One thought I had was turning off all 
compaction in the hbase process and running the compaction externally (there is 
a compaction 'tool' in hbase) to see how ionice'ing this process does (I don't 
even know if ionice uses ioprio_set?)

 Support ioprio_set in NativeIO
 --

 Key: HADOOP-10410
 URL: https://issues.apache.org/jira/browse/HADOOP-10410
 Project: Hadoop Common
  Issue Type: New Feature
  Components: native
Affects Versions: 3.0.0, 2.4.0
Reporter: Liang Xie
Assignee: Liang Xie

 It would be better for the HBase application if the HDFS layer provided 
 fine-grained IO request priority. Most modern kernels should support the 
 ioprio_set system call now.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HADOOP-10337) ConcurrentModificationException from MetricsDynamicMBeanBase.createMBeanInfo()

2014-03-10 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10337?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HADOOP-10337:
---

   Resolution: Fixed
Fix Version/s: 2.4.0
 Hadoop Flags: Reviewed
   Status: Resolved  (was: Patch Available)

Committed to branch-2.4, branch-2 and to trunk.  Thank you for the patch 
[~xieliang007] and the review Mr [~atm]

 ConcurrentModificationException from MetricsDynamicMBeanBase.createMBeanInfo()
 --

 Key: HADOOP-10337
 URL: https://issues.apache.org/jira/browse/HADOOP-10337
 Project: Hadoop Common
  Issue Type: Bug
  Components: metrics
Affects Versions: 3.0.0, 2.2.0
Reporter: Liang Xie
Assignee: Liang Xie
 Fix For: 2.4.0

 Attachments: HADOOP-10337.txt


 This stack trace came from our HBase 0.94.3 production env:
 2014-02-11,17:34:46,562 ERROR org.apache.hadoop.jmx.JMXJsonServlet: getting 
 attribute 
 tbl.micloud_gallery_albumsharetag_v2.region.96a6d0bc9f0153e0e1ec0318b39ecc45.next_histogram_99th_percentile
  of hadoop:service=RegionServer,name=RegionServerDynamicStatistics threw an 
 exception
 javax.management.RuntimeMBeanException: 
 java.util.ConcurrentModificationException
 at 
 com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.rethrow(DefaultMBeanServerInterceptor.java:856)
 at 
 com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.rethrowMaybeMBeanException(DefaultMBeanServerInterceptor.java:869)
 at 
 com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.getAttribute(DefaultMBeanServerInterceptor.java:670)
 at 
 com.sun.jmx.mbeanserver.JmxMBeanServer.getAttribute(JmxMBeanServer.java:638)
 at 
 org.apache.hadoop.jmx.JMXJsonServlet.writeAttribute(JMXJsonServlet.java:315)
 at 
 org.apache.hadoop.jmx.JMXJsonServlet.listBeans(JMXJsonServlet.java:293)
 at org.apache.hadoop.jmx.JMXJsonServlet.doGet(JMXJsonServlet.java:193)
 at javax.servlet.http.HttpServlet.service(HttpServlet.java:707)
 at javax.servlet.http.HttpServlet.service(HttpServlet.java:820)
 at 
 org.mortbay.jetty.servlet.ServletHolder.handle(ServletHolder.java:511)
 at 
 org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1221)
 at 
 org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter.doFilter(StaticUserWebFilter.java:109)
 at 
 org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
 at 
 org.apache.hadoop.http.HttpServer$QuotingInputFilter.doFilter(HttpServer.java:1057)
 at 
 org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
 at 
 org.mortbay.jetty.servlet.ServletHandler.handle(ServletHandler.java:399)
 at 
 org.mortbay.jetty.security.SecurityHandler.handle(SecurityHandler.java:216)
 at 
 org.mortbay.jetty.servlet.SessionHandler.handle(SessionHandler.java:182)
 at 
 org.mortbay.jetty.handler.ContextHandler.handle(ContextHandler.java:766)
 at 
 org.mortbay.jetty.webapp.WebAppContext.handle(WebAppContext.java:450)
 at 
 org.mortbay.jetty.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:230)
 at 
 org.mortbay.jetty.handler.HandlerWrapper.handle(HandlerWrapper.java:152)
 at org.mortbay.jetty.Server.handle(Server.java:326)
 at 
 org.mortbay.jetty.HttpConnection.handleRequest(HttpConnection.java:542)
 at 
 org.mortbay.jetty.HttpConnection$RequestHandler.headerComplete(HttpConnection.java:928)
 at org.mortbay.jetty.HttpParser.parseNext(HttpParser.java:549)
 at org.mortbay.jetty.HttpParser.parseAvailable(HttpParser.java:212)
 at org.mortbay.jetty.HttpConnection.handle(HttpConnection.java:404)
 at 
 org.mortbay.io.nio.SelectChannelEndPoint.run(SelectChannelEndPoint.java:410)
 at 
 org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582)
 Caused by: java.util.ConcurrentModificationException
 at java.util.HashMap$HashIterator.nextEntry(HashMap.java:793)
 at java.util.HashMap$ValueIterator.next(HashMap.java:822)
 at 
 org.apache.hadoop.metrics.util.MetricsDynamicMBeanBase.createMBeanInfo(MetricsDynamicMBeanBase.java:87)
 at 
 org.apache.hadoop.metrics.util.MetricsDynamicMBeanBase.updateMbeanInfoIfMetricsListChanged(MetricsDynamicMBeanBase.java:78)
 at 
 org.apache.hadoop.metrics.util.MetricsDynamicMBeanBase.getAttribute(MetricsDynamicMBeanBase.java:138)
 at 
 com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.getAttribute(DefaultMBeanServerInterceptor.java:666)
 ... 27 more



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-10313) Script and jenkins job to produce Hadoop release artifacts

2014-01-30 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10313?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13887263#comment-13887263
 ] 

stack commented on HADOOP-10313:


Alejandro, do you want to add a bit of a comment at the head of the script 
explaining what it does and in which context it is used (should you say how to 
use it, since it takes an RC_LABEL)? 

I tried the below manually and it works nicely:

HADOOP_VERSION=`cat pom.xml | grep version | head -1 | sed 's|^ 
*<version>||' | sed 's|</version>.*$||'`

nit: remove the 'for' in the following if you are going to make a new version: 
version for to (from a comment).

I suppose you have the md5 so you can check when you download, which gives you 
some assurance about what it is that you are signing.

Otherwise looks great [~tucu00].  We have scripts building a release.  We 
should try and do as you do here and hoist them up to jenkins too.

 Script and jenkins job to produce Hadoop release artifacts
 --

 Key: HADOOP-10313
 URL: https://issues.apache.org/jira/browse/HADOOP-10313
 Project: Hadoop Common
  Issue Type: Bug
  Components: build
Affects Versions: 2.3.0
Reporter: Alejandro Abdelnur
Assignee: Alejandro Abdelnur
 Attachments: create-release.sh


 As discussed in the dev mailing lists, we should have a jenkins job to build 
 the release artifacts.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HADOOP-10313) Script and jenkins job to produce Hadoop release artifacts

2014-01-30 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10313?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13887474#comment-13887474
 ] 

stack commented on HADOOP-10313:


v2 lgtm

 Script and jenkins job to produce Hadoop release artifacts
 --

 Key: HADOOP-10313
 URL: https://issues.apache.org/jira/browse/HADOOP-10313
 Project: Hadoop Common
  Issue Type: Bug
  Components: build
Affects Versions: 2.3.0
Reporter: Alejandro Abdelnur
Assignee: Alejandro Abdelnur
 Fix For: 2.3.0

 Attachments: HADOOP-10313.patch, create-release.sh, create-release.sh


 As discussed in the dev mailing lists, we should have a jenkins job to build 
 the release artifacts.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HADOOP-10255) Rename HttpServer to HttpServer2 to retain older HttpServer in branch-2 for compatibility

2014-01-28 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10255?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13884262#comment-13884262
 ] 

stack commented on HADOOP-10255:


[~sureshms] Thanks for committing.  On 'We need to agree upon a release to 
align this change.', let's align on hadoop3 (hbase 0.96/0.98 depend on 
httpserver and should be able to run on any 2.x hadoops).  Thanks.

 Rename HttpServer to HttpServer2 to retain older HttpServer in branch-2 for 
 compatibility
 -

 Key: HADOOP-10255
 URL: https://issues.apache.org/jira/browse/HADOOP-10255
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.3.0
Reporter: Haohui Mai
Assignee: Haohui Mai
Priority: Blocker
 Fix For: 2.3.0

 Attachments: HADOOP-10255-branch2.000.patch, HADOOP-10255.000.patch, 
 HADOOP-10255.001.patch, HADOOP-10255.002.patch, HADOOP-10255.003.patch, 
 HADOOP-10255.003.patch


 As suggested in HADOOP-10253, HBase needs a temporary copy of {{HttpServer}} 
 from branch-2.2 to make sure it works across multiple 2.x releases.
 This patch renames the current {{HttpServer}} into {{HttpServer2}}, and brings 
  the {{HttpServer}} in branch-2.2 into the repository.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)

