[jira] [Commented] (FLINK-35440) unable to connect tableau to jdbc flink url using flink sql driver

2024-05-23 Thread Zach (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-35440?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17849132#comment-17849132
 ] 

Zach commented on FLINK-35440:
--

After the above pull request, a new error is seen:
{code}
{"ts":"2024-05-23T20:59:19.463","pid":97497,"tid":"7120f","sev":"info","req":"-","sess":"-","site":"-","user":"-","k":"end-protocol.query","l":{},"a":{"depth":4,"elapsed":0.995,"exclusive":0.995,"id":"P8RI+WqHEfuLJSiVnHo5Mu","name":"protocol.query","rk":"exception","root":"DhXdXdGsUU/JYsr0MZE8iE","rv":{"e-code":"0xFAB9A2C5","e-source":"NeedsClassification","e-status-code":"2","msg":"\"java.lang.ClassCastException: class java.lang.Integer cannot be cast to class java.lang.Long (java.lang.Integer and java.lang.Long are in module java.base of loader 'bootstrap')\n\"","type":"ConnectivityException"},"sponsor":"JhI/Z+7h0BrIJHSP0v9JKb","type":"end"},"v":{"cols":0,"is-command":false,"protocol-class":"genericjdbc","protocol-id":2,"query-category":"Metadata","query-hash":2970729448,"query-tags":"","query-trunc":"SELECT 1","rows":0},"ctx":{}}
{code}

> unable to connect tableau to jdbc flink url using flink sql driver
> --
>
> Key: FLINK-35440
> URL: https://issues.apache.org/jira/browse/FLINK-35440
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / JDBC
>Affects Versions: 1.19.0, 1.20.0
>Reporter: Zach
>Priority: Minor
>  Labels: pull-request-available
>
> Tableau 2023.1 using 
> [https://repo.maven.apache.org/maven2/org/apache/flink/flink-sql-jdbc-driver-bundle/]
>  version 1.19.0 yields the following error when a connection is established 
> to a local flink sql cluster using the uri 
> {{jdbc:flink://localhost:8083}}
> {code}
> {"ts":"2024-05-23T14:21:05.858","pid":12172,"tid":"6a70","sev":"error","req":"-","sess":"-","site":"-","user":"-","k":"jdbc-error","e":{"excp-error-code":"0xFAB9A2C5","excp-source":"NeedsClassification","excp-status-code":"UNKNOWN"},"v":{"context":"GrpcProtocolProxy::IsConnected (D:\\tc\\work\\t231\\g_pc\\modules\\connectors\\tabmixins\\main\\db\\GrpcProtocolProxy.cpp:456)","driver-name":"org.apache.flink.table.jdbc.FlinkDriver","driver-version":"1.19.0","error-code":"0","error-messages":["FlinkConnection#isValid is not supported yet."],"grpc-status-code":"2","protocol-id":3,"sql-state":"0"
> {code}





[jira] [Updated] (HUDI-7785) Keep the APIs in utilities module the same as before HoodieStorage abstraction

2024-05-23 Thread Ethan Guo (Jira)


 [ 
https://issues.apache.org/jira/browse/HUDI-7785?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ethan Guo updated HUDI-7785:

Description: BaseErrorTableWriter, HoodieStreamer, StreamSync, etc., are 
public API classes and contain public API methods, which should be kept the 
same as before.  (was: BaseErrorTableWriter is a public API class which should 
be kept the same as before.)

> Keep the APIs in utilities module the same as before HoodieStorage abstraction
> --
>
> Key: HUDI-7785
> URL: https://issues.apache.org/jira/browse/HUDI-7785
> Project: Apache Hudi
>  Issue Type: Bug
>Reporter: Ethan Guo
>Assignee: Ethan Guo
>Priority: Blocker
>  Labels: hoodie-storage
> Fix For: 0.15.0, 1.0.0
>
>
> BaseErrorTableWriter, HoodieStreamer, StreamSync, etc., are public API 
> classes and contain public API methods, which should be kept the same as 
> before.
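A common way to satisfy such a requirement across the HoodieStorage migration is a delegating shim that preserves the old public signature; a sketch with stand-in types (not the actual Hudi signatures):
{code:java}
// Stand-ins for the legacy and new storage types, purely for illustration.
interface FileSystem {}
interface HoodieStorage {}

class ErrorTableWriterShim {
  // New internal entry point introduced by the HoodieStorage abstraction.
  void write(HoodieStorage storage) { /* new code path */ }

  // Pre-existing public signature, kept source- and binary-compatible
  // by delegating to the new entry point.
  void write(FileSystem fs) {
    write(wrap(fs));
  }

  private HoodieStorage wrap(FileSystem fs) {
    return new HoodieStorage() {}; // hypothetical conversion
  }
}
{code}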





[jira] [Updated] (HUDI-7785) Keep the APIs in utilities module the same as before HoodieStorage abstraction

2024-05-23 Thread Ethan Guo (Jira)


 [ 
https://issues.apache.org/jira/browse/HUDI-7785?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ethan Guo updated HUDI-7785:

Summary: Keep the APIs in utilities module the same as before HoodieStorage 
abstraction  (was: Keep the BaseErrorTableWriter APIs the same as before 
HoodieStorage abstraction)

> Keep the APIs in utilities module the same as before HoodieStorage abstraction
> --
>
> Key: HUDI-7785
> URL: https://issues.apache.org/jira/browse/HUDI-7785
> Project: Apache Hudi
>  Issue Type: Bug
>Reporter: Ethan Guo
>Assignee: Ethan Guo
>Priority: Blocker
>  Labels: hoodie-storage
> Fix For: 0.15.0, 1.0.0
>
>
> BaseErrorTableWriter is a public API class which should be kept the same as 
> before.





[jira] [Closed] (HUDI-4491) Re-enable TestHoodieFlinkQuickstart

2024-05-23 Thread Danny Chen (Jira)


 [ 
https://issues.apache.org/jira/browse/HUDI-4491?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Danny Chen closed HUDI-4491.

Resolution: Fixed

Fixed via master branch: 8d4a35b1f2e60457cc4316b82c0e1b221ac1ca7e

> Re-enable TestHoodieFlinkQuickstart 
> 
>
> Key: HUDI-4491
> URL: https://issues.apache.org/jira/browse/HUDI-4491
> Project: Apache Hudi
>  Issue Type: Bug
>Reporter: Shawn Chang
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.0.0
>
>
> This test was disabled earlier due to its flakiness. We need to re-enable it.





[jira] [Resolved] (SCB-2880) able to inherit trace in edge service from web

2024-05-23 Thread liubao (Jira)


 [ 
https://issues.apache.org/jira/browse/SCB-2880?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

liubao resolved SCB-2880.
-
Resolution: Fixed

> able to inherit trace in edge service from web
> --
>
> Key: SCB-2880
> URL: https://issues.apache.org/jira/browse/SCB-2880
> Project: Apache ServiceComb
>  Issue Type: New Feature
>  Components: Java-Chassis
>Reporter: liubao
>Assignee: liubao
>Priority: Major
> Fix For: java-chassis-3.1.2
>
>






[jira] [Resolved] (SCB-2879) tracing supporting write local logs and improve trace information

2024-05-23 Thread liubao (Jira)


 [ 
https://issues.apache.org/jira/browse/SCB-2879?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

liubao resolved SCB-2879.
-
Resolution: Fixed

> tracing supporting write local logs and improve trace information
> -
>
> Key: SCB-2879
> URL: https://issues.apache.org/jira/browse/SCB-2879
> Project: Apache ServiceComb
>  Issue Type: New Feature
>  Components: Java-Chassis
>Reporter: liubao
>Assignee: liubao
>Priority: Major
> Fix For: java-chassis-3.1.2
>
>






[jira] [Updated] (HUDI-4491) Re-enable TestHoodieFlinkQuickstart

2024-05-23 Thread Danny Chen (Jira)


 [ 
https://issues.apache.org/jira/browse/HUDI-4491?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Danny Chen updated HUDI-4491:
-
Fix Version/s: 1.0.0

> Re-enable TestHoodieFlinkQuickstart 
> 
>
> Key: HUDI-4491
> URL: https://issues.apache.org/jira/browse/HUDI-4491
> Project: Apache Hudi
>  Issue Type: Bug
>Reporter: Shawn Chang
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.0.0
>
>
> This test was disabled earlier due to its flakiness. We need to re-enable it.





[jira] [Resolved] (SCB-2883) support trace id header in response

2024-05-23 Thread liubao (Jira)


 [ 
https://issues.apache.org/jira/browse/SCB-2883?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

liubao resolved SCB-2883.
-
Resolution: Fixed

> support trace id header in response
> ---
>
> Key: SCB-2883
> URL: https://issues.apache.org/jira/browse/SCB-2883
> Project: Apache ServiceComb
>  Issue Type: New Feature
>  Components: Java-Chassis
>Reporter: liubao
>Assignee: liubao
>Priority: Major
> Fix For: java-chassis-3.1.2
>
>






[jira] [Resolved] (SCB-2882) add services discovery api and support global instance id bean

2024-05-23 Thread liubao (Jira)


 [ 
https://issues.apache.org/jira/browse/SCB-2882?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

liubao resolved SCB-2882.
-
Resolution: Fixed

> add services discovery api and support global instance id bean
> --
>
> Key: SCB-2882
> URL: https://issues.apache.org/jira/browse/SCB-2882
> Project: Apache ServiceComb
>  Issue Type: New Feature
>  Components: Java-Chassis
>Reporter: liubao
>Assignee: liubao
>Priority: Major
> Fix For: java-chassis-3.1.2
>
>






[jira] [Updated] (FLINK-35440) unable to connect tableau to jdbc flink url using flink sql driver

2024-05-23 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-35440?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-35440:
---
Labels: pull-request-available  (was: )

> unable to connect tableau to jdbc flink url using flink sql driver
> --
>
> Key: FLINK-35440
> URL: https://issues.apache.org/jira/browse/FLINK-35440
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / JDBC
>Affects Versions: 1.19.0, 1.20.0
>Reporter: Zach
>Priority: Minor
>  Labels: pull-request-available
>
> Tableau 2023.1 using 
> [https://repo.maven.apache.org/maven2/org/apache/flink/flink-sql-jdbc-driver-bundle/]
>  version 1.19.0 yields the following error when a connection is established 
> to a local flink sql cluster using the uri 
> {{jdbc:flink://localhost:8083}}
> {code}
> {"ts":"2024-05-23T14:21:05.858","pid":12172,"tid":"6a70","sev":"error","req":"-","sess":"-","site":"-","user":"-","k":"jdbc-error","e":{"excp-error-code":"0xFAB9A2C5","excp-source":"NeedsClassification","excp-status-code":"UNKNOWN"},"v":{"context":"GrpcProtocolProxy::IsConnected (D:\\tc\\work\\t231\\g_pc\\modules\\connectors\\tabmixins\\main\\db\\GrpcProtocolProxy.cpp:456)","driver-name":"org.apache.flink.table.jdbc.FlinkDriver","driver-version":"1.19.0","error-code":"0","error-messages":["FlinkConnection#isValid is not supported yet."],"grpc-status-code":"2","protocol-id":3,"sql-state":"0"
> {code}





[jira] [Created] (FLINK-35440) unable to connect tableau to jdbc flink url using flink sql driver

2024-05-23 Thread Zach (Jira)
Zach created FLINK-35440:


 Summary: unable to connect tableau to jdbc flink url using flink 
sql driver
 Key: FLINK-35440
 URL: https://issues.apache.org/jira/browse/FLINK-35440
 Project: Flink
  Issue Type: Bug
  Components: Table SQL / JDBC
Affects Versions: 1.19.0, 1.20.0
Reporter: Zach


Tableau 2023.1 using 
[https://repo.maven.apache.org/maven2/org/apache/flink/flink-sql-jdbc-driver-bundle/]
 version 1.19.0 yields the following error when a connection is established to 
a local flink sql cluster using the uri 
{{jdbc:flink://localhost:8083}}

{code}
{"ts":"2024-05-23T14:21:05.858","pid":12172,"tid":"6a70","sev":"error","req":"-","sess":"-","site":"-","user":"-","k":"jdbc-error","e":{"excp-error-code":"0xFAB9A2C5","excp-source":"NeedsClassification","excp-status-code":"UNKNOWN"},"v":{"context":"GrpcProtocolProxy::IsConnected (D:\\tc\\work\\t231\\g_pc\\modules\\connectors\\tabmixins\\main\\db\\GrpcProtocolProxy.cpp:456)","driver-name":"org.apache.flink.table.jdbc.FlinkDriver","driver-version":"1.19.0","error-code":"0","error-messages":["FlinkConnection#isValid is not supported yet."],"grpc-status-code":"2","protocol-id":3,"sql-state":"0"
{code}







[jira] [Updated] (HUDI-7785) Keep the BaseErrorTableWriter APIs the same as before HoodieStorage abstraction

2024-05-23 Thread Ethan Guo (Jira)


 [ 
https://issues.apache.org/jira/browse/HUDI-7785?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ethan Guo updated HUDI-7785:

Description: BaseErrorTableWriter is a public API class which should be 
kept the same as before,   (was: BaseErrorTableWriter is a public API class)

> Keep the BaseErrorTableWriter APIs the same as before HoodieStorage 
> abstraction
> ---
>
> Key: HUDI-7785
> URL: https://issues.apache.org/jira/browse/HUDI-7785
> Project: Apache Hudi
>  Issue Type: Bug
>Reporter: Ethan Guo
>Assignee: Ethan Guo
>Priority: Major
> Fix For: 0.15.0, 1.0.0
>
>
> BaseErrorTableWriter is a public API class which should be kept the same as 
> before, 





[jira] [Updated] (HUDI-7785) Keep the BaseErrorTableWriter APIs the same as before HoodieStorage abstraction

2024-05-23 Thread Ethan Guo (Jira)


 [ 
https://issues.apache.org/jira/browse/HUDI-7785?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ethan Guo updated HUDI-7785:

Description: BaseErrorTableWriter is a public API class which should be 
kept the same as before.  (was: BaseErrorTableWriter is a public API class 
which should be kept the same as before, )

> Keep the BaseErrorTableWriter APIs the same as before HoodieStorage 
> abstraction
> ---
>
> Key: HUDI-7785
> URL: https://issues.apache.org/jira/browse/HUDI-7785
> Project: Apache Hudi
>  Issue Type: Bug
>Reporter: Ethan Guo
>Assignee: Ethan Guo
>Priority: Major
> Fix For: 0.15.0, 1.0.0
>
>
> BaseErrorTableWriter is a public API class which should be kept the same as 
> before.





[jira] [Updated] (HUDI-7785) Keep the BaseErrorTableWriter APIs the same as before HoodieStorage abstraction

2024-05-23 Thread Ethan Guo (Jira)


 [ 
https://issues.apache.org/jira/browse/HUDI-7785?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ethan Guo updated HUDI-7785:

Labels: hoodie-storage  (was: )

> Keep the BaseErrorTableWriter APIs the same as before HoodieStorage 
> abstraction
> ---
>
> Key: HUDI-7785
> URL: https://issues.apache.org/jira/browse/HUDI-7785
> Project: Apache Hudi
>  Issue Type: Bug
>Reporter: Ethan Guo
>Assignee: Ethan Guo
>Priority: Major
>  Labels: hoodie-storage
> Fix For: 0.15.0, 1.0.0
>
>
> BaseErrorTableWriter is a public API class which should be kept the same as 
> before.





[jira] [Updated] (HUDI-7785) Keep the BaseErrorTableWriter APIs the same as before HoodieStorage abstraction

2024-05-23 Thread Ethan Guo (Jira)


 [ 
https://issues.apache.org/jira/browse/HUDI-7785?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ethan Guo updated HUDI-7785:

Priority: Blocker  (was: Major)

> Keep the BaseErrorTableWriter APIs the same as before HoodieStorage 
> abstraction
> ---
>
> Key: HUDI-7785
> URL: https://issues.apache.org/jira/browse/HUDI-7785
> Project: Apache Hudi
>  Issue Type: Bug
>Reporter: Ethan Guo
>Assignee: Ethan Guo
>Priority: Blocker
>  Labels: hoodie-storage
> Fix For: 0.15.0, 1.0.0
>
>
> BaseErrorTableWriter is a public API class which should be kept the same as 
> before.





[jira] [Assigned] (HUDI-7785) Keep the BaseErrorTableWriter APIs the same as before HoodieStorage abstraction

2024-05-23 Thread Ethan Guo (Jira)


 [ 
https://issues.apache.org/jira/browse/HUDI-7785?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ethan Guo reassigned HUDI-7785:
---

Assignee: Ethan Guo

> Keep the BaseErrorTableWriter APIs the same as before HoodieStorage 
> abstraction
> ---
>
> Key: HUDI-7785
> URL: https://issues.apache.org/jira/browse/HUDI-7785
> Project: Apache Hudi
>  Issue Type: Bug
>Reporter: Ethan Guo
>Assignee: Ethan Guo
>Priority: Major
>
> BaseErrorTableWriter is a public API class





[jira] [Updated] (HUDI-7785) Keep the BaseErrorTableWriter APIs the same as before HoodieStorage abstraction

2024-05-23 Thread Ethan Guo (Jira)


 [ 
https://issues.apache.org/jira/browse/HUDI-7785?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ethan Guo updated HUDI-7785:

Description: BaseErrorTableWriter is a public API class

> Keep the BaseErrorTableWriter APIs the same as before HoodieStorage 
> abstraction
> ---
>
> Key: HUDI-7785
> URL: https://issues.apache.org/jira/browse/HUDI-7785
> Project: Apache Hudi
>  Issue Type: Bug
>Reporter: Ethan Guo
>Priority: Major
>
> BaseErrorTableWriter is a public API class





[jira] [Updated] (HUDI-7785) Keep the BaseErrorTableWriter APIs the same as before HoodieStorage abstraction

2024-05-23 Thread Ethan Guo (Jira)


 [ 
https://issues.apache.org/jira/browse/HUDI-7785?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ethan Guo updated HUDI-7785:

Fix Version/s: 0.15.0
   1.0.0

> Keep the BaseErrorTableWriter APIs the same as before HoodieStorage 
> abstraction
> ---
>
> Key: HUDI-7785
> URL: https://issues.apache.org/jira/browse/HUDI-7785
> Project: Apache Hudi
>  Issue Type: Bug
>Reporter: Ethan Guo
>Assignee: Ethan Guo
>Priority: Major
> Fix For: 0.15.0, 1.0.0
>
>
> BaseErrorTableWriter is a public API class





[jira] [Updated] (CASSANDRASC-132) Add restore job progress endpoint and consistency check on restore ranges

2024-05-23 Thread Yifan Cai (Jira)


 [ 
https://issues.apache.org/jira/browse/CASSANDRASC-132?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yifan Cai updated CASSANDRASC-132:
--
Authors: Yifan Cai
Test and Documentation Plan: ci
 Status: Patch Available  (was: Open)

> Add restore job progress endpoint and consistency check on restore ranges
> -
>
> Key: CASSANDRASC-132
> URL: https://issues.apache.org/jira/browse/CASSANDRASC-132
> Project: Sidecar for Apache Cassandra
>  Issue Type: New Feature
>  Components: Rest API
>Reporter: Yifan Cai
>Assignee: Yifan Cai
>Priority: Normal
>  Labels: pull-request-available
>
> In order to support sidecar-managed restore jobs (the sidecar counterpart of 
> Cassandra Analytics bulk writes via S3), the sidecar needs the capability to 
> perform consistency checks on the individual restore ranges, and a new 
> endpoint for the Spark job to query the restore progress.
> The consistency check should be responsive to cluster topology changes. For 
> example, if a new node joins the cluster, the write replica sets of the 
> affected ranges change. The joining node should be able to discover the 
> restore ranges that it owns and restore the data.






[jira] [Created] (HUDI-7785) Keep the BaseErrorTableWriter APIs the same as before HoodieStorage abstraction

2024-05-23 Thread Ethan Guo (Jira)
Ethan Guo created HUDI-7785:
---

 Summary: Keep the BaseErrorTableWriter APIs the same as before 
HoodieStorage abstraction
 Key: HUDI-7785
 URL: https://issues.apache.org/jira/browse/HUDI-7785
 Project: Apache Hudi
  Issue Type: Bug
Reporter: Ethan Guo






--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (CASSANDRASC-132) Add restore job progress endpoint and consistency check on restore ranges

2024-05-23 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/CASSANDRASC-132?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated CASSANDRASC-132:
---
Labels: pull-request-available  (was: )

> Add restore job progress endpoint and consistency check on restore ranges
> -
>
> Key: CASSANDRASC-132
> URL: https://issues.apache.org/jira/browse/CASSANDRASC-132
> Project: Sidecar for Apache Cassandra
>  Issue Type: New Feature
>  Components: Rest API
>Reporter: Yifan Cai
>Assignee: Yifan Cai
>Priority: Normal
>  Labels: pull-request-available
>
> In order to support sidecar-managed restore jobs (the sidecar counterpart of 
> Cassandra Analytics bulk writes via S3), the sidecar needs the capability to 
> perform consistency checks on the individual restore ranges, and a new 
> endpoint for the Spark job to query the restore progress.
> The consistency check should be responsive to cluster topology changes. For 
> example, if a new node joins the cluster, the write replica sets of the 
> affected ranges change. The joining node should be able to discover the 
> restore ranges that it owns and restore the data.






[jira] [Updated] (CASSANDRASC-132) Add restore job progress endpoint and consistency check on restore ranges

2024-05-23 Thread Yifan Cai (Jira)


 [ 
https://issues.apache.org/jira/browse/CASSANDRASC-132?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yifan Cai updated CASSANDRASC-132:
--
Change Category: Semantic
 Complexity: Normal
 Status: Open  (was: Triage Needed)

PR: https://github.com/apache/cassandra-sidecar/pull/123
CI: 
https://app.circleci.com/pipelines/github/yifan-c/cassandra-sidecar?branch=CASSANDRASC-132%2Ftrunk

> Add restore job progress endpoint and consistency check on restore ranges
> -
>
> Key: CASSANDRASC-132
> URL: https://issues.apache.org/jira/browse/CASSANDRASC-132
> Project: Sidecar for Apache Cassandra
>  Issue Type: New Feature
>  Components: Rest API
>Reporter: Yifan Cai
>Assignee: Yifan Cai
>Priority: Normal
>  Labels: pull-request-available
>
> In order to support sidecar-managed restore jobs (the sidecar counterpart of 
> Cassandra Analytics bulk writes via S3), the sidecar needs the capability to 
> perform consistency checks on the individual restore ranges, and a new 
> endpoint for the Spark job to query the restore progress.
> The consistency check should be responsive to cluster topology changes. For 
> example, if a new node joins the cluster, the write replica sets of the 
> affected ranges change. The joining node should be able to discover the 
> restore ranges that it owns and restore the data.






[jira] [Created] (CASSANDRASC-132) Add restore job progress endpoint and consistency check on restore ranges

2024-05-23 Thread Yifan Cai (Jira)
Yifan Cai created CASSANDRASC-132:
-

 Summary: Add restore job progress endpoint and consistency check 
on restore ranges
 Key: CASSANDRASC-132
 URL: https://issues.apache.org/jira/browse/CASSANDRASC-132
 Project: Sidecar for Apache Cassandra
  Issue Type: New Feature
  Components: Rest API
Reporter: Yifan Cai
Assignee: Yifan Cai


In order to support sidecar-managed restore jobs (the sidecar counterpart of 
Cassandra Analytics bulk writes via S3), the sidecar needs the capability to 
perform consistency checks on the individual restore ranges, and a new 
endpoint for the Spark job to query the restore progress.
The consistency check should be responsive to cluster topology changes. For 
example, if a new node joins the cluster, the write replica sets of the 
affected ranges change. The joining node should be able to discover the 
restore ranges that it owns and restore the data.






[jira] [Commented] (RATIS-2100) The `closeFuture` never completed while closing from the `NEW` state.

2024-05-23 Thread Tsz-wo Sze (Jira)


[ 
https://issues.apache.org/jira/browse/RATIS-2100?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17849129#comment-17849129
 ] 

Tsz-wo Sze commented on RATIS-2100:
---

It seems that we can fix it as below:
{code}
+++ b/ratis-server/src/main/java/org/apache/ratis/server/leader/LogAppenderDaemon.java
@@ -108,8 +108,11 @@ class LogAppenderDaemon {
   };
 
   public CompletableFuture<State> tryToClose() {
-    if (lifeCycle.transition(TRY_TO_CLOSE) == CLOSING) {
+    final State state = lifeCycle.transition(TRY_TO_CLOSE);
+    if (state == CLOSING) {
       daemon.interrupt();
+    } else if (state == CLOSED) {
+      closeFuture.complete(CLOSED);
     }
     return closeFuture;
   }
{code}
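Reduced to a self-contained sketch (illustrative names, not the actual Ratis classes), the hang and the fix look like this:
{code:java}
import java.util.concurrent.CompletableFuture;

// Illustrative only. The close future is normally completed by the worker
// thread when it exits. If tryToClose() runs before start(), there is no
// worker to complete it, so the NEW -> CLOSED path must complete it directly.
class DaemonSketch {
  private final CompletableFuture<Void> closeFuture = new CompletableFuture<>();
  private Thread worker; // remains null until start() is called

  synchronized CompletableFuture<Void> tryToClose() {
    if (worker != null) {
      worker.interrupt();          // worker completes closeFuture on exit
    } else {
      closeFuture.complete(null);  // closing from NEW: complete immediately
    }
    return closeFuture;
  }
}
{code}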

> The `closeFuture` never completed while closing from the `NEW` state.
> -
>
> Key: RATIS-2100
> URL: https://issues.apache.org/jira/browse/RATIS-2100
> Project: Ratis
>  Issue Type: Bug
>Reporter: Chung En Lee
>Assignee: Chung En Lee
>Priority: Critical
>
> Currently, the {{closeFuture}} only completes after the {{LogAppenderDaemon}} 
> has started. However, when closing from the {{NEW}} state, the transition is 
> {{NEW}} -> {{CLOSED}}, and the {{LogAppenderDaemon}} was not started.





[jira] [Commented] (HDDS-10750) Intermittent fork timeout while stopping Ratis server

2024-05-23 Thread Tsz-wo Sze (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-10750?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17849128#comment-17849128
 ] 

Tsz-wo Sze commented on HDDS-10750:
---

[~wfps1210], assigned it to you.  Thanks!

> Intermittent fork timeout while stopping Ratis server
> -
>
> Key: HDDS-10750
> URL: https://issues.apache.org/jira/browse/HDDS-10750
> Project: Apache Ozone
>  Issue Type: Sub-task
>Reporter: Attila Doroszlai
>Priority: Critical
> Attachments: 2024-04-21T16-53-06_683-jvmRun1.dump, 
> 2024-05-03T11-31-12_561-jvmRun1.dump, 
> org.apache.hadoop.fs.ozone.TestOzoneFileChecksum-output.txt, 
> org.apache.hadoop.hdds.scm.TestSCMInstallSnapshot-output.txt, 
> org.apache.hadoop.ozone.client.rpc.TestECKeyOutputStreamWithZeroCopy-output.txt,
>  org.apache.hadoop.ozone.container.TestECContainerRecovery-output-1.txt, 
> org.apache.hadoop.ozone.container.TestECContainerRecovery-output.txt, 
> org.apache.hadoop.ozone.om.TestOzoneManagerPrepare-output.txt
>
>
> {code:title=https://github.com/adoroszlai/ozone-build-results/blob/master/2024/04/21/30803/it-client/output.log}
> [INFO] Running 
> org.apache.hadoop.ozone.client.rpc.TestECKeyOutputStreamWithZeroCopy
> [INFO] 
> [INFO] Results:
> ...
> ... There was a timeout or other error in the fork
> {code}
> {code}
> "main" 
>java.lang.Thread.State: WAITING
> at java.lang.Object.wait(Native Method)
> at java.util.concurrent.ForkJoinTask.doInvoke(ForkJoinTask.java:405)
> ...
> at 
> org.apache.hadoop.ozone.MiniOzoneClusterImpl.stopDatanodes(MiniOzoneClusterImpl.java:473)
> at 
> org.apache.hadoop.ozone.MiniOzoneClusterImpl.stop(MiniOzoneClusterImpl.java:414)
> at 
> org.apache.hadoop.ozone.MiniOzoneClusterImpl.shutdown(MiniOzoneClusterImpl.java:400)
> at 
> org.apache.hadoop.ozone.client.rpc.AbstractTestECKeyOutputStream.shutdown(AbstractTestECKeyOutputStream.java:160)
> "ForkJoinPool.commonPool-worker-7" 
>java.lang.Thread.State: TIMED_WAITING
> ...
> at 
> java.util.concurrent.ThreadPoolExecutor.awaitTermination(ThreadPoolExecutor.java:1475)
> at 
> org.apache.ratis.util.ConcurrentUtils.shutdownAndWait(ConcurrentUtils.java:144)
> at 
> org.apache.ratis.util.ConcurrentUtils.shutdownAndWait(ConcurrentUtils.java:136)
> at 
> org.apache.ratis.server.impl.RaftServerProxy.lambda$close$9(RaftServerProxy.java:438)
> ...
> at 
> org.apache.ratis.util.LifeCycle.checkStateAndClose(LifeCycle.java:304)
> at 
> org.apache.ratis.server.impl.RaftServerProxy.close(RaftServerProxy.java:415)
> at 
> org.apache.hadoop.ozone.container.common.transport.server.ratis.XceiverServerRatis.stop(XceiverServerRatis.java:603)
> at 
> org.apache.hadoop.ozone.container.ozoneimpl.OzoneContainer.stop(OzoneContainer.java:484)
> at 
> org.apache.hadoop.ozone.container.common.statemachine.DatanodeStateMachine.close(DatanodeStateMachine.java:447)
> at 
> org.apache.hadoop.ozone.container.common.statemachine.DatanodeStateMachine.stopDaemon(DatanodeStateMachine.java:637)
> at 
> org.apache.hadoop.ozone.HddsDatanodeService.stop(HddsDatanodeService.java:550)
> at 
> org.apache.hadoop.ozone.MiniOzoneClusterImpl.stopDatanode(MiniOzoneClusterImpl.java:479)
> at 
> org.apache.hadoop.ozone.MiniOzoneClusterImpl$$Lambda$2077/645273703.accept(Unknown
>  Source)
> "c7edee5d-bf3c-45a7-a783-e11562f208dc-impl-thread2" 
>java.lang.Thread.State: WAITING
> ...
> at 
> java.util.concurrent.CompletableFuture.join(CompletableFuture.java:1947)
> at 
> org.apache.ratis.server.impl.RaftServerImpl.lambda$close$3(RaftServerImpl.java:543)
> at 
> org.apache.ratis.server.impl.RaftServerImpl$$Lambda$1925/263251010.run(Unknown
>  Source)
> at 
> org.apache.ratis.util.LifeCycle.lambda$checkStateAndClose$7(LifeCycle.java:306)
> at org.apache.ratis.util.LifeCycle$$Lambda$1204/655954062.get(Unknown 
> Source)
> at 
> org.apache.ratis.util.LifeCycle.checkStateAndClose(LifeCycle.java:326)
> at 
> org.apache.ratis.util.LifeCycle.checkStateAndClose(LifeCycle.java:304)
> at 
> org.apache.ratis.server.impl.RaftServerImpl.close(RaftServerImpl.java:525)
> {code}






[jira] [Assigned] (RATIS-2100) The `closeFuture` never completed while closing from the `NEW` state.

2024-05-23 Thread Tsz-wo Sze (Jira)


 [ 
https://issues.apache.org/jira/browse/RATIS-2100?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsz-wo Sze reassigned RATIS-2100:
-

Assignee: Chung En Lee

> The `closeFuture` never completed while closing from the `NEW` state.
> -
>
> Key: RATIS-2100
> URL: https://issues.apache.org/jira/browse/RATIS-2100
> Project: Ratis
>  Issue Type: Bug
>Reporter: Chung En Lee
>Assignee: Chung En Lee
>Priority: Critical
>
> Currently, the {{closeFuture}} only completes after the {{LogAppenderDaemon}} 
> has started. However, when closing from the {{NEW}} state, the transition is 
> {{NEW}} -> {{CLOSED}}, and the {{LogAppenderDaemon}} was not started.





[jira] [Commented] (HADOOP-19184) TestStagingCommitter.testJobCommitFailure failing

2024-05-23 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19184?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17849127#comment-17849127
 ] 

ASF GitHub Bot commented on HADOOP-19184:
-

hadoop-yetus commented on PR #6843:
URL: https://github.com/apache/hadoop/pull/6843#issuecomment-2128218733

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Logfile | Comment |
   |:----:|----------:|--------:|:-------:|:-------:|
   | +0 :ok: |  reexec  |   0m 24s |  |  Docker mode activated.  |
   |||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to include 3 new or modified test files.  |
   |||| _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  39m 55s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 32s |  |  trunk passed with JDK Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  compile  |   0m 26s |  |  trunk passed with JDK Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  checkstyle  |   0m 23s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 32s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 21s |  |  trunk passed with JDK Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 26s |  |  trunk passed with JDK Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  spotbugs  |   0m 52s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  28m  4s |  |  branch has no errors when building and testing our client artifacts.  |
   |||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 24s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 26s |  |  the patch passed with JDK Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javac  |   0m 26s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 20s |  |  the patch passed with JDK Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  javac  |   0m 20s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks issues.  |
   | +1 :green_heart: |  checkstyle  |   0m 15s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   0m 25s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 13s |  |  the patch passed with JDK Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 20s |  |  the patch passed with JDK Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  spotbugs  |   0m 51s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  27m 33s |  |  patch has no errors when building and testing our client artifacts.  |
   |||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   2m 16s |  |  hadoop-aws in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 27s |  |  The patch does not generate ASF License warnings.  |
   |  |  | 109m 27s |  |  |
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | Docker | ClientAPI=1.45 ServerAPI=1.45 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6843/2/artifact/out/Dockerfile |
   | GITHUB PR | https://github.com/apache/hadoop/pull/6843 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux c82220bdc649 5.15.0-106-generic #116-Ubuntu SMP Wed Apr 17 09:17:56 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 2485212cbf4c0b188e674179935ed214dd144351 |
   | Default Java | Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06 |
   | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06 |
   | Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6843/2/testReport/ |
   | Max. process+thread count | 706 (vs. ulimit of 5500) |
   | modules | C: hadoop-tools/hadoop-aws U: hadoop-tools/hadoop-aws |
   | Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6843/2/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   




> TestStagingCommitter.testJobCommitFailure fail

[jira] [Updated] (RATIS-2100) The `closeFuture` never completed while closing from the `NEW` state.

2024-05-23 Thread Tsz-wo Sze (Jira)


 [ 
https://issues.apache.org/jira/browse/RATIS-2100?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsz-wo Sze updated RATIS-2100:
--
Attachment: image.png

> The `closeFuture` never completed while closing from the `NEW` state.
> -
>
> Key: RATIS-2100
> URL: https://issues.apache.org/jira/browse/RATIS-2100
> Project: Ratis
>  Issue Type: Bug
>Reporter: Chung En Lee
>Priority: Critical
>
> Currently, the {{closeFuture}} only completes after the {{LogAppenderDaemon}} 
> has started. However, when closing from the {{NEW}} state, the transition is 
> {{NEW}} -> {{CLOSED}}, and the {{LogAppenderDaemon}} was not started.





[jira] [Updated] (RATIS-2100) The `closeFuture` never completed while closing from the `NEW` state.

2024-05-23 Thread Tsz-wo Sze (Jira)


 [ 
https://issues.apache.org/jira/browse/RATIS-2100?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsz-wo Sze updated RATIS-2100:
--
Attachment: (was: image.png)

> The `closeFuture` never completed while closing from the `NEW` state.
> -
>
> Key: RATIS-2100
> URL: https://issues.apache.org/jira/browse/RATIS-2100
> Project: Ratis
>  Issue Type: Bug
>Reporter: Chung En Lee
>Priority: Critical
>
> Currently, the {{closeFuture}} only completes after the {{LogAppenderDaemon}} 
> has started. However, when closing from the {{NEW}} state, the transition is 
> {{NEW}} -> {{CLOSED}}, and the {{LogAppenderDaemon}} was not started.





[jira] [Commented] (SPARK-33164) SPIP: add SQL support to "SELECT * (EXCEPT someColumn) FROM .." equivalent to DataSet.dropColumn(someColumn)

2024-05-23 Thread Jonathan Boarman (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-33164?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17849126#comment-17849126
 ] 

Jonathan Boarman commented on SPARK-33164:
--

There are significant benefits to the {{EXCEPT}} feature provided by most 
large data platforms, including Databricks, Snowflake, BigQuery, DuckDB, etc. 
The list of vendors that support {{EXCEPT}} (increasingly called {{EXCLUDE}} 
to avoid keyword conflicts) is long and growing. As such, migrating projects 
from those platforms to a pure Spark SQL environment is extremely costly.

Further, the "risks" associated with {{SELECT *}} do not apply to all 
scenarios – very importantly, with CTEs these risks are not applicable, since 
the constraints on column selection are generally made in the first CTE.

For example, any subsequent CTE in a chain inherits the field selection of 
the first CTE. On platforms that lack this feature, we face a different risk, 
caused by crazy levels of duplication, if we are forced to enumerate fields 
in each and every CTE. This is particularly problematic when joining two CTEs 
that share a field, such as an {{id}} column. In that situation, the most 
efficient and risk-free approach is to {{SELECT * EXCEPT(right.id)}} from the 
join of the two dependent CTEs.

Any perceived judgment aside, this is a highly-relied-upon feature in 
enterprise environments that depend on these quality-of-life innovations. 
Clearly such improvements are providing value in those environments, and 
Spark SQL should be no different in supporting users who have come to rely on 
them.
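For comparison, the Dataset API already expresses the join-key case directly; a minimal sketch (column names are illustrative):
{code:java}
import org.apache.spark.sql.Column;
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;

public class DropJoinKey {
  // Join on "id" and drop the right-hand copy of the key: the Dataset-API
  // equivalent of the proposed SELECT * EXCEPT(right.id).
  public static Dataset<Row> joinWithoutDuplicateKey(Dataset<Row> left, Dataset<Row> right) {
    Column cond = left.col("id").equalTo(right.col("id"));
    return left.join(right, cond).drop(right.col("id"));
  }
}
{code}
The SPIP asks for the same ergonomics in SQL, where no such escape hatch exists today.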

> SPIP: add SQL support to "SELECT * (EXCEPT someColumn) FROM .." equivalent to 
> DataSet.dropColumn(someColumn)
> 
>
> Key: SPARK-33164
>     URL: https://issues.apache.org/jira/browse/SPARK-33164
> Project: Spark
>  Issue Type: Improvement
>  Components: SQL
>Affects Versions: 2.4.5, 2.4.6, 2.4.7, 3.0.0, 3.0.1
>Reporter: Arnaud Nauwynck
>Priority: Minor
>   Original Estimate: 120h
>  Remaining Estimate: 120h
>
> *Q1.* What are you trying to do? Articulate your objectives using absolutely 
> no jargon.
> I would like to have the extended SQL syntax "SELECT * EXCEPT someColumn FROM 
> .." 
> to be able to select all columns except some in a SELECT clause.
> It would be similar to the SQL syntax of some databases, like Google BigQuery 
> or PostgreSQL.
> https://cloud.google.com/bigquery/docs/reference/standard-sql/query-syntax
> Google the question "select * EXCEPT one column", and you will see that many 
> developers have the same problem.
> example posts: 
> https://blog.jooq.org/2018/05/14/selecting-all-columns-except-one-in-postgresql/
> https://www.thetopsites.net/article/53001825.shtml
> There are several typical examples where it is very helpful:
> use-case 1:
>  you add a "count(*) countCol" column, and then filter on it using for 
> example "having countCol = 1" 
>   ... and then you want to select all columns EXCEPT this dummy column, which 
> is always "1"
> {noformat}
>   select * (EXCEPT countCol)
>   from (  
>  select count(*) countCol, * 
>from MyTable 
>where ... 
>group by ... having countCol = 1
>   )
> {noformat}
>
> use-case 2:
>  the same with an analytic function "partition over(...) rankCol ... where 
> rankCol=1"
>  For example, to get the latest row before a given time in a time-series 
> table.
>  These are the "Time-Travel" queries addressed by frameworks like "DeltaLake"
> {noformat}
>  CREATE table t_updates (update_time timestamp, id string, col1 type1, col2 
> type2, ... col42)
>  pastTime=..
>  SELECT * (except rankCol)
>  FROM (
>SELECT *,
>   RANK() OVER (PARTITION BY id ORDER BY update_time) rankCol   
>FROM t_updates
>where update_time < pastTime
>  ) WHERE rankCol = 1
>  
> {noformat}
>  
> use-case 3:
>  copy some data from table "t" to corresponding table "t_snapshot", and back 
> to "t"
> {noformat}
>CREATE TABLE t (col1 type1, col2 type2, col3 type3, ... col42 type42) ...
>
>/* create corresponding table: (snap_id string, col1 type1, col2 type2, 
> col3 type3, ... col42 type42) */
>CREATE TABLE t_snapshot
>AS SELECT '' as snap_id, * FROM t WHERE 1=2
>/* insert data from t to some snapshot */
>INSERT INTO t_snapshot
>SELECT 'snap1' 

[jira] [Commented] (HDDS-10750) Intermittent fork timeout while stopping Ratis server

2024-05-23 Thread Chung En Lee (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-10750?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17849125#comment-17849125
 ] 

Chung En Lee commented on HDDS-10750:
-

[~szetszwo], I'll work on it. Could you assign RATIS-2100 to me? Thanks.

> Intermittent fork timeout while stopping Ratis server
> -
>
> Key: HDDS-10750
> URL: https://issues.apache.org/jira/browse/HDDS-10750
> Project: Apache Ozone
>  Issue Type: Sub-task
>Reporter: Attila Doroszlai
>Priority: Critical
> Attachments: 2024-04-21T16-53-06_683-jvmRun1.dump, 
> 2024-05-03T11-31-12_561-jvmRun1.dump, 
> org.apache.hadoop.fs.ozone.TestOzoneFileChecksum-output.txt, 
> org.apache.hadoop.hdds.scm.TestSCMInstallSnapshot-output.txt, 
> org.apache.hadoop.ozone.client.rpc.TestECKeyOutputStreamWithZeroCopy-output.txt,
>  org.apache.hadoop.ozone.container.TestECContainerRecovery-output-1.txt, 
> org.apache.hadoop.ozone.container.TestECContainerRecovery-output.txt, 
> org.apache.hadoop.ozone.om.TestOzoneManagerPrepare-output.txt
>
>
> {code:title=https://github.com/adoroszlai/ozone-build-results/blob/master/2024/04/21/30803/it-client/output.log}
> [INFO] Running 
> org.apache.hadoop.ozone.client.rpc.TestECKeyOutputStreamWithZeroCopy
> [INFO] 
> [INFO] Results:
> ...
> ... There was a timeout or other error in the fork
> {code}
> {code}
> "main" 
>java.lang.Thread.State: WAITING
> at java.lang.Object.wait(Native Method)
> at java.util.concurrent.ForkJoinTask.doInvoke(ForkJoinTask.java:405)
> ...
> at 
> org.apache.hadoop.ozone.MiniOzoneClusterImpl.stopDatanodes(MiniOzoneClusterImpl.java:473)
> at 
> org.apache.hadoop.ozone.MiniOzoneClusterImpl.stop(MiniOzoneClusterImpl.java:414)
> at 
> org.apache.hadoop.ozone.MiniOzoneClusterImpl.shutdown(MiniOzoneClusterImpl.java:400)
> at 
> org.apache.hadoop.ozone.client.rpc.AbstractTestECKeyOutputStream.shutdown(AbstractTestECKeyOutputStream.java:160)
> "ForkJoinPool.commonPool-worker-7" 
>java.lang.Thread.State: TIMED_WAITING
> ...
> at 
> java.util.concurrent.ThreadPoolExecutor.awaitTermination(ThreadPoolExecutor.java:1475)
> at 
> org.apache.ratis.util.ConcurrentUtils.shutdownAndWait(ConcurrentUtils.java:144)
> at 
> org.apache.ratis.util.ConcurrentUtils.shutdownAndWait(ConcurrentUtils.java:136)
> at 
> org.apache.ratis.server.impl.RaftServerProxy.lambda$close$9(RaftServerProxy.java:438)
> ...
> at 
> org.apache.ratis.util.LifeCycle.checkStateAndClose(LifeCycle.java:304)
> at 
> org.apache.ratis.server.impl.RaftServerProxy.close(RaftServerProxy.java:415)
> at 
> org.apache.hadoop.ozone.container.common.transport.server.ratis.XceiverServerRatis.stop(XceiverServerRatis.java:603)
> at 
> org.apache.hadoop.ozone.container.ozoneimpl.OzoneContainer.stop(OzoneContainer.java:484)
> at 
> org.apache.hadoop.ozone.container.common.statemachine.DatanodeStateMachine.close(DatanodeStateMachine.java:447)
> at 
> org.apache.hadoop.ozone.container.common.statemachine.DatanodeStateMachine.stopDaemon(DatanodeStateMachine.java:637)
> at 
> org.apache.hadoop.ozone.HddsDatanodeService.stop(HddsDatanodeService.java:550)
> at 
> org.apache.hadoop.ozone.MiniOzoneClusterImpl.stopDatanode(MiniOzoneClusterImpl.java:479)
> at 
> org.apache.hadoop.ozone.MiniOzoneClusterImpl$$Lambda$2077/645273703.accept(Unknown
>  Source)
> "c7edee5d-bf3c-45a7-a783-e11562f208dc-impl-thread2" 
>java.lang.Thread.State: WAITING
> ...
> at 
> java.util.concurrent.CompletableFuture.join(CompletableFuture.java:1947)
> at 
> org.apache.ratis.server.impl.RaftServerImpl.lambda$close$3(RaftServerImpl.java:543)
> at 
> org.apache.ratis.server.impl.RaftServerImpl$$Lambda$1925/263251010.run(Unknown
>  Source)
> at 
> org.apache.ratis.util.LifeCycle.lambda$checkStateAndClose$7(LifeCycle.java:306)
> at org.apache.ratis.util.LifeCycle$$Lambda$1204/655954062.get(Unknown 
> Source)
> at 
> org.apache.ratis.util.LifeCycle.checkStateAndClose(LifeCycle.java:326)
> at 
> org.apache.ratis.util.LifeCycle.checkStateAndClose(LifeCycle.java:304)
> at 
> org.apache.ratis.server.impl.RaftServerImpl.close(RaftServerImpl.java:525)
> {code}






[jira] [Updated] (KAFKA-16833) Cluster missing topicIds from equals and hashCode, PartitionInfo missing equals and hashCode

2024-05-23 Thread Alyssa Huang (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-16833?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alyssa Huang updated KAFKA-16833:
-
Summary: Cluster missing topicIds from equals and hashCode, PartitionInfo 
missing equals and hashCode  (was: PartitionInfo missing equals and hashCode 
methods )
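A minimal illustration of the {{PartitionInfo}} half of this (constructor from the public Kafka API; until {{equals}}/{{hashCode}} are overridden, logically identical instances compare by reference):
{code:java}
import org.apache.kafka.common.Node;
import org.apache.kafka.common.PartitionInfo;

public class PartitionInfoEqualsDemo {
  public static void main(String[] args) {
    Node leader = new Node(0, "localhost", 9092);
    Node[] replicas = {leader};
    // Two logically identical descriptions of the same partition...
    PartitionInfo a = new PartitionInfo("topic", 0, leader, replicas, replicas);
    PartitionInfo b = new PartitionInfo("topic", 0, leader, replicas, replicas);
    // ...are not equal, because Object#equals compares references.
    System.out.println(a.equals(b)); // false until equals/hashCode are added
  }
}
{code}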

> Cluster missing topicIds from equals and hashCode, PartitionInfo missing 
> equals and hashCode
> 
>
> Key: KAFKA-16833
> URL: https://issues.apache.org/jira/browse/KAFKA-16833
> Project: Kafka
>  Issue Type: Bug
>Reporter: Alyssa Huang
>Priority: Major
>






[jira] [Updated] (CXF-8828) Support Jakarta EE 11

2024-05-23 Thread Andriy Redko (Jira)


 [ 
https://issues.apache.org/jira/browse/CXF-8828?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andriy Redko updated CXF-8828:
--
Description: 
Support Jakarta EE 11

Minimum JDK requirement - JDK-17

 

Jakarta Interceptors 2.2*

[Jakarta Validation 3.1|https://jakarta.ee/specifications/bean-validation/3.1/] (https://github.com/apache/cxf/pull/1889)

 

Updates required:

 - Tomcat 11 ([https://www.mail-archive.com/announce@apache.org/msg07789.html])

 - Arquillian Weld Container 4.x ([https://github.com/apache/cxf/pull/1621])

 - Apache ActiveMQ 6 ([https://activemq.apache.org/activemq-600-release])

  was:
Support Jakarta EE 11

Minimum JDK requirement - JDK-17

Jakarta Interceptors 2.2*

 

Updates required:

 - Tomcat 11 ([https://www.mail-archive.com/announce@apache.org/msg07789.html])

 - Arquillian Weld Container 4.x ([https://github.com/apache/cxf/pull/1621])

 - Apache ActiveMQ 6 ([https://activemq.apache.org/activemq-600-release])


> Support Jakarta EE 11
> -
>
> Key: CXF-8828
> URL: https://issues.apache.org/jira/browse/CXF-8828
> Project: CXF
>  Issue Type: Improvement
>Reporter: Andriy Redko
>Assignee: Andriy Redko
>Priority: Major
> Fix For: 4.2.0
>
>
> Support Jakarta EE 11
> Minimum JDK requirement - JDK-17
>  
> Jakarta Interceptors 2.2*
> [Jakarta Validation 3.1|https://jakarta.ee/specifications/bean-validation/3.1/] (https://github.com/apache/cxf/pull/1889)
>  
> Updates required:
>  - Tomcat 11 
> ([https://www.mail-archive.com/announce@apache.org/msg07789.html])
>  - Arquillian Weld Container 4.x ([https://github.com/apache/cxf/pull/1621])
>  - Apache ActiveMQ 6 ([https://activemq.apache.org/activemq-600-release])





[jira] [Commented] (MDEP-799) improve mvn dependency:tree - add optional JSON output of the results

2024-05-23 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/MDEP-799?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17849124#comment-17849124
 ] 

ASF GitHub Bot commented on MDEP-799:
-

pombredanne commented on PR #391:
URL: 
https://github.com/apache/maven-dependency-plugin/pull/391#issuecomment-2128158079

   Everyone thank you ++ and @LogFlames :bow: :heart: 
   
   You have rendered obsolete about 22K files on GitHub that try to parse the 
output of tree!
   See https://github.com/search?q=mvn+"dependency%3Atree"&type=code 
   
   @LogFlames @monperrus I guess you plan to use it in 
https://github.com/chains-project/maven-lockfile ?
   
   FWIW, on my side this is going to be used in a front end to the 
https://github.com/nexB/scancode.io/ code scanner and matcher:
   - created for Maven in https://github.com/nexB/dependency-inspector/issues/6
   - otherwise,  part of a general purpose solution to 
https://github.com/nexB/dependency-inspector/issues/2
   - and the companion to ecosystem-specific dependency resolvers such as 
https://github.com/nexB/python-inspector or 
https://github.com/nexB/nuget-inspector 
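   For anyone landing here for usage: assuming the JSON type is wired into the 
goal's existing {{outputType}} and {{outputFile}} parameters (an assumption to 
verify against the released 3.7.0 docs), the invocation would look like:
{code}
mvn dependency:tree -DoutputType=json -DoutputFile=tree.json
{code}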




> improve mvn dependency:tree - add optional JSON output of the results
> -
>
> Key: MDEP-799
> URL: https://issues.apache.org/jira/browse/MDEP-799
> Project: Maven Dependency Plugin
>  Issue Type: New Feature
>  Components: tree
>Reporter: Zhenxu Ke
>Assignee: Elliotte Rusty Harold
>Priority: Major
> Fix For: 3.7.0
>
>
> I'd like to add an output type JSON, will open a pull request soon





[jira] [Updated] (NIFI-13290) Dialog close on route navigation causes extra selection route to fire and browser history to be removed

2024-05-23 Thread Scott Aslan (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-13290?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Scott Aslan updated NIFI-13290:
---
Attachment: example2.gif

> Dialog close on route navigation causes extra selection route to fire and 
> browser history to be removed
> ---
>
> Key: NIFI-13290
> URL: https://issues.apache.org/jira/browse/NIFI-13290
> Project: Apache NiFi
>  Issue Type: Sub-task
>Reporter: Scott Aslan
>Priority: Major
> Attachments: Kapture 2024-05-22 at 17.24.17.gif, example2.gif
>
>
> https://github.com/apache/nifi/pull/8859#discussion_r1612046025





[jira] [Updated] (NIFI-13290) Dialog close on route navigation causes extra selection route to fire and browser history to be removed

2024-05-23 Thread Scott Aslan (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-13290?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Scott Aslan updated NIFI-13290:
---
Attachment: Kapture 2024-05-22 at 17.24.17.gif

> Dialog close on route navigation causes extra selection route to fire and 
> browser history to be removed
> ---
>
> Key: NIFI-13290
> URL: https://issues.apache.org/jira/browse/NIFI-13290
> Project: Apache NiFi
>  Issue Type: Sub-task
>Reporter: Scott Aslan
>Priority: Major
> Attachments: Kapture 2024-05-22 at 17.24.17.gif
>
>
> https://github.com/apache/nifi/pull/8859#discussion_r1612046025





[jira] [Updated] (NIFI-13290) Dialog close on route navigation causes extra selection route to fire and browser history to be removed

2024-05-23 Thread Scott Aslan (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-13290?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Scott Aslan updated NIFI-13290:
---
Description: https://github.com/apache/nifi/pull/8859#discussion_r1612046025

> Dialog close on route navigation causes extra selection route to fire and 
> browser history to be removed
> ---
>
> Key: NIFI-13290
> URL: https://issues.apache.org/jira/browse/NIFI-13290
> Project: Apache NiFi
>  Issue Type: Sub-task
>Reporter: Scott Aslan
>Priority: Major
>
> https://github.com/apache/nifi/pull/8859#discussion_r1612046025





[jira] [Created] (NIFI-13290) Dialog close on route navigation causes extra selection route to fire and browser history to be removed

2024-05-23 Thread Scott Aslan (Jira)
Scott Aslan created NIFI-13290:
--

 Summary: Dialog close on route navigation causes extra selection 
route to fire and browser history to be removed
 Key: NIFI-13290
 URL: https://issues.apache.org/jira/browse/NIFI-13290
 Project: Apache NiFi
  Issue Type: Sub-task
Reporter: Scott Aslan








[jira] [Created] (KAFKA-16833) PartitionInfo missing equals and hashCode methods

2024-05-23 Thread Alyssa Huang (Jira)
Alyssa Huang created KAFKA-16833:


 Summary: PartitionInfo missing equals and hashCode methods 
 Key: KAFKA-16833
 URL: https://issues.apache.org/jira/browse/KAFKA-16833
 Project: Kafka
  Issue Type: Bug
Reporter: Alyssa Huang









[jira] [Commented] (HDDS-10750) Intermittent fork timeout while stopping Ratis server

2024-05-23 Thread Tsz-wo Sze (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-10750?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17849123#comment-17849123
 ] 

Tsz-wo Sze commented on HDDS-10750:
---

[~wfps1210], thanks a lot for digging out the problem!  Are you going to submit 
a pull request?  If not, I can work on it.

> Intermittent fork timeout while stopping Ratis server
> -
>
> Key: HDDS-10750
> URL: https://issues.apache.org/jira/browse/HDDS-10750
> Project: Apache Ozone
>  Issue Type: Sub-task
>Reporter: Attila Doroszlai
>Priority: Critical
> Attachments: 2024-04-21T16-53-06_683-jvmRun1.dump, 
> 2024-05-03T11-31-12_561-jvmRun1.dump, 
> org.apache.hadoop.fs.ozone.TestOzoneFileChecksum-output.txt, 
> org.apache.hadoop.hdds.scm.TestSCMInstallSnapshot-output.txt, 
> org.apache.hadoop.ozone.client.rpc.TestECKeyOutputStreamWithZeroCopy-output.txt,
>  org.apache.hadoop.ozone.container.TestECContainerRecovery-output-1.txt, 
> org.apache.hadoop.ozone.container.TestECContainerRecovery-output.txt, 
> org.apache.hadoop.ozone.om.TestOzoneManagerPrepare-output.txt
>
>
> {code:title=https://github.com/adoroszlai/ozone-build-results/blob/master/2024/04/21/30803/it-client/output.log}
> [INFO] Running 
> org.apache.hadoop.ozone.client.rpc.TestECKeyOutputStreamWithZeroCopy
> [INFO] 
> [INFO] Results:
> ...
> ... There was a timeout or other error in the fork
> {code}
> {code}
> "main" 
>java.lang.Thread.State: WAITING
> at java.lang.Object.wait(Native Method)
> at java.util.concurrent.ForkJoinTask.doInvoke(ForkJoinTask.java:405)
> ...
> at 
> org.apache.hadoop.ozone.MiniOzoneClusterImpl.stopDatanodes(MiniOzoneClusterImpl.java:473)
> at 
> org.apache.hadoop.ozone.MiniOzoneClusterImpl.stop(MiniOzoneClusterImpl.java:414)
> at 
> org.apache.hadoop.ozone.MiniOzoneClusterImpl.shutdown(MiniOzoneClusterImpl.java:400)
> at 
> org.apache.hadoop.ozone.client.rpc.AbstractTestECKeyOutputStream.shutdown(AbstractTestECKeyOutputStream.java:160)
> "ForkJoinPool.commonPool-worker-7" 
>java.lang.Thread.State: TIMED_WAITING
> ...
> at 
> java.util.concurrent.ThreadPoolExecutor.awaitTermination(ThreadPoolExecutor.java:1475)
> at 
> org.apache.ratis.util.ConcurrentUtils.shutdownAndWait(ConcurrentUtils.java:144)
> at 
> org.apache.ratis.util.ConcurrentUtils.shutdownAndWait(ConcurrentUtils.java:136)
> at 
> org.apache.ratis.server.impl.RaftServerProxy.lambda$close$9(RaftServerProxy.java:438)
> ...
> at 
> org.apache.ratis.util.LifeCycle.checkStateAndClose(LifeCycle.java:304)
> at 
> org.apache.ratis.server.impl.RaftServerProxy.close(RaftServerProxy.java:415)
> at 
> org.apache.hadoop.ozone.container.common.transport.server.ratis.XceiverServerRatis.stop(XceiverServerRatis.java:603)
> at 
> org.apache.hadoop.ozone.container.ozoneimpl.OzoneContainer.stop(OzoneContainer.java:484)
> at 
> org.apache.hadoop.ozone.container.common.statemachine.DatanodeStateMachine.close(DatanodeStateMachine.java:447)
> at 
> org.apache.hadoop.ozone.container.common.statemachine.DatanodeStateMachine.stopDaemon(DatanodeStateMachine.java:637)
> at 
> org.apache.hadoop.ozone.HddsDatanodeService.stop(HddsDatanodeService.java:550)
> at 
> org.apache.hadoop.ozone.MiniOzoneClusterImpl.stopDatanode(MiniOzoneClusterImpl.java:479)
> at 
> org.apache.hadoop.ozone.MiniOzoneClusterImpl$$Lambda$2077/645273703.accept(Unknown
>  Source)
> "c7edee5d-bf3c-45a7-a783-e11562f208dc-impl-thread2" 
>java.lang.Thread.State: WAITING
> ...
> at 
> java.util.concurrent.CompletableFuture.join(CompletableFuture.java:1947)
> at 
> org.apache.ratis.server.impl.RaftServerImpl.lambda$close$3(RaftServerImpl.java:543)
> at 
> org.apache.ratis.server.impl.RaftServerImpl$$Lambda$1925/263251010.run(Unknown
>  Source)
> at 
> org.apache.ratis.util.LifeCycle.lambda$checkStateAndClose$7(LifeCycle.java:306)
> at org.apache.ratis.util.LifeCycle$$Lambda$1204/655954062.get(Unknown 
> Source)
> at 
> org.apache.ratis.util.LifeCycle.checkStateAndClose(LifeCycle.java:326)
> at 
> org.apache.ratis.util.LifeCycle.checkStateAndClose(LifeCycle.java:304)
> at 
> org.apache.ratis.server.impl.RaftServerImpl.close(RaftServerImpl.java:525)
> {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: issues-unsubscr...@ozone.apache.org
For additional commands, e-mail: issues-h...@ozone.apache.org



[jira] [Commented] (HADOOP-19184) TestStagingCommitter.testJobCommitFailure failing

2024-05-23 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19184?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17849122#comment-17849122
 ] 

ASF GitHub Bot commented on HADOOP-19184:
-

mukund-thakur commented on PR #6843:
URL: https://github.com/apache/hadoop/pull/6843#issuecomment-2128081264

   Tested using us-west-1 bucket. All good. 




> TestStagingCommitter.testJobCommitFailure failing 
> --
>
> Key: HADOOP-19184
> URL: https://issues.apache.org/jira/browse/HADOOP-19184
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Mukund Thakur
>Assignee: Mukund Thakur
>Priority: Critical
>  Labels: pull-request-available
>
> {code:java}
> [INFO] 
> [ERROR] Failures: 
> [ERROR]   TestStagingCommitter.testJobCommitFailure:662 [Committed objects 
> compared to deleted paths 
> org.apache.hadoop.fs.s3a.commit.staging.StagingTestBase$ClientResults@1b4ab85{
>  requests=12, uploads=12, parts=12, tagsByUpload=12, commits=5, aborts=7, 
> deletes=0}] 
> Expecting:
>   
> <["s3a://bucket-name/output/path/r_0_0_0e1f4790-4d3f-4abb-ba98-2b39ec8b7566",
>     
> "s3a://bucket-name/output/path/r_0_0_92306fea-0219-4ba5-a2b6-091d95547c11",
>     
> "s3a://bucket-name/output/path/r_1_1_016c4a25-a1f7-4e01-918e-e24a32c7525f",
>     
> "s3a://bucket-name/output/path/r_0_0_b2698dab-5870-4bdb-98ab-0ef5832eca45",
>     
> "s3a://bucket-name/output/path/r_1_1_600b7e65-a7ff-4d07-b763-c4339a9164ad"]>
> to contain exactly in any order:
>   <[]>
> but the following elements were unexpected:
>   
> <["s3a://bucket-name/output/path/r_0_0_0e1f4790-4d3f-4abb-ba98-2b39ec8b7566",
>     
> "s3a://bucket-name/output/path/r_0_0_92306fea-0219-4ba5-a2b6-091d95547c11",
>     
> "s3a://bucket-name/output/path/r_1_1_016c4a25-a1f7-4e01-918e-e24a32c7525f",
>     
> "s3a://bucket-name/output/path/r_0_0_b2698dab-5870-4bdb-98ab-0ef5832eca45",{code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Resolved] (IMPALA-13102) Loading tables with illegal stats failed

2024-05-23 Thread Quanlong Huang (Jira)


 [ 
https://issues.apache.org/jira/browse/IMPALA-13102?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Quanlong Huang resolved IMPALA-13102.
-
Fix Version/s: Impala 4.5.0
   Resolution: Fixed

> Loading tables with illegal stats failed
> 
>
> Key: IMPALA-13102
> URL: https://issues.apache.org/jira/browse/IMPALA-13102
> Project: IMPALA
>  Issue Type: Bug
>  Components: Catalog
>Reporter: Quanlong Huang
>Assignee: Quanlong Huang
>Priority: Critical
> Fix For: Impala 4.5.0
>
>
> When the table has illegal stats, e.g. numDVs=-100, Impala can't load the 
> table. So DROP STATS or DROP TABLE can't be performed on the table.
> {code:sql}
> [localhost:21050] default> drop stats alltypes_bak;
> Query: drop stats alltypes_bak
> ERROR: AnalysisException: Failed to load metadata for table: 'alltypes_bak'
> CAUSED BY: TableLoadingException: Failed to load metadata for table: 
> default.alltypes_bak
> CAUSED BY: IllegalStateException: ColumnStats{avgSize_=4.0, 
> avgSerializedSize_=4.0, maxSize_=4, numDistinct_=-100, numNulls_=0, 
> numTrues=-1, numFalses=-1, lowValue=-1, highValue=-1}{code}
> We should allow at least dropping the stats or dropping the table, so the user 
> can use Impala to recover the stats.
> Stacktrace in the logs:
> {noformat}
> I0520 08:00:56.661746 17543 jni-util.cc:321] 
> 5343142d1173494f:44dcde8c] 
> org.apache.impala.common.AnalysisException: Failed to load metadata for 
> table: 'alltypes_bak'
> at 
> org.apache.impala.analysis.Analyzer.resolveTableRef(Analyzer.java:974)
> at 
> org.apache.impala.analysis.DropStatsStmt.analyze(DropStatsStmt.java:94)
> at 
> org.apache.impala.analysis.AnalysisContext.analyze(AnalysisContext.java:551)
> at 
> org.apache.impala.analysis.AnalysisContext.analyzeAndAuthorize(AnalysisContext.java:498)
> at 
> org.apache.impala.service.Frontend.doCreateExecRequest(Frontend.java:2542)
> at 
> org.apache.impala.service.Frontend.getTExecRequest(Frontend.java:2224)
> at 
> org.apache.impala.service.Frontend.createExecRequest(Frontend.java:1985)
> at 
> org.apache.impala.service.JniFrontend.createExecRequest(JniFrontend.java:175)
> Caused by: org.apache.impala.catalog.TableLoadingException: Failed to load 
> metadata for table: default.alltypes_bak
> CAUSED BY: IllegalStateException: ColumnStats{avgSize_=4.0, 
> avgSerializedSize_=4.0, maxSize_=4, numDistinct_=-100, numNulls_=0, 
> numTrues=-1, numFalses=-1, lowValue=-1, highValue=-1}
> at 
> org.apache.impala.catalog.IncompleteTable.loadFromThrift(IncompleteTable.java:162)
> at org.apache.impala.catalog.Table.fromThrift(Table.java:586)
> at 
> org.apache.impala.catalog.ImpaladCatalog.addTable(ImpaladCatalog.java:479)
> at 
> org.apache.impala.catalog.ImpaladCatalog.addCatalogObject(ImpaladCatalog.java:334)
> at 
> org.apache.impala.catalog.ImpaladCatalog.updateCatalog(ImpaladCatalog.java:262)
> at 
> org.apache.impala.service.FeCatalogManager$CatalogdImpl.updateCatalogCache(FeCatalogManager.java:114)
> at 
> org.apache.impala.service.Frontend.updateCatalogCache(Frontend.java:585)
> at 
> org.apache.impala.service.JniFrontend.updateCatalogCache(JniFrontend.java:196)
> at .: 
> org.apache.impala.catalog.TableLoadingException: Failed to load metadata for 
> table: default.alltypes_bak
> at org.apache.impala.catalog.HdfsTable.load(HdfsTable.java:1318)
> at org.apache.impala.catalog.HdfsTable.load(HdfsTable.java:1213)
> at org.apache.impala.catalog.TableLoader.load(TableLoader.java:145)
> at 
> org.apache.impala.catalog.TableLoadingMgr$2.call(TableLoadingMgr.java:251)
> at 
> org.apache.impala.catalog.TableLoadingMgr$2.call(TableLoadingMgr.java:247)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> at java.lang.Thread.run(Thread.java:750)
> Caused by: java.lang.IllegalStateException: ColumnStats{avgSize_=4.0, 
> avgSerializedSize_=4.0, maxSize_=4, numDistinct_=-100, numNulls_=0, 
> numTrues=-1, numFalses=-1, lowValue=-1, highValue=-1}
> at 
> com.google.common.base.Preconditions.checkState(Preconditions.java:512)
> at 
> org.apache.impala.catalog.ColumnStats.valid
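
As a side note, a hypothetical sketch of the lenient handling the description asks for (plain Java, not Impala code; treating -1 as the "unknown" sentinel is an assumption here): clamp an illegal stat instead of failing the whole table load.

{code:java}
// Hypothetical: normalize out-of-range stats so table load can proceed and
// DROP STATS / DROP TABLE remain possible.
public class LenientColumnStatsSketch {
    static long sanitizeNumDistinct(long numDistinct) {
        return numDistinct < -1 ? -1 : numDistinct; // -1 = "unknown"
    }

    public static void main(String[] args) {
        System.out.println(sanitizeNumDistinct(-100)); // -1 instead of an IllegalStateException
        System.out.println(sanitizeNumDistinct(42));   // legal values pass through
    }
}
{code}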

[jira] [Updated] (CLI-321) Add and use a Converter interface and implementations without using BeanUtils

2024-05-23 Thread Gary D. Gregory (Jira)


 [ 
https://issues.apache.org/jira/browse/CLI-321?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gary D. Gregory updated CLI-321:

Fix Version/s: 1.8.1
   (was: 1.8.0)

> Add and use a Converter interface and implementations without using BeanUtils 
> --
>
> Key: CLI-321
> URL: https://issues.apache.org/jira/browse/CLI-321
> Project: Commons CLI
>  Issue Type: Improvement
>  Components: Parser
>Affects Versions: 1.6.0
>Reporter: Claude Warren
>Assignee: Claude Warren
>Priority: Minor
> Fix For: 1.8.1
>
>
> The current TypeHandler implementation notes indicate that the 
> BeanUtils.Converters should be used to create instances of the various types. 
>  This issue is to complete the implementation of TypeHandler so that it uses 
> the BeanUtils.Converters.
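
For illustration, a minimal converter of this shape (a sketch only; Commons CLI's actual interface may differ in its details):

{code:java}
// A converter is just "String in, typed value out, may throw".
@FunctionalInterface
interface Converter<T, E extends Throwable> {
    T apply(String input) throws E;
}

public class ConverterSketch {
    public static void main(String[] args) throws Exception {
        Converter<Integer, NumberFormatException> toInt = Integer::valueOf;
        Converter<java.nio.file.Path, RuntimeException> toPath = java.nio.file.Paths::get;

        System.out.println(toInt.apply("42") + 1); // 43
        System.out.println(toPath.apply("/tmp"));  // /tmp
    }
}
{code}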



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (CLI-322) Allow minus for kebab-case options

2024-05-23 Thread Gary D. Gregory (Jira)


 [ 
https://issues.apache.org/jira/browse/CLI-322?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gary D. Gregory updated CLI-322:

Fix Version/s: 1.8.1
   (was: 1.8.0)

> Allow minus for kebab-case options
> --
>
> Key: CLI-322
> URL: https://issues.apache.org/jira/browse/CLI-322
> Project: Commons CLI
>  Issue Type: New Feature
>  Components: Parser
>Affects Versions: 1.6.0
>Reporter: Claude Warren
>Assignee: Claude Warren
>Priority: Minor
> Fix For: 1.8.1
>
>
> Currently minus (“-“) is not allowed in option names,
> which makes common long options in kebab-case
> (like {{--is-not-allowed}}) impossible.
> This change is to allow it inside an option name.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (SOLR-10654) Expose Metrics in Prometheus format DIRECTLY from Solr

2024-05-23 Thread David Smiley (Jira)


 [ 
https://issues.apache.org/jira/browse/SOLR-10654?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Smiley updated SOLR-10654:

Description: 
Expose metrics via a `wt=prometheus` response type.

Example scrape_config in prometheus.yml:
{code:java}
scrape_configs:

  - job_name: 'solr'

metrics_path: '/solr/admin/metrics'

params:
  wt: ["prometheus"]

static_configs:
  - targets: ['localhost:8983']

{code}
Rationale for having this despite the "Prometheus Exporter".  They have 
different strengths and weaknesses.

  was:
Expose metrics via a `wt=prometheus` response type.

Example scrape_config in prometheus.yml:

{code}
scrape_configs:

  - job_name: 'solr'

metrics_path: '/solr/admin/metrics'

params:
  wt: ["prometheus"]

static_configs:
  - targets: ['localhost:8983']

{code}


> Expose Metrics in Prometheus format DIRECTLY from Solr
> --
>
> Key: SOLR-10654
>     URL: https://issues.apache.org/jira/browse/SOLR-10654
> Project: Solr
>  Issue Type: Improvement
>  Components: metrics
>Reporter: Keith Laban
>Priority: Major
> Attachments: prometheus_metrics.txt
>
>  Time Spent: 5h 40m
>  Remaining Estimate: 0h
>
> Expose metrics via a `wt=prometheus` response type.
> Example scrape_config in prometheus.yml:
> {code:java}
> scrape_configs:
>   - job_name: 'solr'
> metrics_path: '/solr/admin/metrics'
> params:
>   wt: ["prometheus"]
> static_configs:
>   - targets: ['localhost:8983']
> {code}
> Rationale for having this despite the "Prometheus Exporter".  They have 
> different strengths and weaknesses.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: issues-unsubscr...@solr.apache.org
For additional commands, e-mail: issues-h...@solr.apache.org



[jira] [Updated] (SOLR-10654) Expose Metrics in Prometheus format DIRECTLY from Solr

2024-05-23 Thread David Smiley (Jira)


 [ 
https://issues.apache.org/jira/browse/SOLR-10654?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Smiley updated SOLR-10654:

Description: 
Expose metrics via a `wt=prometheus` response type.

Example scrape_config in prometheus.yml:
{code:java}
scrape_configs:

  - job_name: 'solr'

metrics_path: '/solr/admin/metrics'

params:
  wt: ["prometheus"]

static_configs:
  - targets: ['localhost:8983']

{code}
[Rationale|https://issues.apache.org/jira/browse/SOLR-11795?focusedCommentId=17261423=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-17261423]
 for having this despite the "Prometheus Exporter".  They have different 
strengths and weaknesses.

  was:
Expose metrics via a `wt=prometheus` response type.

Example scrape_config in prometheus.yml:
{code:java}
scrape_configs:

  - job_name: 'solr'

metrics_path: '/solr/admin/metrics'

params:
  wt: ["prometheus"]

static_configs:
  - targets: ['localhost:8983']

{code}
Rationale for having this despite the "Prometheus Exporter".  They have 
different strengths and weaknesses.


> Expose Metrics in Prometheus format DIRECTLY from Solr
> --
>
> Key: SOLR-10654
> URL: https://issues.apache.org/jira/browse/SOLR-10654
> Project: Solr
>  Issue Type: Improvement
>  Components: metrics
>Reporter: Keith Laban
>Priority: Major
> Attachments: prometheus_metrics.txt
>
>  Time Spent: 5h 40m
>  Remaining Estimate: 0h
>
> Expose metrics via a `wt=prometheus` response type.
> Example scrape_config in prometheus.yml:
> {code:java}
> scrape_configs:
>   - job_name: 'solr'
> metrics_path: '/solr/admin/metrics'
> params:
>   wt: ["prometheus"]
> static_configs:
>   - targets: ['localhost:8983']
> {code}
> [Rationale|https://issues.apache.org/jira/browse/SOLR-11795?focusedCommentId=17261423=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-17261423]
>  for having this despite the "Prometheus Exporter".  They have different 
> strengths and weaknesses.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: issues-unsubscr...@solr.apache.org
For additional commands, e-mail: issues-h...@solr.apache.org



[jira] [Resolved] (KAFKA-16828) RackAwareTaskAssignorTest failed

2024-05-23 Thread Chia-Ping Tsai (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-16828?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chia-Ping Tsai resolved KAFKA-16828.

Fix Version/s: 3.8.0
   Resolution: Fixed

> RackAwareTaskAssignorTest failed
> 
>
> Key: KAFKA-16828
> URL: https://issues.apache.org/jira/browse/KAFKA-16828
> Project: Kafka
>  Issue Type: Test
>Reporter: Luke Chen
>Assignee: Kuan Po Tseng
>Priority: Major
> Fix For: 3.8.0
>
>
> Found in the latest trunk build.
> It fails many tests in the `RackAwareTaskAssignorTest` suite.
>  
> https://ci-builds.apache.org/job/Kafka/job/kafka-pr/job/PR-15951/7/#showFailuresLink



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (KAFKA-15630) Improve documentation of offset.lag.max

2024-05-23 Thread Ganesh Sadanala (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-15630?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ganesh Sadanala reassigned KAFKA-15630:
---

Assignee: Ganesh Sadanala

> Improve documentation of offset.lag.max
> ---
>
> Key: KAFKA-15630
> URL: https://issues.apache.org/jira/browse/KAFKA-15630
> Project: Kafka
>  Issue Type: Improvement
>  Components: docs, mirrormaker
>Reporter: Mickael Maison
>Assignee: Ganesh Sadanala
>Priority: Major
>  Labels: newbie
>
> It would be good to expand on the role of this configuration in offset 
> translation and mention that it can be set to a smaller value, or even 0, to 
> help in scenarios when records may not flow constantly.
> The documentation string is here: 
> [https://github.com/apache/kafka/blob/06739d5aa026e7db62ff0bd7da57e079cca35f07/connect/mirror/src/main/java/org/apache/kafka/connect/mirror/MirrorSourceConfig.java#L104]
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Comment Edited] (GROOVY-11370) STC: extension method cannot provide map property (read mode)

2024-05-23 Thread Eric Milles (Jira)


[ 
https://issues.apache.org/jira/browse/GROOVY-11370?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17844979#comment-17844979
 ] 

Eric Milles edited comment on GROOVY-11370 at 5/23/24 9:15 PM:
---

https://github.com/apache/groovy/commit/9d41af9df4c688ca2c91fa9283ad3462e0abc928
https://github.com/apache/groovy/commit/6e2b9471bc2092c4746677f22bc87eda70cbb253


was (Author: emilles):
https://github.com/apache/groovy/commit/9d41af9df4c688ca2c91fa9283ad3462e0abc928

> STC: extension method cannot provide map property (read mode)
> -
>
> Key: GROOVY-11370
> URL: https://issues.apache.org/jira/browse/GROOVY-11370
> Project: Groovy
>  Issue Type: Bug
>  Components: Static Type Checker
>Affects Versions: 3.0.21, 4.0.21
>Reporter: Eric Milles
>Assignee: Eric Milles
>Priority: Major
> Fix For: 3.0.22, 4.0.22
>
>
> Consider the following:
> {code:groovy}
> @TypeChecked
> void test() {
>   def map = [:]
>   print map.metaClass
> }
> test()
> {code}
> The script prints "null" (before Groovy 5), indicating that the "getMetaClass()" 
> extension method is not used.  However, node metadata indicates that the 
> extension method is used.  For example, adding "Number n = map.metaClass" 
> says: "Cannot assign value of type groovy.lang.MetaClass to variable of type 
> java.lang.Number"
> GROOVY-5001, GROOVY-5491, GROOVY-5568, GROOVY-9115, GROOVY-9123



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (NIFI-13289) Add tooltip to NewCanvas item

2024-05-23 Thread Shane O'Neill (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-13289?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shane O'Neill reassigned NIFI-13289:


Assignee: Shane O'Neill

> Add tooltip to NewCanvas item
> -
>
> Key: NIFI-13289
> URL: https://issues.apache.org/jira/browse/NIFI-13289
> Project: Apache NiFi
>  Issue Type: Sub-task
>  Components: Core UI
>Reporter: Matt Gilman
>Assignee: Shane O'Neill
>Priority: Major
> Attachments: Screenshot 2024-05-23 at 2.36.55 PM.png
>
>
> The old NiFi UI had tooltips for the new canvas items in the top bar. These are 
> currently missing in the new UI.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (KAFKA-16530) Fix high-watermark calculation to not assume the leader is in the voter set

2024-05-23 Thread Alyssa Huang (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-16530?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17849121#comment-17849121
 ] 

Alyssa Huang commented on KAFKA-16530:
--

In the case where the leader is removed from the voter set and tries to update its 
log end offset (`updateLocalState`), because of a new removeNode record for 
instance, it will first update its own ReplicaState (`getOrCreateReplicaState`), 
which will return a _new_ Observer state if its id is no longer in the 
`voterStates` map. The endOffset will be updated, and then we'll consider whether 
the high watermark can be updated (`maybeUpdateHighWatermark`). 
When updating the high watermark, we only look at the `voterStates` map, which 
means we won't count the leader's offset as part of the HW calculation. This 
_does_ mean it's possible for the HW to drop, though. Here's a scenario:


{code:java}
# Before node 1 removal, voterStates contains Nodes 1, 2, 3
Node 1: Leader, LEO 100
Node 2: Follower, LEO 90 <- HW
Node 3: Follower, LEO 85

# Leader processes removeNode record, voterStates contains Nodes 2, 3
Node 1: Leader, LEO 101
Node 2: Follower, LEO 90
Node 3: Follower, LEO 85 <- new HW{code}

We want to make sure the HW does not decrement in this scenario. Perhaps we 
could revise `maybeUpdateHighWatermark` to continue to factor the Leader's 
offset into the HW calculation, regardless of whether it is in the voter set.
e.g.
{code:java}
  private boolean maybeUpdateHighWatermark() {
    // Find the largest offset which is replicated to a majority of replicas (the leader counts)
-   List<ReplicaState> followersByDescendingFetchOffset = followersByDescendingFetchOffset();
+   List<ReplicaState> followersAndLeaderByDescFetchOffset = followersAndLeadersByDescFetchOffset();

-   int indexOfHw = voterStates.size() / 2;
+   int indexOfHw = followersAndLeaderByDescFetchOffset.size() / 2;
    Optional<LogOffsetMetadata> highWatermarkUpdateOpt = followersAndLeaderByDescFetchOffset.get(indexOfHw).endOffset;{code}

However, this does not cover the case when a follower is being removed from the 
voter set.

{code:java}
# Before node 2 removal, voterStates contains Nodes 1, 2, 3
Node 1: Leader, LEO 100
Node 2: Follower, LEO 90 <- HW
Node 3: Follower, LEO 85

# Leader processes removeNode record, voterStates contains Nodes 1, 3
Node 1: Leader, LEO 101
Node 2: Follower, LEO 90
Node 3: Follower, LEO 85 <- new HW{code}
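
To make the guard concrete, a standalone sketch (hypothetical names, not the KafkaRaft code): compute the majority offset over the followers plus the leader, and clamp so the HW never moves backwards.

{code:java}
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

public class HighWatermarkSketch {
    // endOffsets holds the leader's LEO plus every voter follower's LEO.
    static long updateHighWatermark(List<Long> endOffsets, long currentHw) {
        List<Long> sorted = new ArrayList<>(endOffsets);
        sorted.sort(Collections.reverseOrder());
        long candidate = sorted.get(sorted.size() / 2); // replicated to a majority
        return Math.max(candidate, currentHw);          // never decrement the HW
    }

    public static void main(String[] args) {
        // Second scenario above: node 2 removed, nodes 1 (leader) and 3 remain.
        System.out.println(updateHighWatermark(List.of(101L, 85L), 90L)); // 90, not 85
    }
}
{code}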

> Fix high-watermark calculation to not assume the leader is in the voter set
> ---
>
> Key: KAFKA-16530
> URL: https://issues.apache.org/jira/browse/KAFKA-16530
> Project: Kafka
>  Issue Type: Sub-task
>  Components: kraft
>Reporter: José Armando García Sancio
>Assignee: Alyssa Huang
>Priority: Major
> Fix For: 3.8.0
>
>
> When the leader is being removed from the voter set, the leader may not be in 
> the voter set. This means that kraft should not assume that the leader is 
> part of the high-watermark calculation.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (LOGGING-192) NoClassDefFoundError: org/apache/logging/log4j/spi/LoggerAdapter when using custom classloader

2024-05-23 Thread Gary D. Gregory (Jira)


[ 
https://issues.apache.org/jira/browse/LOGGING-192?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17849119#comment-17849119
 ] 

Gary D. Gregory commented on LOGGING-192:
-

CC [~pkarwasz]

> NoClassDefFoundError: org/apache/logging/log4j/spi/LoggerAdapter when using 
> custom classloader
> --
>
> Key: LOGGING-192
> URL: https://issues.apache.org/jira/browse/LOGGING-192
> Project: Commons Logging
>  Issue Type: Bug
>Affects Versions: 1.3.0, 1.3.1, 1.3.2
> Environment: This behavior was observed while running Adopt Open JDK 
> 11 and the latest version of Tomcat 9.  The behavior can be reproduced 
> outside of tomcat (see attached reproduction case).
>Reporter: Dave Dority
>Priority: Major
> Attachments: commons-logging-classloading-issue.zip
>
>
> If you have:
>  * A web application running in Tomcat which contains commons-logging:1.2
>  * That web application contains a custom classloader for loading a 
> separately distributed software component (whose dependencies will conflict 
> with the dependencies of the web application).
>  * The software component uses commons-logging:1.3.2
> When the web application attempts to use the software component, the code 
> [here|https://github.com/apache/commons-logging/blob/rel/commons-logging-1.3.2/src/main/java/org/apache/commons/logging/LogFactory.java#L918-L938]
>  looks for the presence of different logging implementation classes on the 
> thread context classloader's (TCCL) classpath to select an optimal 
> implementation.  It seems like what is happening is that the LogFactory class is 
> looking for an implementation class on the TCCL's classpath and then trying to 
> load the selected factory from the web application's custom classloader (the 
> loader for the instance of LogFactory that is running).  This is the result:
> {code:java}
> Exception in thread "main" java.lang.NoClassDefFoundError: 
> org/apache/logging/log4j/spi/LoggerAdapter
>         at java.base/java.lang.Class.forName0(Native Method)
>         at java.base/java.lang.Class.forName(Class.java:315)
>         at 
> org.apache.commons.logging.LogFactory.createFactory(LogFactory.java:419)
>         at 
> org.apache.commons.logging.LogFactory.lambda$newFactory$3(LogFactory.java:1431)
>         at java.base/java.security.AccessController.doPrivileged(Native 
> Method)
>         at 
> org.apache.commons.logging.LogFactory.newFactory(LogFactory.java:1431)
>         at 
> org.apache.commons.logging.LogFactory.getFactory(LogFactory.java:928)
>         at org.apache.commons.logging.LogFactory.getLog(LogFactory.java:987)
>         at 
> org.component.ClassLoadedComponent.(ClassLoadedComponent.java:7)
>         at 
> java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance0(Native
>  Method)
>         at 
> java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
>         at 
> java.base/jdk.internal.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>         at 
> java.base/java.lang.reflect.Constructor.newInstance(Constructor.java:490)
>         at java.base/java.lang.Class.newInstance(Class.java:584){code}
> This occurs when the web application has commons-logging:1.2 and the software 
> component has commons-logging:1.3.x.  This does not occur when both are using 
> version 1.2. 
> Unfortunately, changing the web application's version of commons-logging is 
> not something I can influence.
> An isolated reproduction case is attached.  It requires Java 11.  To run it:
>  * Unzip it to a directory.
>  * Run 
> {code:java}
> ./gradlew reproduceIssue{code}
>   



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (LOGGING-192) NoClassDefFoundError: org/apache/logging/log4j/spi/LoggerAdapter when using custom classloader

2024-05-23 Thread Gary D. Gregory (Jira)


 [ 
https://issues.apache.org/jira/browse/LOGGING-192?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gary D. Gregory updated LOGGING-192:

Fix Version/s: (was: 2.0)
   (was: 1.3.3)

> NoClassDefFoundError: org/apache/logging/log4j/spi/LoggerAdapter when using 
> custom classloader
> --
>
> Key: LOGGING-192
> URL: https://issues.apache.org/jira/browse/LOGGING-192
> Project: Commons Logging
>  Issue Type: Bug
>Affects Versions: 1.3.0, 1.3.1, 1.3.2
> Environment: This behavior was observed while running Adopt Open JDK 
> 11 and the latest version of Tomcat 9.  The behavior can be reproduced 
> outside of tomcat (see attached reproduction case).
>Reporter: Dave Dority
>Priority: Major
> Attachments: commons-logging-classloading-issue.zip
>
>
> If you have:
>  * A web application running in Tomcat which contains commons-logging:1.2
>  * That web application contains a custom classloader for loading a 
> separately distributed software component (whose dependencies will conflict 
> with the dependencies of the web application).
>  * The software component uses commons-logging:1.3.2
> When the web application attempts to use the software component, the code 
> [here|https://github.com/apache/commons-logging/blob/rel/commons-logging-1.3.2/src/main/java/org/apache/commons/logging/LogFactory.java#L918-L938]
>  looks for the presence of different logging implementation classes on the 
> thread context classloader's (TCCL) classpath to select an optimal 
> implementation.  It seems like what is happening is that the LogFactory class is 
> looking for an implementation class on the TCCL's classpath and then trying to 
> load the selected factory from the web application's custom classloader (the 
> loader for the instance of LogFactory that is running).  This is the result:
> {code:java}
> Exception in thread "main" java.lang.NoClassDefFoundError: 
> org/apache/logging/log4j/spi/LoggerAdapter
>         at java.base/java.lang.Class.forName0(Native Method)
>         at java.base/java.lang.Class.forName(Class.java:315)
>         at 
> org.apache.commons.logging.LogFactory.createFactory(LogFactory.java:419)
>         at 
> org.apache.commons.logging.LogFactory.lambda$newFactory$3(LogFactory.java:1431)
>         at java.base/java.security.AccessController.doPrivileged(Native 
> Method)
>         at 
> org.apache.commons.logging.LogFactory.newFactory(LogFactory.java:1431)
>         at 
> org.apache.commons.logging.LogFactory.getFactory(LogFactory.java:928)
>         at org.apache.commons.logging.LogFactory.getLog(LogFactory.java:987)
>         at 
> org.component.ClassLoadedComponent.(ClassLoadedComponent.java:7)
>         at 
> java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance0(Native
>  Method)
>         at 
> java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
>         at 
> java.base/jdk.internal.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>         at 
> java.base/java.lang.reflect.Constructor.newInstance(Constructor.java:490)
>         at java.base/java.lang.Class.newInstance(Class.java:584){code}
> This occurs when the web application has commons-logging:1.2 and the software 
> component has commons-logging:1.3.x.  This does not occur when both are using 
> version 1.2. 
> Unfortunately, changing the web application's version of commons-logging is 
> not something I can influence.
> An isolated reproduction case is attached.  It requires Java 11.  To run it:
>  * Unzip it to a directory.
>  * Run 
> {code:java}
> ./gradlew reproduceIssue{code}
>   



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-35424) Elasticsearch connector 8 supports SSL context

2024-05-23 Thread Mingliang Liu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-35424?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu updated FLINK-35424:
--
Parent: FLINK-34369
Issue Type: Sub-task  (was: Improvement)

> Elasticsearch connector 8 supports SSL context
> --
>
> Key: FLINK-35424
> URL: https://issues.apache.org/jira/browse/FLINK-35424
> Project: Flink
>  Issue Type: Sub-task
>  Components: Connectors / ElasticSearch
>Affects Versions: 1.17.1
>Reporter: Mingliang Liu
>Assignee: Mingliang Liu
>Priority: Major
>  Labels: pull-request-available
>
> In FLINK-34369, we added SSL support for the base Elasticsearch sink class 
> that is used by both Elasticsearch 6 and 7. The Elasticsearch 8 connector is 
> using the AsyncSink API and does not use the aforementioned base sink 
> class. It needs a separate change to support this feature.
> This is especially important for Elasticsearch 8, which is secure by 
> default. Meanwhile, it would be worthwhile to add integration tests for this 
> SSL context support.
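
For reference, a generic sketch of building an SSLContext from a truststore (plain JSSE, not the connector API being proposed; the truststore path and password below are placeholders):

{code:java}
import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.security.KeyStore;
import javax.net.ssl.SSLContext;
import javax.net.ssl.TrustManagerFactory;

public class SslContextSketch {
    static SSLContext fromTruststore(String path, char[] password) throws Exception {
        KeyStore trustStore = KeyStore.getInstance(KeyStore.getDefaultType());
        try (InputStream in = Files.newInputStream(Paths.get(path))) {
            trustStore.load(in, password);
        }
        TrustManagerFactory tmf =
            TrustManagerFactory.getInstance(TrustManagerFactory.getDefaultAlgorithm());
        tmf.init(trustStore);
        SSLContext ctx = SSLContext.getInstance("TLS");
        ctx.init(null, tmf.getTrustManagers(), null);
        return ctx;
    }

    public static void main(String[] args) throws Exception {
        // Placeholder path/password, for illustration only.
        SSLContext ctx = fromTruststore("/path/to/truststore.jks", "changeit".toCharArray());
        System.out.println(ctx.getProtocol()); // TLS
    }
}
{code}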



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (KAFKA-16516) Fix the controller node provider for broker to control channel

2024-05-23 Thread Colin McCabe (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-16516?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin McCabe reassigned KAFKA-16516:


Assignee: Colin McCabe  (was: José Armando García Sancio)

> Fix the controller node provider for broker to control channel
> --
>
> Key: KAFKA-16516
> URL: https://issues.apache.org/jira/browse/KAFKA-16516
> Project: Kafka
>  Issue Type: Sub-task
>  Components: core
>Reporter: José Armando García Sancio
>Assignee: Colin McCabe
>Priority: Major
> Fix For: 3.8.0
>
>
> The broker-to-controller channel gets the set of voters directly from the 
> static configuration. This needs to change so that the leader node comes 
> from the kraft client/manager.
> The code is in KafkaServer where it constructs the RaftControllerNodeProvider.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (LOGGING-192) NoClassDefFoundError: org/apache/logging/log4j/spi/LoggerAdapter when using custom classloader

2024-05-23 Thread Dave Dority (Jira)
Dave Dority created LOGGING-192:
---

 Summary: NoClassDefFoundError: 
org/apache/logging/log4j/spi/LoggerAdapter when using custom classloader
 Key: LOGGING-192
 URL: https://issues.apache.org/jira/browse/LOGGING-192
 Project: Commons Logging
  Issue Type: Bug
Affects Versions: 1.3.2, 1.3.1, 1.3.0
 Environment: This behavior was observed while running Adopt Open JDK 
11 and the latest version of Tomcat 9.  The behavior can be reproduced outside 
of tomcat (see attached reproduction case).
Reporter: Dave Dority
 Fix For: 2.0, 1.3.3
 Attachments: commons-logging-classloading-issue.zip

If you have:
 * A web application running in Tomcat which contains commons-logging:1.2
 * That web application contains a custom classloader for loading a separately 
distributed software component (whose dependencies will conflict with the 
dependencies of the web application).
 * The software component uses commons-logging:1.3.2

When the web application attempts to use the software component, the code 
[here|https://github.com/apache/commons-logging/blob/rel/commons-logging-1.3.2/src/main/java/org/apache/commons/logging/LogFactory.java#L918-L938]
 looks for the presence of different logging implementation classes on the 
thread context classloader's (TCCL) classpath to select an optimal 
implementation.  It seems like what is happening is that the LogFactory class is 
looking for an implementation class on the TCCL's classpath and then trying to load 
the selected factory from the web application's custom classloader (the loader 
for the instance of LogFactory that is running).  This is the result:


{code:java}
Exception in thread "main" java.lang.NoClassDefFoundError: 
org/apache/logging/log4j/spi/LoggerAdapter
        at java.base/java.lang.Class.forName0(Native Method)
        at java.base/java.lang.Class.forName(Class.java:315)
        at 
org.apache.commons.logging.LogFactory.createFactory(LogFactory.java:419)
        at 
org.apache.commons.logging.LogFactory.lambda$newFactory$3(LogFactory.java:1431)
        at java.base/java.security.AccessController.doPrivileged(Native Method)
        at 
org.apache.commons.logging.LogFactory.newFactory(LogFactory.java:1431)
        at org.apache.commons.logging.LogFactory.getFactory(LogFactory.java:928)
        at org.apache.commons.logging.LogFactory.getLog(LogFactory.java:987)
        at 
org.component.ClassLoadedComponent.(ClassLoadedComponent.java:7)
        at 
java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance0(Native
 Method)
        at 
java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
        at 
java.base/jdk.internal.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
        at 
java.base/java.lang.reflect.Constructor.newInstance(Constructor.java:490)
        at java.base/java.lang.Class.newInstance(Class.java:584){code}

This occurs when the web application has commons-logging:1.2 and the software 
component has commons-logging:1.3.x.  This does not occur when both are using 
version 1.2. 

Unfortunately, changing the web application's version of commons-logging is 
not something I can influence.

An isolated reproduction case is attached.  It requires Java 11.  To run it:
 * Unzip it to a directory.
 * Run 
{code:java}
./gradlew reproduceIssue{code}
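
As a minimal standalone illustration of that mismatch (hypothetical class name; the outcome depends on what each classpath actually contains), detection against the TCCL can succeed while loading through the component's own loader fails:

{code:java}
// Sketch: the adapter class is visible to the thread context classloader but
// not to the loader that instantiates the factory, surfacing as
// ClassNotFoundException / NoClassDefFoundError at runtime.
public class TcclMismatchSketch {
    public static void main(String[] args) throws Exception {
        String impl = "org.apache.logging.log4j.spi.LoggerAdapter";

        // Detection step: succeeds if log4j-api is on the TCCL's classpath.
        Class.forName(impl, false, Thread.currentThread().getContextClassLoader());

        // Load step: fails if the component's own loader cannot see log4j-api.
        Class.forName(impl, false, TcclMismatchSketch.class.getClassLoader());
    }
}
{code}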
  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (OAK-10831) Look at incorporating the following gists in Oak Run

2024-05-23 Thread Patrique Legault (Jira)
Patrique Legault created OAK-10831:
--

 Summary: Look at incorporating the following gists in Oak Run 
 Key: OAK-10831
 URL: https://issues.apache.org/jira/browse/OAK-10831
 Project: Jackrabbit Oak
  Issue Type: Improvement
  Components: oak-run
Reporter: Patrique Legault


The following scripts are used to help fix inconsistencies in the repository 
[1] / [2]. To help streamline and manage repository consistency checks, these 
should be included in oak-run as a Groovy script.

This will prevent dependencies on third-party scripts and allow for proper 
management of the scripts.

 

[1]

[https://gist.githubusercontent.com/stillalex/e7067bcb86c89bef66c8/raw/d7a5a9b839c3bb0ae5840252022f871fd38374d3/childCount.groovy]
 

 

[2]

[https://gist.githubusercontent.com/stillalex/43c49af065e3dd1fd5bf/raw/9e726a59f75b46e7b474f7ac763b0888d5a3f0c3/rmNode.groovy]
 



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (TIKA-4260) Add parse context to the fetcher interface in 3.x

2024-05-23 Thread Tim Allison (Jira)
Tim Allison created TIKA-4260:
-

 Summary: Add parse context to the fetcher interface in 3.x
 Key: TIKA-4260
 URL: https://issues.apache.org/jira/browse/TIKA-4260
 Project: Tika
  Issue Type: Task
Reporter: Tim Allison






--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (JCR-5065) Look at incorporating the following gists in Oak Run

2024-05-23 Thread Patrique Legault (Jira)


 [ 
https://issues.apache.org/jira/browse/JCR-5065?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Patrique Legault resolved JCR-5065.
---
Resolution: Invalid

> Look at incorporating the following gists in Oak Run 
> -
>
> Key: JCR-5065
> URL: https://issues.apache.org/jira/browse/JCR-5065
> Project: Jackrabbit Content Repository
>  Issue Type: Improvement
>  Components: core
>Reporter: Patrique Legault
>Priority: Major
>
> The following scripts are used to help fix inconsistencies in the repository 
> [1] / [2]. To help streamline and manage repository consistency checks, these 
> should be included in oak-run as a Groovy script.
>  
> This will prevent dependencies on third-party scripts and allow for proper 
> management of the scripts.
>  
> [1]
> [https://gist.githubusercontent.com/stillalex/e7067bcb86c89bef66c8/raw/d7a5a9b839c3bb0ae5840252022f871fd38374d3/childCount.groovy]
>  
>  
> [2]
> [https://gist.githubusercontent.com/stillalex/43c49af065e3dd1fd5bf/raw/9e726a59f75b46e7b474f7ac763b0888d5a3f0c3/rmNode.groovy]
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (JCR-5065) Look at incorporating the following gists in Oak Run

2024-05-23 Thread Patrique Legault (Jira)
Patrique Legault created JCR-5065:
-

 Summary: Look at incorporating the following gists in Oak Run 
 Key: JCR-5065
 URL: https://issues.apache.org/jira/browse/JCR-5065
 Project: Jackrabbit Content Repository
  Issue Type: Improvement
  Components: core
Reporter: Patrique Legault


The following scripts are used to help fix inconsistencies in the repository 
[1] / [2]. To help streamline and manage repository consistency checks, these 
should be included in oak-run as a Groovy script.

This will prevent dependencies on third-party scripts and allow for proper 
management of the scripts.

 

[1]

[https://gist.githubusercontent.com/stillalex/e7067bcb86c89bef66c8/raw/d7a5a9b839c3bb0ae5840252022f871fd38374d3/childCount.groovy]
 

 

[2]

[https://gist.githubusercontent.com/stillalex/43c49af065e3dd1fd5bf/raw/9e726a59f75b46e7b474f7ac763b0888d5a3f0c3/rmNode.groovy]
 



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (IGNITE-21225) Redundant lambda object allocation in ClockPageReplacementFlags#setFlag

2024-05-23 Thread Aleksey Plekhanov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-21225?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aleksey Plekhanov resolved IGNITE-21225.

Fix Version/s: 2.17
 Release Note: Fixed redundant lambda object allocation in 
ClockPageReplacementFlags#setFlag
   Resolution: Fixed

[~timonin.maksim], thanks for the review! Merged to master.

> Redundant lambda object allocation in ClockPageReplacementFlags#setFlag
> ---
>
> Key: IGNITE-21225
> URL: https://issues.apache.org/jira/browse/IGNITE-21225
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Aleksey Plekhanov
>Assignee: Aleksey Plekhanov
>Priority: Major
>  Labels: ise
> Fix For: 2.17
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Every time we call the {{ClockPageReplacementFlags#setFlag/clearFlag}} methods, 
> a new lambda object is created, since the lambda captures a variable from the 
> enclosing scope. The {{ClockPageReplacementFlags#setFlag}} method is called 
> every time a page is modified, so it's a relatively hot method and we should 
> avoid new object allocation here. 
> Here is the test to show redundant allocations: 
>  
> {code:java}
> /** */
> @Test
> public void testAllocation() {
> clockFlags = new ClockPageReplacementFlags(MAX_PAGES_CNT, 
> region.address());
> int cnt = 1_000_000;
> ThreadMXBean bean = (ThreadMXBean)ManagementFactory.getThreadMXBean();
> // Warmup.
> clockFlags.setFlag(0);
> long allocated0 = 
> bean.getThreadAllocatedBytes(Thread.currentThread().getId());
> for (int i = 0; i < cnt; i++)
> clockFlags.setFlag(i % MAX_PAGES_CNT);
> long allocated1 = 
> bean.getThreadAllocatedBytes(Thread.currentThread().getId());
> assertTrue("Too many bytes allocated: " + (allocated1 - allocated0), 
> allocated1 - allocated0 < cnt);
> } {code}
>  
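
For context, a standalone illustration of the capturing vs. non-capturing distinction the description relies on (plain Java, not the Ignite patch):

{code:java}
import java.util.function.IntUnaryOperator;

public class LambdaAllocationSketch {
    // Captures 'delta' from the enclosing scope: a fresh lambda object is
    // allocated on every call.
    static int applyCapturing(int x, int delta) {
        IntUnaryOperator op = v -> v + delta;
        return op.applyAsInt(x);
    }

    // Non-capturing lambda: the JVM reuses a single cached instance.
    private static final IntUnaryOperator INC = v -> v + 1;

    static int applyShared(int x) {
        return INC.applyAsInt(x);
    }

    public static void main(String[] args) {
        System.out.println(applyCapturing(41, 1)); // 42
        System.out.println(applyShared(41));       // 42
    }
}
{code}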



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (KAFKA-16832) LeaveGroup API for upgrading ConsumerGroup

2024-05-23 Thread Dongnuo Lyu (Jira)
Dongnuo Lyu created KAFKA-16832:
---

 Summary: LeaveGroup API for upgrading ConsumerGroup
 Key: KAFKA-16832
 URL: https://issues.apache.org/jira/browse/KAFKA-16832
 Project: Kafka
  Issue Type: Sub-task
Reporter: Dongnuo Lyu






--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (KAFKA-16832) LeaveGroup API for upgrading ConsumerGroup

2024-05-23 Thread Dongnuo Lyu (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-16832?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dongnuo Lyu reassigned KAFKA-16832:
---

Assignee: Dongnuo Lyu

> LeaveGroup API for upgrading ConsumerGroup
> --
>
> Key: KAFKA-16832
> URL: https://issues.apache.org/jira/browse/KAFKA-16832
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Dongnuo Lyu
>Assignee: Dongnuo Lyu
>Priority: Major
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (TIKA-4259) Decouple xml parser stuff from ParseContext

2024-05-23 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/TIKA-4259?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17849117#comment-17849117
 ] 

ASF GitHub Bot commented on TIKA-4259:
--

tballison opened a new pull request, #1775:
URL: https://github.com/apache/tika/pull/1775

   
   
   Thanks for your contribution to [Apache Tika](https://tika.apache.org/)! 
Your help is appreciated!
   
   Before opening the pull request, please verify that
   * there is an open issue on the [Tika issue 
tracker](https://issues.apache.org/jira/projects/TIKA) which describes the 
problem or the improvement. We cannot accept pull requests without an issue 
because the change wouldn't be listed in the release notes.
   * the issue ID (`TIKA-`)
 - is referenced in the title of the pull request
 - and placed in front of your commit messages surrounded by square 
brackets (`[TIKA-] Issue or pull request title`)
   * commits are squashed into a single one (or few commits for larger changes)
   * Tika is successfully built and unit tests pass by running `mvn clean test`
   * there should be no conflicts when merging the pull request branch into the 
*recent* `main` branch. If there are conflicts, please try to rebase the pull 
request branch on top of a freshly pulled `main` branch
   * if you add new module that downstream users will depend upon add it to 
relevant group in `tika-bom/pom.xml`.
   
   We will be able to faster integrate your pull request if these conditions 
are met. If you have any questions how to fix your problem or about using Tika 
in general, please sign up for the [Tika mailing 
list](http://tika.apache.org/mail-lists.html). Thanks!
   




> Decouple xml parser stuff from ParseContext
> ---
>
> Key: TIKA-4259
> URL: https://issues.apache.org/jira/browse/TIKA-4259
> Project: Tika
>  Issue Type: Task
>Reporter: Tim Allison
>Priority: Trivial
>
> ParseContext has some xmlreader convenience methods. We should move those to 
> XMLReaderUtils in 3.x to simplify ParseContext's api.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-21823) fix log message pageSize

2024-05-23 Thread Aleksey Plekhanov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-21823?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aleksey Plekhanov updated IGNITE-21823:
---
Ignite Flags: Release Notes Required  (was: Docs Required,Release Notes 
Required)

> fix log message pageSize
> 
>
> Key: IGNITE-21823
> URL: https://issues.apache.org/jira/browse/IGNITE-21823
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Aleksandr Nikolaev
>Assignee: Andrei Nadyktov
>Priority: Minor
>  Labels: ise, newbie
> Fix For: 2.17
>
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> If you do not specify pageSize in the configuration, then in the log we see a 
> message that pageSize = 0, which is not true.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (AMQ-9504) activemq multikahadb persistence adapter with topic wildcard filtered adapter and per destination filtered adapter causes broker failure on restart

2024-05-23 Thread Christopher L. Shannon (Jira)


[ 
https://issues.apache.org/jira/browse/AMQ-9504?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17849115#comment-17849115
 ] 

Christopher L. Shannon commented on AMQ-9504:
-

Yeah, it would be a problem because it's trying to create duplicate adapters 
touching the same directory, so it's definitely not going to work right. 
Turning off JMX as you have shown just pushes the problem downstream if you 
configure things that way, and will cause things to break. The vast majority of 
people do not use multiKahaDB, and those who do obviously don't configure it this 
way (most people that use the option to create a store per destination set up 
their config with that as the only filter); otherwise it would have been 
reported way before now.

Is there a reason why you can't build a custom version with the patch 
temporarily? One of the benefits of using open source like this is you can do 
whatever you want, you can build your own version at any point. I know you want 
to use an official release but building your own version with the fix is the 
fastest way for now.

Our plan is to do a 6.2.0 release in the next couple of weeks, after that we 
could look at doing a 5.18.5 release which would include this fix, but again, 
I'm not really sure on an exact timeline so if it's that high of a priority you 
will need to build your own version temporarily until the new version is out.

> activemq multikahadb persistence adapter with topic wildcard filtered adapter 
> and per destination filtered adapter causes broker failure on restart
> ---
>
> Key: AMQ-9504
> URL: https://issues.apache.org/jira/browse/AMQ-9504
> Project: ActiveMQ Classic
>  Issue Type: Bug
>  Components: Broker
>Affects Versions: 5.18.4, 6.1.2
>Reporter: ritesh adval
>Assignee: Christopher L. Shannon
>Priority: Major
> Fix For: 6.2.0, 5.18.5, 6.1.3
>
> Attachments: bugfix.patch, test.patch
>
>
> When using the Multi KahaDB persistence adapter, [the 
> documentation|https://activemq.apache.org/components/classic/documentation/kahadb]
>  shows that you can use multiple {{filteredPersistenceAdapters}}, but this 
> does not work if you have two filtered adapters where one uses a wildcard 
> match for topics (or even a specific topic) and the second is a 
> per-destination filtered adapter.
> The idea is to use one KahaDB instance for all the topics and a per-destination 
> KahaDB instance for all other destinations like queues. Something like this, 
> for illustration of the issue (see the test for more details; note that JMX 
> needs to be enabled):
> {code:xml}
> <broker xmlns="http://activemq.apache.org/schema/core">
>     <persistenceAdapter>
>         <mKahaDB directory="${activemq.data}/mKahaDB">
>             <filteredPersistenceAdapters>
>                 <!-- one shared store for all topics (wildcard filter) -->
>                 <filteredKahaDB topic=">">
>                     <persistenceAdapter>
>                         <kahaDB/>
>                     </persistenceAdapter>
>                 </filteredKahaDB>
>                 <!-- a store per destination for everything else -->
>                 <filteredKahaDB perDestination="true">
>                     <persistenceAdapter>
>                         <kahaDB/>
>                     </persistenceAdapter>
>                 </filteredKahaDB>
>             </filteredPersistenceAdapters>
>         </mKahaDB>
>     </persistenceAdapter>
> </broker>
>  {code}
> With this setting it works the first time the broker is started. But as 
> soon as at least one topic has been created which uses the wildcard filtered 
> adapter and you restart the broker, two KahaDBPersistenceAdapter instances are 
> created, one by the wildcard (">") topic filtered adapter and another by the 
> second per-destination filtered adapter, and so the second 
> KahaDBPersistenceAdapter fails with the below exception:
> {noformat}
> [INFO] Running org.apache.activemq.bugs.MultiKahaDBMultipleFilteredAdapterTest
> [ERROR] Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 16.20 
> s <<< FAILURE! – in 
> org.apache.activemq.bugs.MultiKahaDBMultipleFilteredAdapterTest
> [ERROR] 
> org.apache.activemq.bugs.MultiKahaDBMultipleFilteredAdapterTest.testTopicWildcardAndPerDestinationFilteredAdapter
>  – Time elapsed: 11.08 s <<< ERROR!
> javax.management.InstanceAlreadyExistsException: 
> org.apache.activemq:type=Broker,brokerName=localhost,service=PersistenceAdapter,instanceName=KahaDBPersistenceAdapter[/mnt/c/Users/ritesh.adval/work/external-repos/activemq/activemq-unit-tests/target/activemq-data/mKahaDB/topic#3a#2f#2f#3e_Index_/mnt/c/Users/ritesh.adval/work/external-repos/activemq/activemq-unit-tests/target/activemq-data/mKahaDB/topic#3a#2f#2f#3e|#3a#2f#2f#3e_Index_/mnt/c/Users/ritesh.adval/work/external-repos/activemq/activemq-unit-tests/target/activemq-da

[jira] [Created] (TIKA-4259) Decouple xml parser stuff from ParseContext

2024-05-23 Thread Tim Allison (Jira)
Tim Allison created TIKA-4259:
-

 Summary: Decouple xml parser stuff from ParseContext
 Key: TIKA-4259
 URL: https://issues.apache.org/jira/browse/TIKA-4259
 Project: Tika
  Issue Type: Task
Reporter: Tim Allison


ParseContext has some XMLReader convenience methods. We should move those to 
XMLReaderUtils in 3.x to simplify ParseContext's API.
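
For a rough sense of the direction, a minimal sketch; the class and method 
below are assumptions for illustration, not the actual Tika change, and it 
assumes {{XMLReaderUtils.getXMLReader()}} keeps its current signature:

{code:java}
import org.apache.tika.exception.TikaException;
import org.apache.tika.utils.XMLReaderUtils;
import org.xml.sax.XMLReader;

public final class XmlReaderMigrationSketch {
    // Hypothetical caller code after the move: instead of a convenience
    // method on ParseContext, callers obtain a reader (with Tika's
    // secure-processing defaults applied) straight from XMLReaderUtils.
    public static XMLReader secureReader() throws TikaException {
        return XMLReaderUtils.getXMLReader();
    }
}
{code}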



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Work logged] (GOBBLIN-2066) Add Dataset level metrics in Temporal

2024-05-23 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/GOBBLIN-2066?focusedWorklogId=920732=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-920732
 ]

ASF GitHub Bot logged work on GOBBLIN-2066:
---

Author: ASF GitHub Bot
Created on: 23/May/24 20:40
Start Date: 23/May/24 20:40
Worklog Time Spent: 10m 
  Work Description: Will-Lo merged PR #3912:
URL: https://github.com/apache/gobblin/pull/3912




Issue Time Tracking
---

Worklog Id: (was: 920732)
Time Spent: 2h  (was: 1h 50m)

> Add Dataset level metrics in Temporal
> -
>
> Key: GOBBLIN-2066
> URL: https://issues.apache.org/jira/browse/GOBBLIN-2066
> Project: Apache Gobblin
>  Issue Type: Improvement
>Reporter: William Lo
>Priority: Major
>  Time Spent: 2h
>  Remaining Estimate: 0h
>
> Temporal workflows can have added observability metrics so that workflows can 
> be understood at a glance, which is an improvement over the current Gobblin 
> system.
> We want to provide the following:
> 1. Emit dataset-level metrics on job metadata as a GobblinTrackingEvent (to 
> reach feature parity with existing Gobblin)
> 2. Enhance return types on Temporal so that users and service operators can 
> easily view metadata on the jobs being run, making it obvious when a job 
> actually commits work and to which datasets, without checking the logs.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (TIKA-4243) tika configuration overhaul

2024-05-23 Thread Tim Allison (Jira)


[ 
https://issues.apache.org/jira/browse/TIKA-4243?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17849114#comment-17849114
 ] 

Tim Allison commented on TIKA-4243:
---

I'm going to start working on PRs that will be generally helpful for the above 
and will still be useful even if we choose a different direction. I'll hold 
off on the core work for a bit in case there are objections or better ways 
forward.

> tika configuration overhaul
> ---
>
> Key: TIKA-4243
> URL: https://issues.apache.org/jira/browse/TIKA-4243
> Project: Tika
>  Issue Type: New Feature
>  Components: config
>Affects Versions: 3.0.0
>Reporter: Nicholas DiPiazza
>Priority: Major
>
> In 3.0.0 when dealing with Tika, it would greatly help to have a typed 
> configuration schema. 
> In 3.x can we remove the old way of doing configs and replace it with JSON 
> Schema?
> JSON Schema can be converted to POJOs using a Maven plugin: 
> [https://github.com/joelittlejohn/jsonschema2pojo]
> This automatically creates a Java POJO model we can use for the configs. 
> This allows the legacy tika-config XML to be read and converted to the new 
> POJOs easily using an XML mapper, so users don't have to switch to JSON 
> configurations if they do not want to.
> When complete, configurations can be set as XML, JSON or YAML:
> tika-config.xml
> tika-config.json
> tika-config.yaml
> Replace all instances of tika config annotations that used the old syntax 
> with the POJO model serialized from the XML/JSON/YAML.
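
As a rough illustration of the XML-mapper idea, a minimal sketch assuming 
Jackson's {{ObjectMapper}}/{{XmlMapper}} as the mapping layer; {{ParserConfig}} 
and its fields are hypothetical stand-ins for a generated POJO:

{code:java}
import com.fasterxml.jackson.databind.ObjectMapper;
import com.fasterxml.jackson.dataformat.xml.XmlMapper;

public class ConfigMapperSketch {
    // Hypothetical POJO of the kind jsonschema2pojo would generate.
    public static class ParserConfig {
        public String name;
        public int maxStringLength;
    }

    public static void main(String[] args) throws Exception {
        String xml = "<parserConfig><name>pdf</name>"
            + "<maxStringLength>100000</maxStringLength></parserConfig>";
        String json = "{\"name\":\"pdf\",\"maxStringLength\":100000}";

        // Legacy XML and new JSON both deserialize into the same POJO model.
        ParserConfig fromXml = new XmlMapper().readValue(xml, ParserConfig.class);
        ParserConfig fromJson = new ObjectMapper().readValue(json, ParserConfig.class);

        System.out.println(fromXml.name + " / " + fromJson.maxStringLength);
    }
}
{code}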



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Comment Edited] (ARTEMIS-4306) Add authn/z metrics

2024-05-23 Thread Mike Artz (Jira)


[ 
https://issues.apache.org/jira/browse/ARTEMIS-4306?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17849099#comment-17849099
 ] 

Mike Artz edited comment on ARTEMIS-4306 at 5/23/24 8:39 PM:
-

 

[~jbertram] - I made this [PR for micrometer to allow prefixing in the 
CacheMeterBinder|https://github.com/micrometer-metrics/micrometer/pull/4048] 
so that we could, for example, add the *"artemis.authentication."* prefix, but 
the PR _kind of_ got stuck and was eventually closed. Then I had a kid and 
started a new job. 

 

Coming back to this now: if we use the 
[CacheMeterBinder|https://github.com/micrometer-metrics/micrometer/blob/37883fa6fb4a6d3f83d01f6b53101cc9f52b3f78/micrometer-core/src/main/java/io/micrometer/core/instrument/binder/cache/CacheMeterBinder.java], 
would it be OK to not have *"artemis.authentication."* prefixes and instead 
add *authentication* and *authorization* as tags, i.e. {{Tag("cacheName", 
"authentication")}} and {{Tag("cacheName", "authorization")}}? It seems this 
might also be more of a standard for Micrometer users.

 

However, now that we are using the 
[CaffeineCache|https://github.com/ben-manes/caffeine/blob/b4cedbc411130b8e78c51effdd527756bdf1ff55/caffeine/src/main/java/com/github/benmanes/caffeine/cache/Cache.java],
 I see there are two concrete classes - 
[CaffeineCacheMetrics|https://github.com/micrometer-metrics/micrometer/blob/37883fa6fb4a6d3f83d01f6b53101cc9f52b3f78/micrometer-core/src/main/java/io/micrometer/core/instrument/binder/cache/CaffeineCacheMetrics.java]
 and 
[CaffeineStatsCounter|https://github.com/micrometer-metrics/micrometer/blob/37883fa6fb4a6d3f83d01f6b53101cc9f52b3f78/micrometer-core/src/main/java/io/micrometer/core/instrument/binder/cache/CaffeineStatsCounter.java].
 Both of these look ok at first (more or less), but I am stuck at this 
decision. 
[CaffeineStatsCounter|https://github.com/micrometer-metrics/micrometer/blob/37883fa6fb4a6d3f83d01f6b53101cc9f52b3f78/micrometer-core/src/main/java/io/micrometer/core/instrument/binder/cache/CaffeineStatsCounter.java]
 offers more detailed statistics than 
[CaffeineCacheMetrics|https://github.com/micrometer-metrics/micrometer/blob/37883fa6fb4a6d3f83d01f6b53101cc9f52b3f78/micrometer-core/src/main/java/io/micrometer/core/instrument/binder/cache/CaffeineCacheMetrics.java]
 and it allows name prefixes. However, 
[CaffeineStatsCounter|https://github.com/micrometer-metrics/micrometer/blob/37883fa6fb4a6d3f83d01f6b53101cc9f52b3f78/micrometer-core/src/main/java/io/micrometer/core/instrument/binder/cache/CaffeineStatsCounter.java]
 needs to have the meterRegistry already there _at the time the cache is 
built_. There is no direct support to add a 
[CaffeineStatsCounter|https://github.com/micrometer-metrics/micrometer/blob/37883fa6fb4a6d3f83d01f6b53101cc9f52b3f78/micrometer-core/src/main/java/io/micrometer/core/instrument/binder/cache/CaffeineStatsCounter.java]
 to a Caffeine Cache after the cache is built. So if you use the 
[CaffeineStatsCounter|https://github.com/micrometer-metrics/micrometer/blob/37883fa6fb4a6d3f83d01f6b53101cc9f52b3f78/micrometer-core/src/main/java/io/micrometer/core/instrument/binder/cache/CaffeineStatsCounter.java]
 you might have to do something like call the 
[MetricsConfiguration|https://github.com/apache/activemq-artemis/blob/c47713454caeece82df29a0a7fd4a2a39000576b/artemis-server/src/main/java/org/apache/activemq/artemis/core/config/MetricsConfiguration.java]
 from the 
*[SecurityStoreImpl|https://github.com/apache/activemq-artemis/blob/main/artemis-server/src/main/java/org/apache/activemq/artemis/core/security/impl/SecurityStoreImpl.java#L105-L108]*
 like
{code:java}
MeterRegistry registry = metricsConfiguration.getPlugin().getRegistry();

authenticationCache = Caffeine.newBuilder()
   .maximumSize(authenticationCacheSize)
   .expireAfterWrite(invalidationInterval, TimeUnit.MILLISECONDS)
   .recordStats(() -> new CaffeineStatsCounter(registry, "authentication"))
   .build();{code}
And that doesn't make sense, because *MetricsConfiguration* is applied after 
*SecurityStoreImpl* is created. So this doesn't seem like the best option.

*Other Options*
 * Use 
[CaffeineStatsCounter|https://github.com/micrometer-metrics/micrometer/blob/37883fa6fb4a6d3f83d01f6b53101cc9f52b3f78/micrometer-core/src/main/java/io/micrometer/core/instrument/binder/cache/CaffeineStatsCounter.java]
 and initialize the authn cache (and authz cache) from the MetricsManager 
class _(couples the SecurityStoreImpl to the MetricsManager, but maybe the 
2nd best option)_
 * Use 
[CaffeineCacheMetrics|https://github.com/micrometer-metrics/micrometer/blob/37883fa6fb4a6d3f83d01f6b53101cc9f52b3f78/micrometer-core/src/main/java/io/micrometer/core/instrument/binder/cache/CaffeineCacheMetrics.java]
 _(seems like the best option; see the sketch below)_
 * Create another concrete class (or maybe even extend the concrete class 
[CaffeineCacheMetrics|https://github.com/micrometer-metrics/micrometer/blob/37883fa6fb4a6d3f83d01f6b53101cc9f52b3f78/micrometer-core/src/main/java/io/micrometer/core/instrument/binder/cache/CaffeineCacheMetrics.java])
 that does as much as 
[CaffeineStatsCounter|https://github.com/micrometer-metrics/micrometer/blob/37883fa6fb4a6d3f83d01f6b53101cc9f52b3f78/micrometer-core/src/main/java/io/micrometer/core/instrument/binder/cache/CaffeineStatsCounter.java]
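
For reference, a minimal sketch of the {{CaffeineCacheMetrics}} option; the 
registry, cache settings, and names below are stand-ins, not the actual 
Artemis wiring:

{code:java}
import com.github.benmanes.caffeine.cache.Cache;
import com.github.benmanes.caffeine.cache.Caffeine;
import io.micrometer.core.instrument.MeterRegistry;
import io.micrometer.core.instrument.Tags;
import io.micrometer.core.instrument.binder.cache.CaffeineCacheMetrics;
import io.micrometer.core.instrument.simple.SimpleMeterRegistry;

public class CacheMetricsSketch {
    public static void main(String[] args) {
        MeterRegistry registry = new SimpleMeterRegistry();

        // recordStats() must be enabled for hit/miss counters to be populated.
        Cache<String, Boolean> authenticationCache = Caffeine.newBuilder()
            .maximumSize(1000)
            .recordStats()
            .build();

        // Unlike CaffeineStatsCounter, this can bind AFTER the cache is built,
        // so SecurityStoreImpl would not need the registry at construction time.
        CaffeineCacheMetrics.monitor(registry, authenticationCache,
            "authentication", Tags.of("cacheName", "authentication"));
    }
}
{code}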

[jira] [Commented] (OFBIZ-12765) Improvement to createPartyRelationship service

2024-05-23 Thread Michael Brohl (Jira)


[ 
https://issues.apache.org/jira/browse/OFBIZ-12765?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17849113#comment-17849113
 ] 

Michael Brohl commented on OFBIZ-12765:
---

[~thahn] sorry, I meant to address this to [~cshan] 

> Improvement to createPartyRelationship service 
> ---
>
> Key: OFBIZ-12765
> URL: https://issues.apache.org/jira/browse/OFBIZ-12765
> Project: OFBiz
>  Issue Type: Improvement
>  Components: party
>Reporter: Chenghu Shan
>Assignee: Michael Brohl
>Priority: Minor
>
> Currently, the createPartyRelationship service does not allow the creation of 
> a new PartyRelationship between the same parties until the thruDate has 
> passed. This also disallows the creation of new PartyRelationships between 
> the same parties beyond that thruDate.
> This improvement checks for time interval conflicts and allows the creation 
> of a different PartyRelationship between the same parties before the 
> thruDate has passed.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (OFBIZ-12765) Improvement to createPartyRelationship service

2024-05-23 Thread Michael Brohl (Jira)


[ 
https://issues.apache.org/jira/browse/OFBIZ-12765?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17849112#comment-17849112
 ] 

Michael Brohl commented on OFBIZ-12765:
---

[~thahn] can you please resolve the PR conflicts and provide a new PR, thanks!

> Improvement to createPartyRelationship service 
> ---
>
> Key: OFBIZ-12765
> URL: https://issues.apache.org/jira/browse/OFBIZ-12765
> Project: OFBiz
>  Issue Type: Improvement
>  Components: party
>Reporter: Chenghu Shan
>Assignee: Michael Brohl
>Priority: Minor
>
> Currently, the createPartyRelationship service does not allow the creation of 
> a new PartyRelationship between the same parties until the thruDate has 
> passed. This also disallows the creation of new PartyRelationships between 
> the same parties beyond that thruDate.
> This improvement checks for time interval conflicts and allows the creation 
> of a different PartyRelationship between the same parties before the 
> thruDate has passed.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (HDDS-10750) Intermittent fork timeout while stopping Ratis server

2024-05-23 Thread Chung En Lee (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-10750?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17849111#comment-17849111
 ] 

Chung En Lee commented on HDDS-10750:
-

[~adoroszlai], I think it happens when closing the LogAppender from the NEW 
state. I created a Jira issue, RATIS-2100, for this.

> Intermittent fork timeout while stopping Ratis server
> -
>
> Key: HDDS-10750
> URL: https://issues.apache.org/jira/browse/HDDS-10750
> Project: Apache Ozone
>  Issue Type: Sub-task
>Reporter: Attila Doroszlai
>Priority: Critical
> Attachments: 2024-04-21T16-53-06_683-jvmRun1.dump, 
> 2024-05-03T11-31-12_561-jvmRun1.dump, 
> org.apache.hadoop.fs.ozone.TestOzoneFileChecksum-output.txt, 
> org.apache.hadoop.hdds.scm.TestSCMInstallSnapshot-output.txt, 
> org.apache.hadoop.ozone.client.rpc.TestECKeyOutputStreamWithZeroCopy-output.txt,
>  org.apache.hadoop.ozone.container.TestECContainerRecovery-output-1.txt, 
> org.apache.hadoop.ozone.container.TestECContainerRecovery-output.txt, 
> org.apache.hadoop.ozone.om.TestOzoneManagerPrepare-output.txt
>
>
> {code:title=https://github.com/adoroszlai/ozone-build-results/blob/master/2024/04/21/30803/it-client/output.log}
> [INFO] Running 
> org.apache.hadoop.ozone.client.rpc.TestECKeyOutputStreamWithZeroCopy
> [INFO] 
> [INFO] Results:
> ...
> ... There was a timeout or other error in the fork
> {code}
> {code}
> "main" 
>java.lang.Thread.State: WAITING
> at java.lang.Object.wait(Native Method)
> at java.util.concurrent.ForkJoinTask.doInvoke(ForkJoinTask.java:405)
> ...
> at 
> org.apache.hadoop.ozone.MiniOzoneClusterImpl.stopDatanodes(MiniOzoneClusterImpl.java:473)
> at 
> org.apache.hadoop.ozone.MiniOzoneClusterImpl.stop(MiniOzoneClusterImpl.java:414)
> at 
> org.apache.hadoop.ozone.MiniOzoneClusterImpl.shutdown(MiniOzoneClusterImpl.java:400)
> at 
> org.apache.hadoop.ozone.client.rpc.AbstractTestECKeyOutputStream.shutdown(AbstractTestECKeyOutputStream.java:160)
> "ForkJoinPool.commonPool-worker-7" 
>java.lang.Thread.State: TIMED_WAITING
> ...
> at 
> java.util.concurrent.ThreadPoolExecutor.awaitTermination(ThreadPoolExecutor.java:1475)
> at 
> org.apache.ratis.util.ConcurrentUtils.shutdownAndWait(ConcurrentUtils.java:144)
> at 
> org.apache.ratis.util.ConcurrentUtils.shutdownAndWait(ConcurrentUtils.java:136)
> at 
> org.apache.ratis.server.impl.RaftServerProxy.lambda$close$9(RaftServerProxy.java:438)
> ...
> at 
> org.apache.ratis.util.LifeCycle.checkStateAndClose(LifeCycle.java:304)
> at 
> org.apache.ratis.server.impl.RaftServerProxy.close(RaftServerProxy.java:415)
> at 
> org.apache.hadoop.ozone.container.common.transport.server.ratis.XceiverServerRatis.stop(XceiverServerRatis.java:603)
> at 
> org.apache.hadoop.ozone.container.ozoneimpl.OzoneContainer.stop(OzoneContainer.java:484)
> at 
> org.apache.hadoop.ozone.container.common.statemachine.DatanodeStateMachine.close(DatanodeStateMachine.java:447)
> at 
> org.apache.hadoop.ozone.container.common.statemachine.DatanodeStateMachine.stopDaemon(DatanodeStateMachine.java:637)
> at 
> org.apache.hadoop.ozone.HddsDatanodeService.stop(HddsDatanodeService.java:550)
> at 
> org.apache.hadoop.ozone.MiniOzoneClusterImpl.stopDatanode(MiniOzoneClusterImpl.java:479)
> at 
> org.apache.hadoop.ozone.MiniOzoneClusterImpl$$Lambda$2077/645273703.accept(Unknown
>  Source)
> "c7edee5d-bf3c-45a7-a783-e11562f208dc-impl-thread2" 
>java.lang.Thread.State: WAITING
> ...
> at 
> java.util.concurrent.CompletableFuture.join(CompletableFuture.java:1947)
> at 
> org.apache.ratis.server.impl.RaftServerImpl.lambda$close$3(RaftServerImpl.java:543)
> at 
> org.apache.ratis.server.impl.RaftServerImpl$$Lambda$1925/263251010.run(Unknown
>  Source)
> at 
> org.apache.ratis.util.LifeCycle.lambda$checkStateAndClose$7(LifeCycle.java:306)
> at org.apache.ratis.util.LifeCycle$$Lambda$1204/655954062.get(Unknown 
> Source)
> at 
> org.apache.ratis.util.LifeCycle.checkStateAndClose(LifeCycle.java:326)
> at 
> org.apache.ratis.util.LifeCycle.checkStateAndClose(LifeCycle.java:304)
> at 
> org.apache.ratis.server.impl.RaftServerImpl.close(RaftServerImpl.java:525)
> {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: issues-unsubscr...@ozone.apache.org
For additional commands, e-mail: issues-h...@ozone.apache.org



[jira] [Commented] (OFBIZ-12815) EntityUtil getProperty Methods dont use entity

2024-05-23 Thread Michael Brohl (Jira)


[ 
https://issues.apache.org/jira/browse/OFBIZ-12815?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17849110#comment-17849110
 ] 

Michael Brohl commented on OFBIZ-12815:
---

[~thahn] please check the comments in the pull requests, thanks!

> EntityUtil getProperty Methods dont use entity
> --
>
> Key: OFBIZ-12815
> URL: https://issues.apache.org/jira/browse/OFBIZ-12815
> Project: OFBiz
>  Issue Type: Improvement
>  Components: framework/entity
>Affects Versions: 18.12.07
>Reporter: Tobias Hahn
>Assignee: Michael Brohl
>Priority: Minor
> Fix For: Upcoming Branch
>
>
> The getProperty methods in EntityUtilProperties don't use the entity at all. 
> All of the getProperty methods simply delegate to UtilProperties, and 
> therefore no configuration at runtime is possible. New methods have been 
> written so the entity usage is now functional.
> Due to a wrong commit description/title, PR #634 was closed. I opened a new 
> PR #635. 
> Also, I will adjust the code with an upcoming commit, thanks to Gil who 
> commented on PR #634.
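
For context, a rough sketch of the entity-aware lookup pattern the description 
asks for, assuming the standard SystemProperty entity; the exact implementation 
in PR #635 may differ:

{code:java}
import org.apache.ofbiz.base.util.UtilMisc;
import org.apache.ofbiz.base.util.UtilProperties;
import org.apache.ofbiz.entity.Delegator;
import org.apache.ofbiz.entity.GenericValue;

public class EntityPropertySketch {
    // Resolve a property from the SystemProperty entity first, falling back
    // to the .properties file only when no entity value is configured. This
    // is what makes runtime configuration possible.
    public static String getProperty(String resource, String name, Delegator delegator) {
        try {
            GenericValue prop = delegator.findOne("SystemProperty",
                UtilMisc.toMap("systemResourceId", resource,
                               "systemPropertyId", name), true);
            if (prop != null && prop.getString("systemPropertyValue") != null) {
                return prop.getString("systemPropertyValue");
            }
        } catch (Exception e) {
            // fall through to the file-based lookup
        }
        return UtilProperties.getPropertyValue(resource, name);
    }
}
{code}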



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (RATIS-2100) The `closeFuture` never completed while closing from the `NEW` state.

2024-05-23 Thread Chung En Lee (Jira)
Chung En Lee created RATIS-2100:
---

 Summary: The `closeFuture` never completed while closing from the 
`NEW` state.
 Key: RATIS-2100
 URL: https://issues.apache.org/jira/browse/RATIS-2100
 Project: Ratis
  Issue Type: Bug
Reporter: Chung En Lee


Currently, the {{closeFuture}} only completes after the {{LogAppenderDaemon}} 
has started. However, when closing from the {{NEW}} state, the transition is 
{{NEW}} -> {{CLOSED}} and the {{LogAppenderDaemon}} never starts, so the 
{{closeFuture}} is never completed.
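
A generic sketch of the failure mode and the obvious fix; the class and field 
names below are illustrative, not the actual Ratis code:

{code:java}
import java.util.concurrent.CompletableFuture;

// Illustrative only: if the close future is completed solely from the
// daemon's run loop, closing an instance whose daemon never started would
// hang forever.
class LifecycleSketch {
    enum State { NEW, RUNNING, CLOSED }

    private volatile State state = State.NEW;
    private final CompletableFuture<Void> closeFuture = new CompletableFuture<>();

    CompletableFuture<Void> close() {
        if (state == State.NEW) {
            // NEW -> CLOSED: the daemon never ran, so nothing else will ever
            // complete the future; complete it here directly.
            state = State.CLOSED;
            closeFuture.complete(null);
        } else {
            // RUNNING -> CLOSED: real code would signal the daemon to exit,
            // and the daemon completes closeFuture when it does.
            state = State.CLOSED;
        }
        return closeFuture;
    }
}
{code}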



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (NIFI-12226) Maven plugin: Add flag to skip doc generation

2024-05-23 Thread Joe Witt (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-12226?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joe Witt updated NIFI-12226:

Fix Version/s: nifi-nar-maven-plugin-2.0.0

> Maven plugin: Add flag to skip doc generation
> -
>
> Key: NIFI-12226
> URL: https://issues.apache.org/jira/browse/NIFI-12226
> Project: Apache NiFi
>  Issue Type: Improvement
>Reporter: Nicolò Boschi
>Priority: Minor
> Fix For: nifi-nar-maven-plugin-2.0.0
>
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> In [LangStream/langstream|https://github.com/LangStream/langstream] we use 
> this plugin for producing NAR files. The NARs are then ingested by the 
> LangStream runtime, and the documentation is not needed.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (NIFI-12226) Maven plugin: Add flag to skip doc generation

2024-05-23 Thread Joe Witt (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-12226?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joe Witt updated NIFI-12226:

Fix Version/s: (was: nifi-nar-maven-plugin-2.0.0)

> Maven plugin: Add flag to skip doc generation
> -
>
> Key: NIFI-12226
> URL: https://issues.apache.org/jira/browse/NIFI-12226
> Project: Apache NiFi
>  Issue Type: Improvement
>Reporter: Nicolò Boschi
>Priority: Minor
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> In [LangStream/langstream|https://github.com/LangStream/langstream] we use 
> this plugin for producing NAR files. The NARs are then ingested by the 
> LangStream runtime, and the documentation is not needed.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (SOLR-16505) Switch UpdateShardHandler.getRecoveryOnlyHttpClient to Jetty HTTP2

2024-05-23 Thread David Smiley (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-16505?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17849109#comment-17849109
 ] 

David Smiley commented on SOLR-16505:
-

Can this be resolved again? If so, please do it, Sanjay.

> Switch UpdateShardHandler.getRecoveryOnlyHttpClient to Jetty HTTP2
> --
>
> Key: SOLR-16505
> URL: https://issues.apache.org/jira/browse/SOLR-16505
> Project: Solr
>  Issue Type: Sub-task
>Reporter: David Smiley
>Assignee: David Smiley
>Priority: Major
> Fix For: 9.7
>
>  Time Spent: 9h 10m
>  Remaining Estimate: 0h
>
> This method and its callers (only RecoveryStrategy) should be converted to a 
> Jetty HTTP2 client.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: issues-unsubscr...@solr.apache.org
For additional commands, e-mail: issues-h...@solr.apache.org



[jira] [Updated] (CASSANDRA-19658) Test failure: replace_address_test.py::TestReplaceAddress::test_restart_failed_replace

2024-05-23 Thread Brandon Williams (Jira)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-19658?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brandon Williams updated CASSANDRA-19658:
-
Test and Documentation Plan: run CI
 Status: Patch Available  (was: Open)

> Test failure: 
> replace_address_test.py::TestReplaceAddress::test_restart_failed_replace
> --
>
> Key: CASSANDRA-19658
> URL: https://issues.apache.org/jira/browse/CASSANDRA-19658
> Project: Cassandra
>  Issue Type: Bug
>  Components: Cluster/Membership
>Reporter: Brandon Williams
>Assignee: Brandon Williams
>Priority: Normal
> Fix For: 4.0.x, 4.1.x, 5.0.x
>
>
> This can be seen failing in butler: 
> https://butler.cassandra.apache.org/#/ci/upstream/workflow/Cassandra-5.0/failure/replace_address_test/TestReplaceAddress/test_restart_failed_replace
> {noformat}
> ccmlib.node.TimeoutError: 14 May 2024 18:19:08 [node1] after 120.13/120 
> seconds Missing: ['FatClient /127.0.0.4:7000 has been silent for 3ms, 
> removing from gossip'] not found in system.log:
> {noformat} 



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (TIKA-4243) tika configuration overhaul

2024-05-23 Thread Tim Allison (Jira)


[ 
https://issues.apache.org/jira/browse/TIKA-4243?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17849108#comment-17849108
 ] 

Tim Allison commented on TIKA-4243:
---

The downsides we see:
a) if there's agreement to add jackson-annotations to tika-core, we add a few 
KB to tika-core
b) we're at risk of having jackson-annotations sprinkled throughout our 
codebase on the XConfig classes, but this is basically where we have our own 
@Field annotations now. So break even? (See the sketch below.)
c) Customized classes that need to be passed via the ParseContext will need to 
be serializable to be used in tika-server, tika-pipes, etc. - anything that 
allows for configuration.
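
For point b), a sketch of what that sprinkling would look like; the config 
class and its fields are hypothetical, and only the Jackson annotation itself 
is the dependency being debated:

{code:java}
import com.fasterxml.jackson.annotation.JsonProperty;

// Hypothetical config class: the Jackson annotation sits roughly where a
// Tika @Field annotation would sit today, hence the "break even" argument.
public class OcrConfigSketch {
    @JsonProperty("language")
    private String language = "eng";

    @JsonProperty("timeoutSeconds")
    private int timeoutSeconds = 120;

    public String getLanguage() { return language; }
    public int getTimeoutSeconds() { return timeoutSeconds; }
}
{code}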

> tika configuration overhaul
> ---
>
> Key: TIKA-4243
> URL: https://issues.apache.org/jira/browse/TIKA-4243
> Project: Tika
>  Issue Type: New Feature
>  Components: config
>Affects Versions: 3.0.0
>Reporter: Nicholas DiPiazza
>Priority: Major
>
> In 3.0.0 when dealing with Tika, it would greatly help to have a typed 
> configuration schema. 
> In 3.x can we remove the old way of doing configs and replace it with JSON 
> Schema?
> JSON Schema can be converted to POJOs using a Maven plugin: 
> [https://github.com/joelittlejohn/jsonschema2pojo]
> This automatically creates a Java POJO model we can use for the configs. 
> This allows the legacy tika-config XML to be read and converted to the new 
> POJOs easily using an XML mapper, so users don't have to switch to JSON 
> configurations if they do not want to.
> When complete, configurations can be set as XML, JSON or YAML:
> tika-config.xml
> tika-config.json
> tika-config.yaml
> Replace all instances of tika config annotations that used the old syntax 
> with the POJO model serialized from the XML/JSON/YAML.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Closed] (OFBIZ-12829) Improvements for ContentWorker methods and view-entities

2024-05-23 Thread Michael Brohl (Jira)


 [ 
https://issues.apache.org/jira/browse/OFBIZ-12829?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Brohl closed OFBIZ-12829.
-
Fix Version/s: Upcoming Branch
   Resolution: Implemented

Thanks [~cshan] !

> Improvements for ContentWorker methods and view-entities
> 
>
> Key: OFBIZ-12829
> URL: https://issues.apache.org/jira/browse/OFBIZ-12829
> Project: OFBiz
>  Issue Type: Improvement
>  Components: content
>Affects Versions: Upcoming Branch
>Reporter: Chenghu Shan
>Assignee: Michael Brohl
>Priority: Minor
> Fix For: Upcoming Branch
>
>
> Adds additional methods to ContentWorker:
>  * New method findAlternateLocalContents to find all alternate locale 
> contents instead of just one specific content.
>  * Overloaded methods of findAlternateLocalContents and 
> findAlternateLocalContent to enable/disable cache use.
>  * These methods are no longer case sensitive when comparing localeStrings.
> Changes to view-entities ProductContentAndInfo and 
> ProductCategoryContentAndInfo:
>  * Both now use an outer join instead of inner join between DataResource and 
> Content, because there may be a Content object without a DataResource for its 
> locale but with alternate locale content objects associated to it.
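
A tiny sketch of the case-insensitive locale match described above, using 
hypothetical types rather than the actual ContentWorker code (records require 
Java 16+):

{code:java}
import java.util.List;
import java.util.Locale;
import java.util.stream.Collectors;

public class AltLocaleSketch {
    record AltContent(String contentId, String localeString) {}

    // Return ALL alternate-locale contents matching the requested locale,
    // comparing localeStrings case-insensitively ("en_US" matches "en_us").
    static List<AltContent> findAlternateLocaleContents(List<AltContent> all,
                                                        Locale locale) {
        String wanted = locale.toString();
        return all.stream()
            .filter(c -> wanted.equalsIgnoreCase(c.localeString()))
            .collect(Collectors.toList());
    }
}
{code}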



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (OFBIZ-12829) Improvements for ContentWorker methods and view-entities

2024-05-23 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/OFBIZ-12829?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17849106#comment-17849106
 ] 

ASF subversion and git services commented on OFBIZ-12829:
-

Commit e7420fe4cf40f21e03bba3566d9b99d63a6e79a5 in ofbiz-framework's branch 
refs/heads/trunk from Cheng Hu Shan
[ https://gitbox.apache.org/repos/asf?p=ofbiz-framework.git;h=e7420fe4cf ]

Improvements for ContentWorker methods and view-entities (OFBIZ-12829)

> Improvements for ContentWorker methods and view-entities
> 
>
> Key: OFBIZ-12829
> URL: https://issues.apache.org/jira/browse/OFBIZ-12829
> Project: OFBiz
>  Issue Type: Improvement
>  Components: content
>Affects Versions: Upcoming Branch
>Reporter: Chenghu Shan
>Assignee: Michael Brohl
>Priority: Minor
>
> Adds additional methods to ContentWorker:
>  * New method findAlternateLocalContents to find all alternate locale 
> contents instead of just one specific content.
>  * Overloaded methods of findAlternateLocalContents and 
> findAlternateLocalContent to enable/disable cache use.
>  * These methods are no longer case sensitive when comparing localeStrings.
> Changes to view-entities ProductContentAndInfo and 
> ProductCategoryContentAndInfo:
>  * Both now use an outer join instead of inner join between DataResource and 
> Content, because there may be a Content object without a DataResource for its 
> locale but with alternate locale content objects associated to it.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Closed] (OFBIZ-12843) Refactoring WebSiteProperties.java

2024-05-23 Thread Michael Brohl (Jira)


 [ 
https://issues.apache.org/jira/browse/OFBIZ-12843?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Brohl closed OFBIZ-12843.
-
Fix Version/s: Upcoming Branch
   Resolution: Implemented

Thanks [~cshan] !

> Refactoring WebSiteProperties.java
> --
>
> Key: OFBIZ-12843
> URL: https://issues.apache.org/jira/browse/OFBIZ-12843
> Project: OFBiz
>  Issue Type: Improvement
>  Components: framework/webapp
>Reporter: Chenghu Shan
>Assignee: Michael Brohl
>Priority: Trivial
> Fix For: Upcoming Branch
>
>
> Class WebSiteProperties.java contains some duplicate code and should be 
> refactored.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (OFBIZ-12843) Refactoring WebSiteProperties.java

2024-05-23 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/OFBIZ-12843?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17849104#comment-17849104
 ] 

ASF subversion and git services commented on OFBIZ-12843:
-

Commit a1e700b7d2d801d0c74298ca22c28c2e373281d9 in ofbiz-framework's branch 
refs/heads/trunk from Cheng Hu Shan
[ https://gitbox.apache.org/repos/asf?p=ofbiz-framework.git;h=a1e700b7d2 ]

Improved: Refactoring WebSiteProperties.java (OFBIZ-12843)

> Refactoring WebSiteProperties.java
> --
>
> Key: OFBIZ-12843
> URL: https://issues.apache.org/jira/browse/OFBIZ-12843
> Project: OFBiz
>  Issue Type: Improvement
>  Components: framework/webapp
>Reporter: Chenghu Shan
>Assignee: Michael Brohl
>Priority: Trivial
>
> Class WebSiteProperties.java contains some duplicate code and should be 
> refactored.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Comment Edited] (ARTEMIS-4306) Add authn/z metrics

2024-05-23 Thread Mike Artz (Jira)


[ 
https://issues.apache.org/jira/browse/ARTEMIS-4306?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17849099#comment-17849099
 ] 

Mike Artz edited comment on ARTEMIS-4306 at 5/23/24 8:25 PM:
-

 

[~jbertram] - I made this [PR for Micrometer to allow prefixing in the 
CacheMeterBinder|https://github.com/micrometer-metrics/micrometer/pull/4048] 
so that we could, for example, add the *"artemis.authentication."* prefix, but 
the PR _kind of_ got stuck and was eventually closed. Then I had a kid and 
started a new job. 

 

Coming back to this now: _if we use the CacheMeterBinder,_ would it be ok to 
drop the *"artemis.authentication."* prefixes and instead add *authentication* 
and *authorization* as tags, i.e. {{Tag("cacheName", "authentication")}} and 
{{Tag("cacheName", "authorization")}}? This also seems closer to the standard 
convention for Micrometer users.
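
As a minimal sketch of that tag-based approach (illustrative only, not Artemis 
code; it assumes a plain Caffeine cache and whatever MeterRegistry is at hand):
{code:java}
import com.github.benmanes.caffeine.cache.Cache;
import com.github.benmanes.caffeine.cache.Caffeine;
import io.micrometer.core.instrument.MeterRegistry;
import io.micrometer.core.instrument.binder.cache.CaffeineCacheMetrics;
import io.micrometer.core.instrument.simple.SimpleMeterRegistry;

public class TagBasedCacheMetricsSketch {
   public static void main(String[] args) {
      MeterRegistry registry = new SimpleMeterRegistry();

      // recordStats() is required so CaffeineCacheMetrics can read
      // Caffeine's built-in hit/miss/eviction statistics.
      Cache<String, Boolean> authenticationCache = Caffeine.newBuilder()
            .maximumSize(1000)
            .recordStats()
            .build();

      // Publishes the generic cache.* meters; the cache name becomes a tag
      // on every meter instead of a name prefix.
      CaffeineCacheMetrics.monitor(registry, authenticationCache, "authentication");

      authenticationCache.put("user1", Boolean.TRUE);
      registry.getMeters().forEach(m -> System.out.println(m.getId()));
   }
}
{code}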

 

However, now that we are using a Caffeine cache, I see there are two concrete 
classes - 
[CaffeineCacheMetrics|https://github.com/micrometer-metrics/micrometer/blob/37883fa6fb4a6d3f83d01f6b53101cc9f52b3f78/micrometer-core/src/main/java/io/micrometer/core/instrument/binder/cache/CaffeineCacheMetrics.java]
 and 
[CaffeineStatsCounter|https://github.com/micrometer-metrics/micrometer/blob/37883fa6fb4a6d3f83d01f6b53101cc9f52b3f78/micrometer-core/src/main/java/io/micrometer/core/instrument/binder/cache/CaffeineStatsCounter.java].
 Both of these look ok at first glance, but I am stuck at this decision. 
*CaffeineStatsCounter* offers more detailed statistics than 
*CaffeineCacheMetrics* and it allows name prefixes. However, 
*CaffeineStatsCounter* needs the MeterRegistry to be available {_}at the time 
the cache is built{_}; there is no direct support for adding a 
*CaffeineStatsCounter* to a Caffeine cache after the cache is built. So if you 
use the *CaffeineStatsCounter*, you might have to do something like calling the 
*MetricsConfiguration* from the 
*[SecurityStoreImpl|https://github.com/apache/activemq-artemis/blob/main/artemis-server/src/main/java/org/apache/activemq/artemis/core/security/impl/SecurityStoreImpl.java#L105-L108]*:
{code:java}
MeterRegistry registry = metricsConfiguration.getPlugin().getRegistry();

authenticationCache = Caffeine.newBuilder()
   .maximumSize(authenticationCacheSize)
   .expireAfterWrite(invalidationInterval, TimeUnit.MILLISECONDS)
   .recordStats(() -> new CaffeineStatsCounter(registry, "authentication"))
   .build();{code}
And that doesn't make sense because *MetricsConfiguration* happens after the 
*SecurityStoreImpl*. So this doesn't seem like the best option.

*Other Options*
 * Use 
[CaffeineStatsCounter|https://github.com/micrometer-metrics/micrometer/blob/37883fa6fb4a6d3f83d01f6b53101cc9f52b3f78/micrometer-core/src/main/java/io/micrometer/core/instrument/binder/cache/CaffeineStatsCounter.java]
 but initialize the authn cache (and authz cache) from the MetricsManager 
class (couples the SecurityStoreImpl to the MetricsManager, but maybe the 2nd 
best option)
 * Use 
[CaffeineCacheMetrics|https://github.com/micrometer-metrics/micrometer/blob/37883fa6fb4a6d3f83d01f6b53101cc9f52b3f78/micrometer-core/src/main/java/io/micrometer/core/instrument/binder/cache/CaffeineCacheMetrics.java]
 (Seems like best option)
 * Create another concrete class (or extend 
[CaffeineCacheMetrics|https://github.com/micrometer-metrics/micrometer/blob/37883fa6fb4a6d3f83d01f6b53101cc9f52b3f78/micrometer-core/src/main/java/io/micrometer/core/instrument/binder/cache/CaffeineCacheMetrics.java])
 that does as much as 
[CaffeineStatsCounter|https://github.com/micrometer-metrics/micrometer/blob/37883fa6fb4a6d3f83d01f6b53101cc9f52b3f78/micrometer-core/src/main/java/io/micrometer/core/instrument/binder/cache/CaffeineStatsCounter.java]
 (Seems overkill)
 * Create some sort of decorator that would intercept the cache operations and 
delegate to an underlying cache (Seems overkill)

{code:java}
public class StatsCounterDecorator<K, V> implements Cache<K, V> {
   private final Cache<K, V> delegate;
   private final StatsCounter statsCounter;
   ...

   @Override
   public void put(K key, V value) {
      // record the write against statsCounter as appropriate, then delegate
      delegate.put(key, value);
   }

   public static void main(String[] args) {
      // get the already-initialized cache, e.g.
      // server.getSecurityStore().getAuthenticationMetrics()
      Cache<String, Boolean> originalCache = ...;

      StatsCounter statsCounter = new ConcurrentStatsCounter();
      Cache<String, Boolean> cacheWithStats =
            new StatsCounterDecorator<>(originalCache, statsCounter);
   }
}{code}
 


was (Author: JIRAUSER301522):
 

[~jbertram] - I made this [PR for micrometer to allow prefixing in the 
CacheMeterBinder|https://github.com/micrometer-metrics/micrometer/pull/4048],  
so that we could for example add the *"artemis.authentication."*
prefix but the PR _kind of_ got stuck, and the PR 

[jira] [Commented] (TIKA-4243) tika configuration overhaul

2024-05-23 Thread Tim Allison (Jira)


[ 
https://issues.apache.org/jira/browse/TIKA-4243?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17849103#comment-17849103
 ] 

Tim Allison commented on TIKA-4243:
---

Proposed basic roadmap:
 * Serialize ParseContext as is...
 * Allow for serialization of the current XConfigs, e.g. PDFParserConfig.
 * Add creation of parsers with, e.g., new PDFParser(ParseContext context).
 * Wire the config work into tika-server, tika-pipes and tika-app.
 * Merge tika-grpc-server with the new config options.

This would require serialization support for the classes that users want to be 
able to configure.

This would allow us to get rid of all of our custom serialization stuff for 
Tika 4.x.
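
As a purely illustrative sketch of the XML-to-POJO direction (PdfConfig and its 
fields are hypothetical stand-ins for a jsonschema2pojo-generated class, not 
actual Tika types):
{code:java}
import com.fasterxml.jackson.databind.json.JsonMapper;
import com.fasterxml.jackson.dataformat.xml.XmlMapper;

// Hypothetical stand-in for a jsonschema2pojo-generated config POJO.
public class PdfConfig {
   public boolean extractInlineImages;
   public int ocrDpi;

   public static void main(String[] args) throws Exception {
      String xml = "<pdfConfig>"
            + "<extractInlineImages>true</extractInlineImages>"
            + "<ocrDpi>300</ocrDpi>"
            + "</pdfConfig>";

      // Legacy XML config read into the typed POJO model...
      PdfConfig config = new XmlMapper().readValue(xml, PdfConfig.class);

      // ...and the same model written back out as JSON; YAML would work the
      // same way with jackson-dataformat-yaml's YAMLMapper.
      System.out.println(new JsonMapper().writeValueAsString(config));
   }
}
{code}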


> tika configuration overhaul
> ---
>
> Key: TIKA-4243
> URL: https://issues.apache.org/jira/browse/TIKA-4243
> Project: Tika
>  Issue Type: New Feature
>  Components: config
>Affects Versions: 3.0.0
>Reporter: Nicholas DiPiazza
>Priority: Major
>
> In 3.0.0, when dealing with Tika, it would greatly help to have a typed 
> configuration schema.
> In 3.x, can we remove the old way of doing configs and replace it with JSON 
> Schema?
> JSON Schema can be converted to POJOs using a Maven plugin: 
> [https://github.com/joelittlejohn/jsonschema2pojo]
> This automatically creates a Java POJO model we can use for the configs. 
> This can allow the legacy tika-config XML to be read and converted to the 
> new POJOs easily using an XML mapper, so that users don't have to use JSON 
> configurations yet if they do not want to.
> When complete, configurations can be set as XML, JSON or YAML:
> tika-config.xml
> tika-config.json
> tika-config.yaml
> Replace all instances of tika-config annotations that use the old syntax 
> with the POJO model deserialized from the XML/JSON/YAML.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

