Re: [DISCUSS] KIP-709: Extend OffsetFetch requests to accept multiple group ids

2021-07-02 Thread Sanjana Kaundinya
Hello Everyone,

I recently opened a PR for KIP-709 and it was pointed out that we still need to 
come to a consensus on the Admin APIs. Specifically, the concern was around the 
`ListConsumerGroupOffsetsOptions` class. Currently that class contains a 
List<TopicPartition> that acts as a filter for the specific topic partitions 
the client wants to fetch offsets for, for a specific group. Originally I had 
planned to extend this by adding a map of type Map<String, List<TopicPartition>>, 
so that when specifying topic partitions, the caller could specify them on a 
per-group basis via the `ListConsumerGroupOffsetsOptions` class. However, it was 
noted that this is not the typical way the “Options” classes are used for 
requests. Instead they are normally used for additional options on the request, 
and the data for the request is generally passed in as method arguments. Since 
we are taking the time to change this API, we might as well follow best 
practices and change how we use the `ListConsumerGroupOffsetsOptions` class. I 
propose we change the `listConsumerGroupOffsets` API as follows:

Earlier it was proposed that the following will be the method signatures we 
would add to Admin.java:

default ListConsumerGroupOffsetsResult listConsumerGroupOffsets(List<String> groupIds) {
    return listConsumerGroupOffsets(groupIds, new ListConsumerGroupOffsetsOptions(groupIds));
}

ListConsumerGroupOffsetsResult listConsumerGroupOffsets(List<String> groupIds,
    ListConsumerGroupOffsetsOptions options);

I propose we change the signatures to the following instead:

default ListConsumerGroupOffsetsResult listConsumerGroupOffsets(Map<String, List<TopicPartition>> groupToTopicPartitions) {
    return listConsumerGroupOffsets(groupToTopicPartitions, new ListConsumerGroupOffsetsOptions());
}

ListConsumerGroupOffsetsResult listConsumerGroupOffsets(Map<String, List<TopicPartition>> groupToTopicPartitions,
    ListConsumerGroupOffsetsOptions options);

This way we pass the data for the request in as method parameters, which frees 
up the ListConsumerGroupOffsetsOptions class to be used in the future for 
applying different options to the request. Eventually we will deprecate the 
single-group listConsumerGroupOffsets method, and with that the 
ListConsumerGroupOffsetsOptions class will also change, no longer storing the 
topic partitions we want to retrieve offsets for. In essence, as part of this 
change we will leave ListConsumerGroupOffsetsOptions unchanged, and eventually 
remove the List<TopicPartition> stored there when we remove the deprecated 
single-group listConsumerGroupOffsets method.
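
For illustration, here is a rough sketch of what a caller might write against 
the proposed signature (group names, topics, and the bootstrap address are made 
up, and the multi-group method itself is of course not in released clients yet):

import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Properties;
import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.ListConsumerGroupOffsetsResult;
import org.apache.kafka.common.TopicPartition;

public class MultiGroupOffsetFetchSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");

        try (Admin admin = Admin.create(props)) {
            // Per-group topic-partition filters, passed as request data rather
            // than through ListConsumerGroupOffsetsOptions.
            Map<String, List<TopicPartition>> groupToTopicPartitions = new HashMap<>();
            groupToTopicPartitions.put("group-a", List.of(new TopicPartition("orders", 0)));
            groupToTopicPartitions.put("group-b", List.of(new TopicPartition("payments", 0),
                                                          new TopicPartition("payments", 1)));

            // Proposed signature from this message (not yet part of Admin).
            ListConsumerGroupOffsetsResult result =
                admin.listConsumerGroupOffsets(groupToTopicPartitions);
            // How offsets are retrieved per group from the result is defined by the KIP.
        }
    }
}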

Appreciate any feedback/discussion on this - thank you!

Cheers,
Sanjana
On May 14, 2021, 4:07 PM -0700, Sanjana Kaundinya wrote:
> Hi Everyone,
> I’ve begun working on this KIP now and found that another class will be 
> needing public changes. I have updated the KIP to reflect this, so just 
> wanted to update the dev list as well. You can find the updated KIP here: 
> https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=173084258
> Thanks,
> Sanjana
> On Jan 27, 2021, 4:18 AM -0800, Thomas Scott wrote:
> > Hi Magnus,
> >
> > Thanks for the review, I've added // moved and an explanation as requested.
> >
> > Thanks
> >
> > Tom
> >
> >
> > On Wed, Jan 27, 2021 at 12:05 PM Magnus Edenhill  wrote:
> >
> > > Hey Thomas,
> > >
> > > I'm late to the game.
> > >
> > > It looks like the "top level" ErrorCode moved from the top-level to the
> > > Group array, which makes sense,
> > > but it would be good if it was marked as // MOVED in the KIP and also a
> > > note that top level errors that
> > > are unrelated to the group will be returned as per-group errors.
> > >
> > >
> > > Regards,
> > > Magnus
> > >
> > >
> > > Den tis 26 jan. 2021 kl 15:42 skrev Thomas Scott :
> > >
> > > > Thanks David I've updated it.
> > > >
> > > > On Tue, Jan 26, 2021 at 1:55 PM David Jacot  wrote:
> > > >
> > > > > Great. That answers my question!
> > > > >
> > > > > Thomas, I suggest adding a Related/Future Work section in the
> > > > > KIP to link KIP-699 more explicitly.
> > > > >
> > > > > Thanks,
> > > > > David
> > > > >
> > > > > On Tue, Jan 26, 2021 at 1:30 PM Thomas Scott  
> > > > > wrote:
> > > > >
> > > > > > Hi Mickael/David,
> > > > > >
> > > > > > I feel like the combination of these 2 KIPs gives the complete
> > > > > > solution but they can be implemented independently. I have added a
> > > > > > description and links to KIP-699 to KIP-709 to this effect.
> > > > > >
> > > > > > Thanks
> > > > > >
> > > > > > Tom
> > > > > >
> > > > > >
> > > > > > On Tue, Jan 26, 2021 at 11:44 AM Mickael Maison <
> > > > > mickael.mai...@gmail.com>
> > > > > > wrote:
> > > > > >
> > > > > > > Hi Thomas,
> > > > > > > Thanks, the KIP looks good.
> > > > > > >
> > > > > > > David,
> > > > > > > I started working on exactly that a few weeks ago:
> > > > > > >
> > > > > > > https://cwiki.apache.org/confluence/display/KAFKA/KIP-699%3A+FindCoordinators
> > > > > > > I hope 

[jira] [Created] (KAFKA-13030) FindCoordinators batching makes poll slow when requesting an older broker

2021-07-02 Thread Chia-Ping Tsai (Jira)
Chia-Ping Tsai created KAFKA-13030:
--

 Summary: FindCoordinators batching makes poll slow when requesting 
an older broker
 Key: KAFKA-13030
 URL: https://issues.apache.org/jira/browse/KAFKA-13030
 Project: Kafka
  Issue Type: Improvement
Reporter: Chia-Ping Tsai
Assignee: Chia-Ping Tsai


https://github.com/apache/kafka/blob/trunk/clients/src/main/java/org/apache/kafka/clients/consumer/internals/AbstractCoordinator.java#L888

{code}
if (e instanceof NoBatchedFindCoordinatorsException) {
    batchFindCoordinator = false;
    clearFindCoordinatorFuture();
    lookupCoordinator();
    return;
}
{code}


The current request future is NOT updated, so it can't be completed until it 
times out. This causes a slow poll the first time users poll data from an 
older broker.
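
For illustration only (this is not AbstractCoordinator's code, and the names 
below are made up), the same failure mode recreated with plain 
CompletableFuture: if the fallback path issues a new lookup without completing 
the future the caller already holds, the caller only wakes up when its own 
timeout fires:

{code}
import java.util.concurrent.CompletableFuture;

public class RetryChainingSketch {

    // Buggy shape: on fallback we start a new lookup but never complete the
    // future the caller is blocked on, so the caller waits for its timeout.
    static CompletableFuture<String> lookupBuggy() {
        CompletableFuture<String> callerFuture = new CompletableFuture<>();
        lookupUnbatched(); // result is dropped on the floor
        return callerFuture;
    }

    // Fixed shape: the retried lookup completes the caller's future as well.
    static CompletableFuture<String> lookupFixed() {
        CompletableFuture<String> callerFuture = new CompletableFuture<>();
        lookupUnbatched().whenComplete((coordinator, error) -> {
            if (error != null)
                callerFuture.completeExceptionally(error);
            else
                callerFuture.complete(coordinator);
        });
        return callerFuture;
    }

    // Stand-in for the unbatched FindCoordinator round trip.
    static CompletableFuture<String> lookupUnbatched() {
        return CompletableFuture.completedFuture("coordinator-host:9092");
    }
}
{code}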



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (KAFKA-13029) FindCoordinators batching can break consumers during rolling upgrade

2021-07-02 Thread Rajini Sivaram (Jira)
Rajini Sivaram created KAFKA-13029:
--

 Summary: FindCoordinators batching can break consumers during 
rolling upgrade
 Key: KAFKA-13029
 URL: https://issues.apache.org/jira/browse/KAFKA-13029
 Project: Kafka
  Issue Type: Bug
  Components: consumer
Reporter: Rajini Sivaram
Assignee: Rajini Sivaram
 Fix For: 3.0.0


The changes made under KIP-699 assume that it is always safe to use unbatched 
mode, since we move from batched to unbatched and cache that value forever in 
clients if a broker doesn't support batching. During a rolling upgrade, if a 
request is sent to an older broker, we move from batched to unbatched mode. 
The consumer (and, I think, the admin client as well) disables batching, and 
future requests to upgraded brokers will fail because we attempt to use 
unbatched requests with a newer version of the request.
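
For illustration only (not the actual client code; the field and method names 
here are made up): caching the fallback decision in a single flag for the whole 
cluster is what makes a mixed-version cluster problematic, whereas remembering 
the decision per broker would confine the downgrade to the old broker:

{code}
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class BatchingFallbackSketch {

    // Problematic shape: one flag for the whole cluster. A single response
    // from an old broker flips it and it is never revisited, even after
    // every broker has been upgraded.
    private volatile boolean batchFindCoordinator = true;

    // Hypothetical alternative: track support per broker id, so only
    // requests to the old broker are downgraded during the rolling upgrade.
    private final Map<Integer, Boolean> batchingSupportedByNode = new ConcurrentHashMap<>();

    void onUnsupportedBatching(int nodeId) {
        batchFindCoordinator = false;               // affects all future requests
        batchingSupportedByNode.put(nodeId, false); // affects only this broker
    }

    boolean useBatchedRequestGlobal() {
        return batchFindCoordinator;                // cluster-wide decision
    }

    boolean useBatchedRequestPerNode(int nodeId) {
        return batchingSupportedByNode.getOrDefault(nodeId, true); // per-broker decision
    }
}
{code}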



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (KAFKA-13028) AbstractConfig should allow config provider configuration to use variables referencing other config providers earlier in the list

2021-07-02 Thread Randall Hauch (Jira)
Randall Hauch created KAFKA-13028:
-

 Summary: AbstractConfig should allow config provider configuration 
to use variables referencing other config providers earlier in the list
 Key: KAFKA-13028
 URL: https://issues.apache.org/jira/browse/KAFKA-13028
 Project: Kafka
  Issue Type: Improvement
  Components: clients, KafkaConnect
Reporter: Randall Hauch


When AbstractConfig recognizes config provider properties, it instantiates all 
of the config providers first and then uses those config providers to resolve 
any variables in remaining configurations. This means that if you define two 
config providers with:

{code}
config.providers=providerA,providerB
...
{code}
then the configuration properties for the second provider (e.g., `providerB`) 
cannot use variables that reference the first provider (e.g., `providerA`). In 
other words, this is not possible:

{code}
config.providers=providerA,providerB
config.providers.providerA.class=FileConfigProvider
config.providers.providerB.class=ComplexConfigProvider
config.providers.providerB.param.client.key=${file:/usr/secrets:complex.client.key}
config.providers.providerB.param.client.secret=${file:/usr/secrets:complex.client.secret}
{code}

This should be possible if the config providers are instantiated and configured 
in the same order as they appear in the `config.providers` property. The 
benefit is that it allows another level of indirection, so that any secrets 
required by a config provider can be resolved using an earlier, simpler config 
provider.

For example, config providers are often defined in Connect worker 
configurations to resolve secrets within connector configurations, or to 
resolve secrets within the worker configuration itself (e.g., producer or 
consumer secrets). But it would be useful to also be able to resolve the 
secrets needed by one configuration provider using another configuration 
provider that is defined earlier in the list.
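
A minimal sketch of the proposed ordering (this is not AbstractConfig's current 
implementation, and the helper below is hypothetical): walk `config.providers` 
in declared order, resolve each provider's own parameters against the providers 
configured so far, then configure it and add it to the pool available to later 
providers:

{code}
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;
import org.apache.kafka.common.config.ConfigTransformer;
import org.apache.kafka.common.config.provider.ConfigProvider;

public class OrderedProviderConfiguration {

    static Map<String, ConfigProvider> configureInOrder(List<String> providerNames,
                                                        Map<String, ConfigProvider> uninitialized,
                                                        Map<String, Map<String, String>> rawParams) {
        Map<String, ConfigProvider> ready = new LinkedHashMap<>();
        for (String name : providerNames) {
            // Resolve ${provider:path:key} variables in this provider's params
            // using only the providers that appear earlier in the list.
            Map<String, String> params = rawParams.getOrDefault(name, Map.of());
            if (!ready.isEmpty()) {
                params = new ConfigTransformer(ready).transform(params).data();
            }
            ConfigProvider provider = uninitialized.get(name);
            provider.configure(params);
            ready.put(name, provider);
        }
        return ready;
    }
}
{code}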



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Re: [VOTE] KIP-690: Add additional configuration to control MirrorMaker 2 internal topics naming convention

2021-07-02 Thread Omnia Ibrahim
Hi All,

Just thought of bumping this voting thread again to see if we can form a
consensus around this.

Thanks

On Thu, Jun 24, 2021 at 5:55 PM Mickael Maison 
wrote:

> +1 (binding)
> Thanks for the KIP!
>
> On Tue, May 4, 2021 at 3:23 PM Igor Soarez 
> wrote:
> >
> > Another +1 here, also non-binding.
> >
> > Thank you Omnia!
> >
> > --
> > Igor
> >
> >
> > On Fri, Apr 30, 2021, at 3:15 PM, Ryanne Dolan wrote:
> > > +1 (non-binding), thanks!
> > >
> > > On Thu, Jan 21, 2021, 4:31 AM Omnia Ibrahim 
> wrote:
> > >
> > >> Hi
> > >> Can I get a vote on this, please?
> > >>
> > >> Best
> > >> Omnia
> > >>
> > >> On Tue, Dec 15, 2020 at 12:16 PM Omnia Ibrahim <
> o.g.h.ibra...@gmail.com>
> > >> wrote:
> > >>
> > >>> If anyone is interested in reading the discussions, you can find them here
> > >>> https://www.mail-archive.com/dev@kafka.apache.org/msg113373.html
> > >>>
> > >>> On Tue, Dec 8, 2020 at 4:01 PM Omnia Ibrahim <
> o.g.h.ibra...@gmail.com>
> > >>> wrote:
> > >>>
> >  Hi everyone,
> >  I’m proposing a new KIP for MirrorMaker 2 to add the ability to
> control
> >  internal topics naming convention. The proposal details are here
> > 
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-690%3A+Add+additional+configuration+to+control+MirrorMaker+2+internal+topics+naming+convention
> > 
> >  Please vote in this thread.
> >  Thanks
> >  Omnia
> > 
> > >>>
> > >
>


Re: [DISCUSS] KIP-690 Add additional configuration to control MirrorMaker 2 internal topics naming convention

2021-07-02 Thread Omnia Ibrahim
@Gwen, can you please have a second look at the proposal and my explanation
for why this approach is better for MM2?

On Thu, Jun 24, 2021 at 5:55 PM Mickael Maison 
wrote:

> Hi Omnia,
>
> I think the current proposal makes sense. Thanks for driving this feature.
>
> On Mon, Jun 21, 2021 at 11:51 PM Ryanne Dolan 
> wrote:
> >
> > Omnia, I agree with you that allowing users to specify the whole topic
> name via configuration is likely to create problems. MM2 must distinguish
> between internal topics from different clusters, and pushing that
> complexity into configuration sounds really complicated.
> >
> > I like the ReplicationPolicy approach.
> >
> > Ryanne
> >
> > On Mon, Jun 21, 2021, 2:04 PM Omnia Ibrahim 
> wrote:
> >>
> >> Any thoughts on this KIP?
> >>
> >>
> >> On Thu, Jun 17, 2021 at 1:38 PM Omnia Ibrahim 
> >> wrote:
> >>
> >> > Another reason why I think adding a configuration for each internal
> >> > topic is not a good solution is how MM2 names these topics at the
> >> > moment.
> >> > Right now MM2 sets the name of the offset-syncs topic to
> >> > mm2-offset-syncs.<target>.internal and the checkpoints topic to
> >> > <source>.checkpoints.internal, so the name has a pattern linking it
> >> > back to the herder of the source -> target mirror link. Having this in
> >> > configuration will lead to
> >> > 1. having a method that determines the final name of internal topics
> >> > for backward compatibility and making this method the default for the
> >> > configuration values. The challenge here is that we need to first load
> >> > the cluster aliases to calculate the default values for
> >> > offset-syncs.topic.name and checkpoints.topic.name.
> >> > 2. Consider use cases where MM2 is used to mirror between multiple
> >> > clusters, for example:
> >> > source1 -> target.enabled = true
> >> > source2 -> target.enabled = true
> >> > For this use-case the current behaviour will create the following
> >> > offset-syncs and checkpoints topics on each cluster:
> >> > source1 cluster
> >> > - mm2-offset-syncs.target.internal
> >> > source2 cluster
> >> > - mm2-offset-syncs.target.internal
> >> > target cluster
> >> > - source1.checkpoints.internal
> >> > - source2.checkpoints.internal
> >> > As MM2's design in the original KIP-382 splits internal topics based
> >> > on mirroring links, if we let MM2 users set the full name of these
> >> > topics in configuration, how will we detect that a user has a wrong
> >> > configuration where they used the same checkpoints topic name for both
> >> > source1 and source2? How will this work if both source1 and source2
> >> > clusters have consumer groups with the same ids, given that checkpoints
> >> > topic messages contain the consumer group id? Should we warn the MM2
> >> > user that this topic has already been used for another source cluster?
> >> > If not, how will the MM2 user notice that?
> >> >
> >> >
> >> > On Mon, Jun 14, 2021 at 5:54 PM Omnia Ibrahim <
> o.g.h.ibra...@gmail.com>
> >> > wrote:
> >> >
> >> >> Hi folks, let me try to clarify some of your concerns and questions.
> >> >>
> >> >> Mickael: Have you considered making names changeable via
> configurations?
> >> >>>
> >> >>
> >> >> Gwen: may be missing something, but we are looking at 3 new configs
> (one
> >> >>> for each topic). And this rejected alternative is basically
> identical to
> >> >>> what Connect already does (you can choose names for internal topics
> using
> >> >>> configs).
> >> >>>
> >> >>> These are valid points. The reasons why we should prefer an
> interface
> >> >> (the current proposal is using the ReplicationPolicy interface which
> >> >> already exists in MM2) instead are
> >> >>
> >> >> 1. the number of configurations that MM2 has. Right now MM2 has its own
> >> >> set of configuration in addition to configuration for the admin,
> >> >> consumer and producer clients and the Connect API. And these
> >> >> configurations in some use-cases could be different based on the herder.
> >> >>
> >> >> Consider a use case where MM2 is used to mirror between a set of
> >> >> clusters run by different teams that have different naming policies.
> >> >> So if we are using 3 configurations for internal topics for a use case
> >> >> like the one below, the configuration will look like this. If the
> >> >> number of policies grows, the amount of configuration can get unwieldy.
> >> >>
> >> >> clusters = newCenterCluster, teamACluster, teamBCluster, ...
> >> >>
> >> >> //newCenterCluster policy is .
> >> >> //teamACluster naming policy is _ when move to
> newCenterCluster it will be teamA._
> >> >> //teamBCluster naming policy is . when move to
> newCenterCluster it will be teamB._
> >> >>
> >> >> //The goal is to move all topics from team-specific cluster to one
> new cluster
> >> >> // where the org can unify resource management and naming conventions
> >> >>
> >> >> replication.policy.class=MyCustomReplicationPolicy
> >> >>
> >> >> teamACluster.heartbeat.topic=mm2_heartbeat_topic // created on
> source cluster
> >> >>
> 

Build failed in Jenkins: Kafka » Kafka Branch Builder » trunk #278

2021-07-02 Thread Apache Jenkins Server
See 


Changes:


--
[...truncated 598182 lines...]
[2021-07-02T11:06:49.663Z] 
[2021-07-02T11:06:49.663Z] ZooKeeperClientTest > testExistsExistingZNode() 
PASSED
[2021-07-02T11:06:49.663Z] 
[2021-07-02T11:06:49.663Z] ZooKeeperClientTest > 
testZooKeeperStateChangeRateMetrics() STARTED
[2021-07-02T11:06:49.663Z] 
[2021-07-02T11:06:49.663Z] ZooKeeperClientTest > 
testZooKeeperStateChangeRateMetrics() PASSED
[2021-07-02T11:06:49.663Z] 
[2021-07-02T11:06:49.663Z] ZooKeeperClientTest > 
testZNodeChangeHandlerForDeletion() STARTED
[2021-07-02T11:06:49.663Z] 
[2021-07-02T11:06:49.663Z] ZooKeeperClientTest > 
testZNodeChangeHandlerForDeletion() PASSED
[2021-07-02T11:06:49.663Z] 
[2021-07-02T11:06:49.663Z] ZooKeeperClientTest > testGetAclNonExistentZNode() 
STARTED
[2021-07-02T11:06:49.663Z] 
[2021-07-02T11:06:49.663Z] ZooKeeperClientTest > testGetAclNonExistentZNode() 
PASSED
[2021-07-02T11:06:49.663Z] 
[2021-07-02T11:06:49.663Z] ZooKeeperClientTest > 
testStateChangeHandlerForAuthFailure() STARTED
[2021-07-02T11:06:49.663Z] 
[2021-07-02T11:06:49.663Z] ZooKeeperClientTest > 
testStateChangeHandlerForAuthFailure() PASSED
[2021-07-02T11:06:49.663Z] 
[2021-07-02T11:06:49.663Z] DelegationTokenManagerTest > 
testPeriodicTokenExpiry() STARTED
[2021-07-02T11:06:50.710Z] 
[2021-07-02T11:06:50.710Z] DelegationTokenManagerTest > 
testPeriodicTokenExpiry() PASSED
[2021-07-02T11:06:50.710Z] 
[2021-07-02T11:06:50.710Z] DelegationTokenManagerTest > 
testTokenRequestsWithDelegationTokenDisabled() STARTED
[2021-07-02T11:06:50.710Z] 
[2021-07-02T11:06:50.710Z] DelegationTokenManagerTest > 
testTokenRequestsWithDelegationTokenDisabled() PASSED
[2021-07-02T11:06:50.710Z] 
[2021-07-02T11:06:50.710Z] DelegationTokenManagerTest > testDescribeToken() 
STARTED
[2021-07-02T11:06:50.710Z] 
[2021-07-02T11:06:50.710Z] DelegationTokenManagerTest > testDescribeToken() 
PASSED
[2021-07-02T11:06:50.710Z] 
[2021-07-02T11:06:50.710Z] DelegationTokenManagerTest > testCreateToken() 
STARTED
[2021-07-02T11:06:51.764Z] 
[2021-07-02T11:06:51.764Z] DelegationTokenManagerTest > testCreateToken() PASSED
[2021-07-02T11:06:51.764Z] 
[2021-07-02T11:06:51.764Z] DelegationTokenManagerTest > testExpireToken() 
STARTED
[2021-07-02T11:06:51.764Z] 
[2021-07-02T11:06:51.764Z] DelegationTokenManagerTest > testExpireToken() PASSED
[2021-07-02T11:06:51.764Z] 
[2021-07-02T11:06:51.764Z] DelegationTokenManagerTest > testRenewToken() STARTED
[2021-07-02T11:06:51.764Z] 
[2021-07-02T11:06:51.764Z] DelegationTokenManagerTest > testRenewToken() PASSED
[2021-07-02T11:06:51.764Z] 
[2021-07-02T11:06:51.764Z] DelegationTokenManagerTest > testRemoveTokenHmac() 
STARTED
[2021-07-02T11:06:51.764Z] 
[2021-07-02T11:06:51.764Z] DelegationTokenManagerTest > testRemoveTokenHmac() 
PASSED
[2021-07-02T11:06:51.764Z] 
[2021-07-02T11:06:51.764Z] AclAuthorizerWithZkSaslTest > 
testAclUpdateWithSessionExpiration() STARTED
[2021-07-02T11:06:52.446Z] 
[2021-07-02T11:06:52.446Z] ZooKeeperClientTest > testConnectionTimeout() PASSED
[2021-07-02T11:06:52.446Z] 
[2021-07-02T11:06:52.446Z] ZooKeeperClientTest > 
testBlockOnRequestCompletionFromStateChangeHandler() STARTED
[2021-07-02T11:06:52.957Z] 
[2021-07-02T11:06:52.957Z] AclAuthorizerWithZkSaslTest > 
testAclUpdateWithSessionExpiration() PASSED
[2021-07-02T11:06:52.957Z] 
[2021-07-02T11:06:52.957Z] AclAuthorizerWithZkSaslTest > 
testAclUpdateWithAuthFailure() STARTED
[2021-07-02T11:06:53.507Z] 
[2021-07-02T11:06:53.507Z] ZooKeeperClientTest > 
testBlockOnRequestCompletionFromStateChangeHandler() PASSED
[2021-07-02T11:06:53.507Z] 
[2021-07-02T11:06:53.507Z] ZooKeeperClientTest > 
testUnresolvableConnectString() STARTED
[2021-07-02T11:06:53.507Z] 
[2021-07-02T11:06:53.507Z] ZooKeeperClientTest > 
testUnresolvableConnectString() PASSED
[2021-07-02T11:06:53.507Z] 
[2021-07-02T11:06:53.507Z] ZooKeeperClientTest > 
testGetChildrenNonExistentZNode() STARTED
[2021-07-02T11:06:53.507Z] 
[2021-07-02T11:06:53.507Z] ZooKeeperClientTest > 
testGetChildrenNonExistentZNode() PASSED
[2021-07-02T11:06:53.507Z] 
[2021-07-02T11:06:53.507Z] ZooKeeperClientTest > testPipelinedGetData() STARTED
[2021-07-02T11:06:54.531Z] 
[2021-07-02T11:06:54.531Z] ZooKeeperClientTest > testPipelinedGetData() PASSED
[2021-07-02T11:06:54.531Z] 
[2021-07-02T11:06:54.531Z] ZooKeeperClientTest > 
testZNodeChildChangeHandlerForChildChange() STARTED
[2021-07-02T11:06:54.531Z] 
[2021-07-02T11:06:54.531Z] ZooKeeperClientTest > 
testZNodeChildChangeHandlerForChildChange() PASSED
[2021-07-02T11:06:54.531Z] 
[2021-07-02T11:06:54.531Z] ZooKeeperClientTest > 
testGetChildrenExistingZNodeWithChildren() STARTED
[2021-07-02T11:06:54.531Z] 
[2021-07-02T11:06:54.531Z] ZooKeeperClientTest > 
testGetChildrenExistingZNodeWithChildren() PASSED
[2021-07-02T11:06:54.531Z] 
[2021-07-02T11:06:54.531Z] ZooKeeperClientTest > testSetDataExistingZNode() 
STARTED
[2021-07-02T11:06:54.531Z] 

[jira] [Created] (KAFKA-13027) Support for Jakarta EE 9.x to allow applications to migrate

2021-07-02 Thread Frode Carlsen (Jira)
Frode Carlsen created KAFKA-13027:
-

 Summary: Support for Jakarta EE 9.x to allow applications to 
migrate
 Key: KAFKA-13027
 URL: https://issues.apache.org/jira/browse/KAFKA-13027
 Project: Kafka
  Issue Type: Improvement
Affects Versions: 2.8.0
Reporter: Frode Carlsen


Some of the Kafka libraries (such as connect-api) have direct dependencies on 
older Java EE 8 specifications (e.g. javax.ws.rs:javax.ws.rs-api:2.1.1).

This creates issues in environments upgrading to Jakarta 9.0 and beyond (9.1 
requires a minimum of Java 11), for example when upgrading web application 
servers, such as migrating to Jetty 11.

The main thing preventing backwards compatibility is that the package 
namespace has moved from "javax.*" to "jakarta.*", along with a few namespace 
changes in XML configuration files. (New specifications are published at 
[https://jakarta.ee/specifications/], along with references to official 
artifacts and compliant implementations.)

From KAFKA-12894 (KIP-705) it appears dropping support for Java 8 won't happen 
till Q4 2022, which makes it harder to migrate to Jakarta 9.1, but 9.0 is 
still Java 8 compatible.

Therefore, to allow projects that use Kafka client libraries to migrate prior 
to the full work being completed in a future Kafka version, would it be 
possible to generate Jakarta 9 compatible artifacts and dual-publish these for 
libraries that now depend on javax.ws.rs / javax.servlet and similar? This is 
done by a number of open source libraries as an alternative to having 
different release branches for the time being. Other than the namespace change 
in 9.0 and the minimum Java LTS version in 9.1, the APIs are fully compatible 
with Java EE 8.
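
For illustration, the application-side change is essentially just the package 
rename (hypothetical resource class; the JAX-RS API itself is unchanged):

{code}
// Before (Java EE 8): import javax.ws.rs.GET; import javax.ws.rs.Path;
//                     import javax.ws.rs.core.Response;
// After (Jakarta EE 9) the same types live under the jakarta.* namespace:
import jakarta.ws.rs.GET;
import jakarta.ws.rs.Path;
import jakarta.ws.rs.core.Response;

@Path("/status")
public class StatusResource {
    @GET
    public Response status() {
        return Response.ok("up").build();
    }
}
{code}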

As a suggestion, this is fairly easy to do automatically using the Eclipse 
Transformer ([https://github.com/eclipse/transformer/]) for the migration 
(most projects end up publishing the transformed artifacts with either a 
"-jakarta" suffix on the artifactId or a classifier).



--
This message was sent by Atlassian Jira
(v8.3.4#803005)