[jira] [Commented] (CAMEL-19894) camel-kafka: enabling "breakOnFirstError" causes to skip records on exception

2023-11-07 Thread Mike Barlotta (Jira)


[ 
https://issues.apache.org/jira/browse/CAMEL-19894?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17783836#comment-17783836
 ] 

Mike Barlotta commented on CAMEL-19894:
---

The PRs associated with CAMEL-20044 seem to fix this.

> camel-kafka: enabling "breakOnFirstError" causes to skip records on exception
> -
>
> Key: CAMEL-19894
> URL: https://issues.apache.org/jira/browse/CAMEL-19894
> Project: Camel
>  Issue Type: Bug
>  Components: camel-kafka
>Affects Versions: 3.21.0, 4.0.0
>Reporter: akrivda
>Priority: Minor
>  Labels: help-wanted
>
> {*}Reproducing{*}:
>  * Configure the Camel Kafka consumer with "breakOnFirstError" = "true"
>  * Set up a topic with exactly 2 partitions
>  * Produce a series of records to Kafka, writing to both partitions.
>  * Ensure the offset is committed (I've done that with manual commit; autocommit 
> *MAY* have a second bug also, check the description)
>  * Make a route to consume this topic. Ensure the first poll gets records 
> from both partitions. Ensure the second-to-consume partition has some more 
> records to fetch in the next poll.
>  * Trigger an error when processing exactly the first record of the 
> second-to-consume partition
> *Expected behavior:*
>  * Application should consume all records from the first partition, and none 
> from the second. 
> *Actual behavior:*
>  * The application consumes all records from the first partition, but some 
> records from the second partition are skipped (the number depends on how many 
> were consumed from the first partition in a single poll).  
>  
> This bug was introduced in https://issues.apache.org/jira/browse/CAMEL-18350, 
> which had fixed a major issue with breakOnFirstError, but had some edge cases.
> The root cause is that the lastResult variable is not cleared between polls 
> (or between iterations of the partition loop), so it can carry a stale value 
> from the previous iteration. It has no chance to be correctly initialized if 
> the exception happens on the first record of a partition. The forced sync 
> commit is then made to the right (new) partition but with an invalid, stale 
> ("random") offset.
> I've adjusted the test project for CAMEL-18350 (many thanks to 
> [~klease78]) to demonstrate the issue and published it to GitHub. Check the 
> failing test in the project: 
> [https://github.com/Krivda/camel-bug-reproduction]
> P.S. Also, there *might* be a second bug related to this issue which *may* 
> occur with enableAutoCommit=true: when the bug occurs, the physical commit 
> *might* not be made to already processed partitions, which may result in 
> double processing. But I haven't investigated this further. 
> P.P.S. Please note that the GitHub project contains a very detailed 
> description of the behavior, pointing to the specific failing lines of code, 
> which should be very helpful in the investigation.
>  
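For illustration, a minimal sketch of the consumer setup described above. The topic, broker, and group names and the route body are assumptions rather than the reporter's code, and it assumes the Camel 3.x Kafka component options and manual-commit API:

{code:java}
// Hypothetical reproduction sketch (assumed topic/broker/group names), not the
// reporter's code: a breakOnFirstError consumer with manual commits that fails
// on a chosen record, as in the steps above.
import org.apache.camel.builder.RouteBuilder;
import org.apache.camel.component.kafka.KafkaConstants;
import org.apache.camel.component.kafka.consumer.KafkaManualCommit;

public class BreakOnFirstErrorRoute extends RouteBuilder {
    @Override
    public void configure() {
        from("kafka:demo-topic?brokers=localhost:9092"
                + "&groupId=demo-group"
                + "&autoCommitEnable=false"
                + "&allowManualCommit=true"
                + "&breakOnFirstError=true")
            .process(exchange -> {
                String body = exchange.getIn().getBody(String.class);
                if (body != null && body.contains("poison")) {
                    // simulate the failure on the first record of the second partition
                    throw new IllegalStateException("simulated processing failure");
                }
            })
            // commit manually only after the record has been processed successfully
            .process(exchange -> exchange.getIn()
                    .getHeader(KafkaConstants.MANUAL_COMMIT, KafkaManualCommit.class)
                    .commit());
    }
}
{code}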



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] (CAMEL-20044) camel-kafka - On rejoining consumer group Camel can set offset incorrectly causing messages to be replayed

2023-11-07 Thread Mike Barlotta (Jira)


[ https://issues.apache.org/jira/browse/CAMEL-20044 ]


Mike Barlotta deleted comment on CAMEL-20044:
---

was (Author: g1antfan):
Any tips or tricks you can share?

I am using podman 4.6.2
I have Docker socket compatibility enabled

*podman machine info*
Host:
  Arch: arm64
  CurrentMachine: ""
  DefaultMachine: ""
  EventsDir: 
/var/folders/82/qc43__sx1qg1wqrtl2csq6r8gq/T/podman-run--1/podman
  MachineConfigDir: /Users/foo/.config/containers/podman/machine/qemu
  MachineImageDir: /Users/foo/.local/share/containers/podman/machine/qemu
  MachineState: ""
  NumberOfMachines: 1
  OS: darwin
  VMType: qemu
Version:
  APIVersion: 4.6.2
  Built: 1693234503
  BuiltTime: Mon Aug 28 10:55:03 2023
  GitCommit: 5db42e86862ef42c59304c38aa583732fd80f178
  GoVersion: go1.21.0
  Os: darwin
  OsArch: darwin/arm64
  Version: 4.6.2

*podman machine start*
Starting machine "podman-machine-default"
Waiting for VM ...
Mounting volume... /Users/foo:/Users/foo
API forwarding listening on: /var/run/docker.sock
Docker API clients default to this address. You do not need to set DOCKER_HOST.

Machine "podman-machine-default" started successfully

> camel-kafka - On rejoining consumer group Camel can set offset incorrectly 
> causing messages to be replayed
> --
>
> Key: CAMEL-20044
> URL: https://issues.apache.org/jira/browse/CAMEL-20044
> Project: Camel
>  Issue Type: Bug
>  Components: camel-kafka
>Affects Versions: 3.21.0
> Environment: * Rocky Linux 8.7
>  * Open JDK 11.0.8
>  * Camel 3.21.0
>  * Spring Boot 2.7.14
>  * Strimzi Kafka 0.28.0/3.0.0
>Reporter: Mike Barlotta
>Priority: Major
> Attachments: camel-kafka-offset.11-06-2023.log
>
>
> {*}Reproducing (intermittent){*}:
>  * Configure the Camel Kafka consumer with the following:
>  ** autoCommitEnable = false
>  ** allowManualCommit = true
>  ** autoOffsetReset = earliest
>  ** maxPollRecords = 1
>  ** breakOnFirstError = true
>  * Produce a series of records to Kafka, writing to both partitions.
>  * Throw an exception that is unhandled
>  * Commit the offset in the onException block
> *Expected behavior:*
>  * Application should consume the record 1 more time, then move on to the 
> next offset in the partition
> *Actual behavior:*
>  * The application will often work. Occasionally it will use the offset from 
> another partition and assign it to the partition where the record failed. This can 
> then result in the consumer replaying messages instead of moving forward.
> I put together a sample that can recreate the error. However, given its 
> intermittent nature it may not fail on each run. I have included the logs 
> from 3 different runs on my laptop from this test. Two of them show the error 
> occurring. One of them has a successful run. I have also provided more 
> details in the README. 
>  * [https://github.com/CodeSmell/CamelKafkaOffset]
> This seems related to other issues with how Camel processes the 
> _breakOnFirstError_ attribute. 
>  * CAMEL-14935
>  * CAMEL-17925
>  * CAMEL-18350
>  * CAMEL-19894
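For illustration, a rough sketch of the configuration and onException commit described in the reproduction steps above. The topic, broker, and route details are assumptions, not the code from the linked sample project, and it assumes the Camel 3.x manual-commit API:

{code:java}
// Hypothetical sketch of the reproduction setup above (assumed names), not the
// code from the linked sample project.
import org.apache.camel.builder.RouteBuilder;
import org.apache.camel.component.kafka.KafkaConstants;
import org.apache.camel.component.kafka.consumer.KafkaManualCommit;

public class ManualCommitOnErrorRoute extends RouteBuilder {
    @Override
    public void configure() {
        onException(IllegalStateException.class)
            // the exception itself stays unhandled, matching the steps above,
            // but the offset of the failed record is committed here
            .process(exchange -> {
                KafkaManualCommit manual = exchange.getIn()
                        .getHeader(KafkaConstants.MANUAL_COMMIT, KafkaManualCommit.class);
                if (manual != null) {
                    manual.commit();
                }
            });

        from("kafka:demo-topic?brokers=localhost:9092"
                + "&autoCommitEnable=false&allowManualCommit=true"
                + "&autoOffsetReset=earliest&maxPollRecords=1&breakOnFirstError=true")
            .process(exchange -> {
                // throw for a "poison" record to trigger the behaviour above
                if (exchange.getIn().getBody(String.class).contains("poison")) {
                    throw new IllegalStateException("simulated processing failure");
                }
            });
    }
}
{code}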



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Comment Edited] (CAMEL-20044) camel-kafka - On rejoining consumer group Camel can set offset incorrectly causing messages to be replayed

2023-11-07 Thread Mike Barlotta (Jira)


[ 
https://issues.apache.org/jira/browse/CAMEL-20044?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17783699#comment-17783699
 ] 

Mike Barlotta edited comment on CAMEL-20044 at 11/7/23 4:56 PM:


Any tips or tricks you can share?

I am using podman 4.6.2
I have Docker socket compatibility enabled

*podman machine info*
Host:
  Arch: arm64
  CurrentMachine: ""
  DefaultMachine: ""
  EventsDir: 
/var/folders/82/qc43__sx1qg1wqrtl2csq6r8gq/T/podman-run--1/podman
  MachineConfigDir: /Users/foo/.config/containers/podman/machine/qemu
  MachineImageDir: /Users/foo/.local/share/containers/podman/machine/qemu
  MachineState: ""
  NumberOfMachines: 1
  OS: darwin
  VMType: qemu
Version:
  APIVersion: 4.6.2
  Built: 1693234503
  BuiltTime: Mon Aug 28 10:55:03 2023
  GitCommit: 5db42e86862ef42c59304c38aa583732fd80f178
  GoVersion: go1.21.0
  Os: darwin
  OsArch: darwin/arm64
  Version: 4.6.2

*podman machine start*
Starting machine "podman-machine-default"
Waiting for VM ...
Mounting volume... /Users/foo:/Users/foo
API forwarding listening on: /var/run/docker.sock
Docker API clients default to this address. You do not need to set DOCKER_HOST.

Machine "podman-machine-default" started successfully


was (Author: g1antfan):
Any tips or tricks you can share?

I am using podman 4.6.2
I have Docker socket compatibility enabled

*podman machine info*
Host:
  Arch: arm64
  CurrentMachine: ""
  DefaultMachine: ""
  EventsDir: 
/var/folders/82/qc43__sx1qg1wqrtl2csq6r8gq/T/podman-run--1/podman
  MachineConfigDir: /Users/foo/.config/containers/podman/machine/qemu
  MachineImageDir: /Users/foo/.local/share/containers/podman/machine/qemu
  MachineState: ""
  NumberOfMachines: 1
  OS: darwin
  VMType: qemu
Version:
  APIVersion: 4.6.2
  Built: 1693234503
  BuiltTime: Mon Aug 28 10:55:03 2023
  GitCommit: 5db42e86862ef42c59304c38aa583732fd80f178
  GoVersion: go1.21.0
  Os: darwin
  OsArch: darwin/arm64
  Version: 4.6.2

*podman machine start*
Starting machine "podman-machine-default"
Waiting for VM ...
Mounting volume... /Users/foo:/Users/4741446
API forwarding listening on: /var/run/docker.sock
Docker API clients default to this address. You do not need to set DOCKER_HOST.

Machine "podman-machine-default" started successfully

> camel-kafka - On rejoining consumer group Camel can set offset incorrectly 
> causing messages to be replayed
> --
>
> Key: CAMEL-20044
> URL: https://issues.apache.org/jira/browse/CAMEL-20044
> Project: Camel
>  Issue Type: Bug
>  Components: camel-kafka
>Affects Versions: 3.21.0
> Environment: * Rocky Linux 8.7
>  * Open JDK 11.0.8
>  * Camel 3.21.0
>  * Spring Boot 2.7.14
>  * Strimzi Kafka 0.28.0/3.0.0
>Reporter: Mike Barlotta
>Priority: Major
> Attachments: camel-kafka-offset.11-06-2023.log
>
>
> {*}Reproducing (intermittent){*}:
>  * Configure the Camel Kafka consumer with the following:
>  ** autoCommitEnable = false
>  ** allowManualCommit = true
>  ** autoOffsetReset = earliest
>  ** maxPollRecords = 1
>  ** breakOnFirstError = true
>  * Produce a series of records to Kafka, writing to both partitions.
>  * Throw an exception that is unhandled
>  * Commit the offset in the onException block
> *Expected behavior:*
>  * Application should consume the record 1 more time, then move on to the 
> next offset in the partition
> *Actual behavior:*
>  * The application will often work. Occasionally it will use the offset from 
> another partition and assign it to the partition where the record failed. This can 
> then result in the consumer replaying messages instead of moving forward.
> I put together a sample that can recreate the error. However, given its 
> intermittent nature it may not fail on each run. I have included the logs 
> from 3 different runs on my laptop from this test. Two of them show the error 
> occurring. One of them has a successful run. I have also provided more 
> details in the README. 
>  * [https://github.com/CodeSmell/CamelKafkaOffset]
> This seems related to other issues with how Camel processes the 
> _breakOnFirstError_ attribute. 
>  * CAMEL-14935
>  * CAMEL-17925
>  * CAMEL-18350
>  * CAMEL-19894



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Comment Edited] (CAMEL-20044) camel-kafka - On rejoining consumer group Camel can set offset incorrectly causing messages to be replayed

2023-11-07 Thread Mike Barlotta (Jira)


[ 
https://issues.apache.org/jira/browse/CAMEL-20044?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17783699#comment-17783699
 ] 

Mike Barlotta edited comment on CAMEL-20044 at 11/7/23 4:56 PM:


Any tips or tricks you can share?

I am using podman 4.6.2
I have Docker socket compatibility enabled

*podman machine info*
Host:
  Arch: arm64
  CurrentMachine: ""
  DefaultMachine: ""
  EventsDir: 
/var/folders/82/qc43__sx1qg1wqrtl2csq6r8gq/T/podman-run--1/podman
  MachineConfigDir: /Users/foo/.config/containers/podman/machine/qemu
  MachineImageDir: /Users/foo/.local/share/containers/podman/machine/qemu
  MachineState: ""
  NumberOfMachines: 1
  OS: darwin
  VMType: qemu
Version:
  APIVersion: 4.6.2
  Built: 1693234503
  BuiltTime: Mon Aug 28 10:55:03 2023
  GitCommit: 5db42e86862ef42c59304c38aa583732fd80f178
  GoVersion: go1.21.0
  Os: darwin
  OsArch: darwin/arm64
  Version: 4.6.2

*podman machine start*
Starting machine "podman-machine-default"
Waiting for VM ...
Mounting volume... /Users/foo:/Users/4741446
API forwarding listening on: /var/run/docker.sock
Docker API clients default to this address. You do not need to set DOCKER_HOST.

Machine "podman-machine-default" started successfully


was (Author: g1antfan):
Any tips or tricks you can share?

I am using podman 4.6.2
I have Docker socket compatibility enabled

*podman machine info*
Host:
  Arch: arm64
  CurrentMachine: ""
  DefaultMachine: ""
  EventsDir: 
/var/folders/82/qc43__sx1qg1wqrtl2csq6r8gq/T/podman-run--1/podman
  MachineConfigDir: /Users/foo/.config/containers/podman/machine/qemu
  MachineImageDir: /Users/foo/.local/share/containers/podman/machine/qemu
  MachineState: ""
  NumberOfMachines: 1
  OS: darwin
  VMType: qemu
Version:
  APIVersion: 4.6.2
  Built: 1693234503
  BuiltTime: Mon Aug 28 10:55:03 2023
  GitCommit: 5db42e86862ef42c59304c38aa583732fd80f178
  GoVersion: go1.21.0
  Os: darwin
  OsArch: darwin/arm64
  Version: 4.6.2



*podman machine start*
Starting machine "podman-machine-default"
Waiting for VM ...
Mounting volume... /Users/4741446:/Users/4741446
API forwarding listening on: /var/run/docker.sock
Docker API clients default to this address. You do not need to set DOCKER_HOST.

Machine "podman-machine-default" started successfully

> camel-kafka - On rejoining consumer group Camel can set offset incorrectly 
> causing messages to be replayed
> --
>
> Key: CAMEL-20044
> URL: https://issues.apache.org/jira/browse/CAMEL-20044
> Project: Camel
>  Issue Type: Bug
>  Components: camel-kafka
>Affects Versions: 3.21.0
> Environment: * Rocky Linux 8.7
>  * Open JDK 11.0.8
>  * Camel 3.21.0
>  * Spring Boot 2.7.14
>  * Strimzi Kafka 0.28.0/3.0.0
>Reporter: Mike Barlotta
>Priority: Major
> Attachments: camel-kafka-offset.11-06-2023.log
>
>
> {*}Reproducing (intermittent){*}:
>  * Configure the Camel Kafka consumer with the following:
>  ** autoCommitEnable = false
>  ** allowManualCommit = true
>  ** autoOffsetReset = earliest
>  ** maxPollRecords = 1
>  ** breakOnFirstError = true
>  * Produce a series of records to Kafka, writing to both partitions.
>  * Throw an exception that is unhandled
>  * Commit the offset in the onException block
> *Expected behavior:*
>  * Application should consume the record 1 more time, then move on to the 
> next offset in the partition
> *Actual behavior:*
>  * The application will often work. Occasionally it will use the offset from 
> another partition and assign it to the partition where the record failed. This can 
> then result in the consumer replaying messages instead of moving forward.
> I put together a sample that can recreate the error. However, given its 
> intermittent nature it may not fail on each run. I have included the logs 
> from 3 different runs on my laptop from this test. Two of them show the error 
> occurring. One of them has a successful run. I have also provided more 
> details in the README. 
>  * [https://github.com/CodeSmell/CamelKafkaOffset]
> This seems related to other issues with how Camel processes the 
> _breakOnFirstError_ attribute. 
>  * CAMEL-14935
>  * CAMEL-17925
>  * CAMEL-18350
>  * CAMEL-19894



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Comment Edited] (CAMEL-20044) camel-kafka - On rejoining consumer group Camel can set offset incorrectly causing messages to be replayed

2023-11-07 Thread Mike Barlotta (Jira)


[ 
https://issues.apache.org/jira/browse/CAMEL-20044?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17783699#comment-17783699
 ] 

Mike Barlotta edited comment on CAMEL-20044 at 11/7/23 4:55 PM:


Any tips or tricks you can share?

I am using podman 4.6.2
I have Docker socket compatibility enabled

*podman machine info*
Host:
  Arch: arm64
  CurrentMachine: ""
  DefaultMachine: ""
  EventsDir: 
/var/folders/82/qc43__sx1qg1wqrtl2csq6r8gq/T/podman-run--1/podman
  MachineConfigDir: /Users/foo/.config/containers/podman/machine/qemu
  MachineImageDir: /Users/foo/.local/share/containers/podman/machine/qemu
  MachineState: ""
  NumberOfMachines: 1
  OS: darwin
  VMType: qemu
Version:
  APIVersion: 4.6.2
  Built: 1693234503
  BuiltTime: Mon Aug 28 10:55:03 2023
  GitCommit: 5db42e86862ef42c59304c38aa583732fd80f178
  GoVersion: go1.21.0
  Os: darwin
  OsArch: darwin/arm64
  Version: 4.6.2



*podman machine start*
Starting machine "podman-machine-default"
Waiting for VM ...
Mounting volume... /Users/4741446:/Users/4741446
API forwarding listening on: /var/run/docker.sock
Docker API clients default to this address. You do not need to set DOCKER_HOST.

Machine "podman-machine-default" started successfully


was (Author: g1antfan):
Any tips or tricks you can share?

I am using podman 4.6.2
I have Docker socket compatibility enabled

podman machine info
Host:
  Arch: arm64
  CurrentMachine: ""
  DefaultMachine: ""
  EventsDir: 
/var/folders/82/qc43__sx1qg1wqrtl2csq6r8gq/T/podman-run--1/podman
  MachineConfigDir: /Users/foo/.config/containers/podman/machine/qemu
  MachineImageDir: /Users/foo/.local/share/containers/podman/machine/qemu
  MachineState: ""
  NumberOfMachines: 1
  OS: darwin
  VMType: qemu
Version:
  APIVersion: 4.6.2
  Built: 1693234503
  BuiltTime: Mon Aug 28 10:55:03 2023
  GitCommit: 5db42e86862ef42c59304c38aa583732fd80f178
  GoVersion: go1.21.0
  Os: darwin
  OsArch: darwin/arm64
  Version: 4.6.2

> camel-kafka - On rejoining consumer group Camel can set offset incorrectly 
> causing messages to be replayed
> --
>
> Key: CAMEL-20044
> URL: https://issues.apache.org/jira/browse/CAMEL-20044
> Project: Camel
>  Issue Type: Bug
>  Components: camel-kafka
>Affects Versions: 3.21.0
> Environment: * Rocky Linux 8.7
>  * Open JDK 11.0.8
>  * Camel 3.21.0
>  * Spring Boot 2.7.14
>  * Strimzi Kafka 0.28.0/3.0.0
>Reporter: Mike Barlotta
>Priority: Major
> Attachments: camel-kafka-offset.11-06-2023.log
>
>
> {*}Reproducing (intermittent){*}:
>  * Configure the Camel Kafka consumer with the following:
>  ** autoCommitEnable = false
>  ** allowManualCommit = true
>  ** autoOffsetReset = earliest
>  ** maxPollRecords = 1
>  ** breakOnFirstError = true
>  * Produce a series of records to Kafka, writing to both partitions.
>  * Throw an exception that is unhandled
>  * Commit the offset in the onException block
> *Expected behavior:*
>  * Application should consume the record 1 more time, then move on to the 
> next offset in the partition
> *Actual behavior:*
>  * The application will often work. Occasionally it will use the offset from 
> another partition and assign it to the partition where the record failed. This can 
> then result in the consumer replaying messages instead of moving forward.
> I put together a sample that can recreate the error. However, given its 
> intermittent nature it may not fail on each run. I have included the logs 
> from 3 different runs on my laptop from this test. Two of them show the error 
> occurring. One of them has a successful run. I have also provided more 
> details in the README. 
>  * [https://github.com/CodeSmell/CamelKafkaOffset]
> This seems related to other issues with how Camel processes the 
> _breakOnFirstError_ attribute. 
>  * CAMEL-14935
>  * CAMEL-17925
>  * CAMEL-18350
>  * CAMEL-19894



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Comment Edited] (CAMEL-20044) camel-kafka - On rejoining consumer group Camel can set offset incorrectly causing messages to be replayed

2023-11-07 Thread Mike Barlotta (Jira)


[ 
https://issues.apache.org/jira/browse/CAMEL-20044?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17783699#comment-17783699
 ] 

Mike Barlotta edited comment on CAMEL-20044 at 11/7/23 4:53 PM:


Any tips or tricks you can share?

I am using podman 4.6.2
I have Docker socket compatibility enabled

podman machine info
Host:
  Arch: arm64
  CurrentMachine: ""
  DefaultMachine: ""
  EventsDir: 
/var/folders/82/qc43__sx1qg1wqrtl2csq6r8gq/T/podman-run--1/podman
  MachineConfigDir: /Users/foo/.config/containers/podman/machine/qemu
  MachineImageDir: /Users/foo/.local/share/containers/podman/machine/qemu
  MachineState: ""
  NumberOfMachines: 1
  OS: darwin
  VMType: qemu
Version:
  APIVersion: 4.6.2
  Built: 1693234503
  BuiltTime: Mon Aug 28 10:55:03 2023
  GitCommit: 5db42e86862ef42c59304c38aa583732fd80f178
  GoVersion: go1.21.0
  Os: darwin
  OsArch: darwin/arm64
  Version: 4.6.2


was (Author: g1antfan):
Any tips or tricks you can share?

I am using podman 4.6.2
I have Docker socket compatibility enabled

> camel-kafka - On rejoining consumer group Camel can set offset incorrectly 
> causing messages to be replayed
> --
>
> Key: CAMEL-20044
> URL: https://issues.apache.org/jira/browse/CAMEL-20044
> Project: Camel
>  Issue Type: Bug
>  Components: camel-kafka
>Affects Versions: 3.21.0
> Environment: * Rocky Linux 8.7
>  * Open JDK 11.0.8
>  * Camel 3.21.0
>  * Spring Boot 2.7.14
>  * Strimzi Kafka 0.28.0/3.0.0
>Reporter: Mike Barlotta
>Priority: Major
> Attachments: camel-kafka-offset.11-06-2023.log
>
>
> {*}Reproducing (intermittent){*}:
>  * Configure the Camel Kafka consumer with the following:
>  ** autoCommitEnable = false
>  ** allowManualCommit = true
>  ** autoOffsetReset = earliest
>  ** maxPollRecords = 1
>  ** breakOnFirstError = true
>  * Produce a series of records to Kafka, writing to both partitions.
>  * Throw an exception that is unhandled
>  * Commit the offset in the onException block
> *Expected behavior:*
>  * Application should consume the record 1 more time, then move on to the 
> next offset in the partition
> *Actual behavior:*
>  * The application will often work. Occasionally it will use the offset from 
> another partition and assign it to the partition where the record failed. This can 
> then result in the consumer replaying messages instead of moving forward.
> I put together a sample that can recreate the error. However, given its 
> intermittent nature it may not fail on each run. I have included the logs 
> from 3 different runs on my laptop from this test. Two of them show the error 
> occurring. One of them has a successful run. I have also provided more 
> details in the README. 
>  * [https://github.com/CodeSmell/CamelKafkaOffset]
> This seems related to other issues with how Camel processes the 
> _breakOnFirstError_ attribute. 
>  * CAMEL-14935
>  * CAMEL-17925
>  * CAMEL-18350
>  * CAMEL-19894



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Comment Edited] (CAMEL-20044) camel-kafka - On rejoining consumer group Camel can set offset incorrectly causing messages to be replayed

2023-11-07 Thread Mike Barlotta (Jira)


[ 
https://issues.apache.org/jira/browse/CAMEL-20044?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17783699#comment-17783699
 ] 

Mike Barlotta edited comment on CAMEL-20044 at 11/7/23 4:52 PM:


Any tips or tricks you can share?

I am using podman 4.6.2
I have Docker socket compatibility enabled


was (Author: g1antfan):
Any tips or tricks you can share?

> camel-kafka - On rejoining consumer group Camel can set offset incorrectly 
> causing messages to be replayed
> --
>
> Key: CAMEL-20044
> URL: https://issues.apache.org/jira/browse/CAMEL-20044
> Project: Camel
>  Issue Type: Bug
>  Components: camel-kafka
>Affects Versions: 3.21.0
> Environment: * Rocky Linux 8.7
>  * Open JDK 11.0.8
>  * Camel 3.21.0
>  * Spring Boot 2.7.14
>  * Strimzi Kafka 0.28.0/3.0.0
>Reporter: Mike Barlotta
>Priority: Major
> Attachments: camel-kafka-offset.11-06-2023.log
>
>
> {*}Reproducing (intermittent){*}:
>  * Configure the Camel Kafka consumer with the following:
>  ** autoCommitEnable = false
>  ** allowManualCommit = true
>  ** autoOffsetReset = earliest
>  ** maxPollRecords = 1
>  ** breakOnFirstError = true
>  * Produce a series of records to Kafka, writing to both partitions.
>  * Throw an exception that is unhandled
>  * Commit the offset in the onException block
> *Expected behavior:*
>  * Application should consume the record 1 more time, then move on to the 
> next offset in the partition
> *Actual behavior:*
>  * The application will often work. Occasionally it will use the offset from 
> another partition and assign it to the partition where the record failed. This can 
> then result in the consumer replaying messages instead of moving forward.
> I put together a sample that can recreate the error. However, given its 
> intermittent nature it may not fail on each run. I have included the logs 
> from 3 different runs on my laptop from this test. Two of them show the error 
> occurring. One of them has a successful run. I have also provided more 
> details in the README. 
>  * [https://github.com/CodeSmell/CamelKafkaOffset]
> This seems related to other issues with how Camel processes the 
> _breakOnFirstError_ attribute. 
>  * CAMEL-14935
>  * CAMEL-17925
>  * CAMEL-18350
>  * CAMEL-19894



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (CAMEL-20044) camel-kafka - On rejoining consumer group Camel can set offset incorrectly causing messages to be replayed

2023-11-07 Thread Mike Barlotta (Jira)


[ 
https://issues.apache.org/jira/browse/CAMEL-20044?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17783699#comment-17783699
 ] 

Mike Barlotta commented on CAMEL-20044:
---

Any tips or tricks you can share?

> camel-kafka - On rejoining consumer group Camel can set offset incorrectly 
> causing messages to be replayed
> --
>
> Key: CAMEL-20044
> URL: https://issues.apache.org/jira/browse/CAMEL-20044
> Project: Camel
>  Issue Type: Bug
>  Components: camel-kafka
>Affects Versions: 3.21.0
> Environment: * Rocky Linux 8.7
>  * Open JDK 11.0.8
>  * Camel 3.21.0
>  * Spring Boot 2.7.14
>  * Strimzi Kafka 0.28.0/3.0.0
>Reporter: Mike Barlotta
>Priority: Major
> Attachments: camel-kafka-offset.11-06-2023.log
>
>
> {*}Reproducing (intermittent){*}:
>  * Configure the Camel Kafka consumer with the following:
>  ** autoCommitEnable = false
>  ** allowManualCommit = true
>  ** autoOffsetReset = earliest
>  ** maxPollRecords = 1
>  ** breakOnFirstError = true
>  * Produce a series of records to Kafka, writing to both partitions.
>  * Throw an exception that is unhandled
>  * Commit the offset in the onException block
> *Expected behavior:*
>  * Application should consume the record 1 more time, then move on to the 
> next offset in the partition
> *Actual behavior:*
>  * The application will often work. Occasionally it will use the offset from 
> another partition and assign it to the partition where the record failed. This can 
> then result in the consumer replaying messages instead of moving forward.
> I put together a sample that can recreate the error. However, given its 
> intermittent nature it may not fail on each run. I have included the logs 
> from 3 different runs on my laptop from this test. Two of them show the error 
> occurring. One of them has a successful run. I have also provided more 
> details in the README. 
>  * [https://github.com/CodeSmell/CamelKafkaOffset]
> This seems related to other issues with how Camel processes the 
> _breakOnFirstError_ attribute. 
>  * CAMEL-14935
>  * CAMEL-17925
>  * CAMEL-18350
>  * CAMEL-19894



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (CAMEL-20044) camel-kafka - On rejoining consumer group Camel can set offset incorrectly causing messages to be replayed

2023-11-07 Thread Mike Barlotta (Jira)


[ 
https://issues.apache.org/jira/browse/CAMEL-20044?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17783671#comment-17783671
 ] 

Mike Barlotta commented on CAMEL-20044:
---

[~orpiske] 

QQ
I was trying to get an integration test up and running for the fix.
However, it seems that the ITs in Camel are using Testcontainers. 


https://java.testcontainers.org/error_missing_container_runtime_environment/


Unfortunately, my employer doesn't allow us to use Docker Desktop. 
We are using Podman. I've tried to get that working but keep getting "Container 
startup failed".

Any issue with using Spring's _EmbeddedKafka_?
That is what several of the sample apps have used to demonstrate these issues.
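For reference, a minimal sketch of the Spring _EmbeddedKafka_ approach mentioned above. The class, topic, and partition settings are assumptions, it presumes spring-kafka-test and a Spring Boot test configuration on the classpath, and it is only an illustration rather than the Camel project's integration-test setup:

{code:java}
// Hypothetical sketch of a test that uses Spring's embedded Kafka broker instead
// of Testcontainers (assumed names; requires the spring-kafka-test dependency
// and a Spring Boot application/test configuration on the classpath).
import org.junit.jupiter.api.Test;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.test.context.SpringBootTest;
import org.springframework.kafka.test.EmbeddedKafkaBroker;
import org.springframework.kafka.test.context.EmbeddedKafka;

@SpringBootTest
@EmbeddedKafka(partitions = 2, topics = "demo-topic")
class EmbeddedKafkaRouteTest {

    @Autowired
    private EmbeddedKafkaBroker broker;

    @Test
    void consumesFromEmbeddedBroker() {
        // broker.getBrokersAsString() gives the bootstrap servers that a Camel
        // endpoint can use, e.g. "kafka:demo-topic?brokers=" + broker.getBrokersAsString()
        String bootstrapServers = broker.getBrokersAsString();
        // ... start a CamelContext/route against this broker and assert on the
        // records/offsets seen by the route
    }
}
{code}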

> camel-kafka - On rejoining consumer group Camel can set offset incorrectly 
> causing messages to be replayed
> --
>
> Key: CAMEL-20044
> URL: https://issues.apache.org/jira/browse/CAMEL-20044
> Project: Camel
>  Issue Type: Bug
>  Components: camel-kafka
>Affects Versions: 3.21.0
> Environment: * Rocky Linux 8.7
>  * Open JDK 11.0.8
>  * Camel 3.21.0
>  * Spring Boot 2.7.14
>  * Strimzi Kafka 0.28.0/3.0.0
>Reporter: Mike Barlotta
>Priority: Major
> Attachments: camel-kafka-offset.11-06-2023.log
>
>
> {*}Reproducing (intermittent){*}:
>  * Configure the Camel Kafka consumer with the following:
>  ** autoCommitEnable = false
>  ** allowManualCommit = true
>  ** autoOffsetReset = earliest
>  ** maxPollRecords = 1
>  ** breakOnFirstError = true
>  * Produce a series of records to Kafka, writing to both partitions.
>  * Throw an exception that is unhandled
>  * Commit the offset in the onException block
> *Expected behavior:*
>  * Application should consume the record 1 more time, then move on to the 
> next offset in the partition
> *Actual behavior:*
>  * The application will often work. Occasionally it will use the offset from 
> another partition and assign it to the partition where the record failed. This can 
> then result in the consumer replaying messages instead of moving forward.
> I put together a sample that can recreate the error. However, given its 
> intermittent nature it may not fail on each run. I have included the logs 
> from 3 different runs on my laptop from this test. Two of them show the error 
> occurring. One of them has a successful run. I have also provided more 
> details in the README. 
>  * [https://github.com/CodeSmell/CamelKafkaOffset]
> This seems related to other issues with how Camel processes the 
> _breakOnFirstError_ attribute. 
>  * CAMEL-14935
>  * CAMEL-17925
>  * CAMEL-18350
>  * CAMEL-19894



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Comment Edited] (CAMEL-20089) camel-kafka: make breakOnFirstError more flexible

2023-11-07 Thread Mike Barlotta (Jira)


[ 
https://issues.apache.org/jira/browse/CAMEL-20089?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17783629#comment-17783629
 ] 

Mike Barlotta edited comment on CAMEL-20089 at 11/7/23 1:43 PM:


I do think it is important for Camel to have a flag that allows a route to stop 
moving forward through the messages in a partition when an error occurs (or to 
keep going). This flag does that. What is interesting is that it tries to do 
more than that. 

One observation regarding _breakOnFirstError_ is that it does attempt to retry 
the message automatically (at least once). In the current implementation there 
doesn't seem to be a way to override that behavior and it seems to have created 
several issues.

I propose that if _breakOnFirstError && allowManualCommit_, Camel would 
just unsubscribe and resubscribe. This would delegate how to handle the error 
(retry or no retry) to the Camel route and to how the implementation commits or 
doesn't commit the offset. 

Perhaps a 2nd flag ({_}breakOnFirstErrorWithRetry{_}) could be added if that 
isn't the way to go. Then the _KafkaRecordProcessor_ could do a check like 
_breakOnFirstError && breakOnFirstErrorWithRetry_ before forcing the commit 
that causes the retry.
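Roughly, the proposed check might look like the sketch below. Everything other than the _breakOnFirstError_ and _allowManualCommit_ flags is a stand-in for illustration; this is not the actual _KafkaRecordProcessor_ code:

{code:java}
// Hypothetical sketch of the proposal above, not the real KafkaRecordProcessor.
// The consumer, topics, and commit call are stand-ins for illustration only.
import java.util.Collection;
import org.apache.kafka.clients.consumer.Consumer;

class BreakOnFirstErrorSketch {

    void onProcessingError(Consumer<?, ?> consumer, Collection<String> topics,
                           boolean breakOnFirstError, boolean allowManualCommit) {
        if (breakOnFirstError && allowManualCommit) {
            // proposed: just re-subscribe and let the route decide, via its own
            // (manual) commit, whether the failed record is retried or skipped
            consumer.unsubscribe();
            consumer.subscribe(topics);
        } else if (breakOnFirstError) {
            // current behaviour, roughly: force a commit so the failed record is
            // polled and retried automatically on the next poll loop
            // forceSyncCommit(...);  // stand-in for the existing commit path
        }
    }
}
{code}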


was (Author: g1antfan):
I do think it is important for Camel to have a flag that allows a route to stop 
moving forward through the messages in a partition when an error occurs (or to 
keep going). This flag does that. What is interesting is that it tries to do 
more than that. 

One observation regarding _breakOnFirstError_ is that it does attempt to retry 
the message automatically (at least once). In the current implementation there 
doesn't seem to be a way to override that behavior and it seems to have created 
several issues.

I propose that if _breakOnFirstError && allowManualCommit_, Camel would 
just unsubscribe and resubscribe. This would delegate how to handle the error 
(retry or no retry) to the Camel route and to how the implementation commits or 
doesn't commit the offset. 

> camel-kafka: make breakOnFirstError more flexible
> -
>
> Key: CAMEL-20089
> URL: https://issues.apache.org/jira/browse/CAMEL-20089
> Project: Camel
>  Issue Type: Improvement
>  Components: camel-kafka
>Reporter: Otavio Rodolfo Piske
>Assignee: Otavio Rodolfo Piske
>Priority: Major
> Fix For: 4.x
>
>
> We have a very high incidence of problems in the camel-kafka component that 
> are related to the breakOnFirstError flag.
> Looking at the tickets related to this issue it seems to me that different 
> uses have different expectations about how the component should behave in 
> terms of polling, rolling back, and/or future processing. 
> In short: this flag is leading to a lot of confusion and we should 
> investigate how we can flexibilize the behavior of the Kafka component under 
> those circumstances and let the users choose more freely the behavior that is 
> suitable to their needs. 



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Comment Edited] (CAMEL-20044) camel-kafka - On rejoining consumer group Camel can set offset incorrectly causing messages to be replayed

2023-11-07 Thread Mike Barlotta (Jira)


[ 
https://issues.apache.org/jira/browse/CAMEL-20044?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17783622#comment-17783622
 ] 

Mike Barlotta edited comment on CAMEL-20044 at 11/7/23 1:45 PM:


[~orpiske] 

Would you like me to create a PR with the fix?
The current PR is just the logging and the fix is commented out for now.


It seems this helps w/ 2 existing issues and at our company there is some 
concern about moving some applications forward w/ this issue.

It will take me some time to get adequate tests in place.
I had been doing local builds and running my mini app as well as the app from 
the other issue

I am happy to take a look at some of the other ways _breakOnFirstError_ could 
work as well


was (Author: g1antfan):
[~orpiske] 

Would you like me to create a PR with the fix?
It seems this helps w/ 2 existing issues and at our company there is some 
concern about moving some applications forward w/ this issue.

It will take me some time to get adequate tests in place.
I had been doing local builds and running my mini app as well as the app from 
the other issue

I am happy to take a look at some of the other ways _breakOnFirstError_ could 
work as well

> camel-kafka - On rejoining consumer group Camel can set offset incorrectly 
> causing messages to be replayed
> --
>
> Key: CAMEL-20044
> URL: https://issues.apache.org/jira/browse/CAMEL-20044
> Project: Camel
>  Issue Type: Bug
>  Components: camel-kafka
>Affects Versions: 3.21.0
> Environment: * Rocky Linux 8.7
>  * Open JDK 11.0.8
>  * Camel 3.21.0
>  * Spring Boot 2.7.14
>  * Strimzi Kafka 0.28.0/3.0.0
>Reporter: Mike Barlotta
>Priority: Major
> Attachments: camel-kafka-offset.11-06-2023.log
>
>
> {*}Reproducing (intermittent){*}:
>  * Configure the Camel Kafka consumer with the following:
>  ** autoCommitEnable = false
>  ** allowManualCommit = true
>  ** autoOffsetReset = earliest
>  ** maxPollRecords = 1
>  ** breakOnFirstError = true
>  * Produce a series of records to Kafka, writing to both partitions.
>  * Throw an exception that is unhandled
>  * Commit the offset in the onException block
> *Expected behavior:*
>  * Application should consume the record 1 more time, then move on to the 
> next offset in the partition
> *Actual behavior:*
>  * The application will often work. Occasionally it will use the offset from 
> another partition and assign it to the partition where the record failed. This can 
> then result in the consumer replaying messages instead of moving forward.
> I put together a sample that can recreate the error. However, given its 
> intermittent nature it may not fail on each run. I have included the logs 
> from 3 different runs on my laptop from this test. Two of them show the error 
> occurring. One of them has a successful run. I have also provided more 
> details in the README. 
>  * [https://github.com/CodeSmell/CamelKafkaOffset]
> This seems related to other issues with how Camel processes the 
> _breakOnFirstError_ attribute. 
>  * CAMEL-14935
>  * CAMEL-17925
>  * CAMEL-18350
>  * CAMEL-19894



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (CAMEL-20089) camel-kafka: make breakOnFirstError more flexible

2023-11-07 Thread Mike Barlotta (Jira)


[ 
https://issues.apache.org/jira/browse/CAMEL-20089?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17783629#comment-17783629
 ] 

Mike Barlotta commented on CAMEL-20089:
---

I do think it is important for Camel to have a flag that allows a route to stop 
moving forward through the messages in a partition when an error occurs (or to 
keep going). This flag does that. What is interesting is that it tries to do 
more than that. 

One observation regarding _breakOnFirstError_ is that it does attempt to retry 
the message automatically (at least once). In the current implementation there 
doesn't seem to be a way to override that behavior and it seems to have created 
several issues.

I propose that if _breakOnFirstError && allowManualCommit_, Camel would 
just unsubscribe and resubscribe. This would delegate how to handle the error 
(retry or no retry) to the Camel route and to how the implementation commits or 
doesn't commit the offset. 

> camel-kafka: make breakOnFirstError more flexible
> -
>
> Key: CAMEL-20089
> URL: https://issues.apache.org/jira/browse/CAMEL-20089
> Project: Camel
>  Issue Type: Improvement
>  Components: camel-kafka
>Reporter: Otavio Rodolfo Piske
>Assignee: Otavio Rodolfo Piske
>Priority: Major
> Fix For: 4.x
>
>
> We have a very high incidence of problems in the camel-kafka component that 
> are related to the breakOnFirstError flag.
> Looking at the tickets related to this issue it seems to me that different 
> uses have different expectations about how the component should behave in 
> terms of polling, rolling back, and/or future processing. 
> In short: this flag is leading to a lot of confusion and we should 
> investigate how we can flexibilize the behavior of the Kafka component under 
> those circumstances and let the users choose more freely the behavior that is 
> suitable to their needs. 



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Comment Edited] (CAMEL-20044) camel-kafka - On rejoining consumer group Camel can set offset incorrectly causing messages to be replayed

2023-11-07 Thread Mike Barlotta (Jira)


[ 
https://issues.apache.org/jira/browse/CAMEL-20044?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17783622#comment-17783622
 ] 

Mike Barlotta edited comment on CAMEL-20044 at 11/7/23 1:14 PM:


[~orpiske] 

Would you like me to create a PR with the fix?
It seems this helps w/ 2 existing issues and at our company there is some 
concern about moving some applications forward w/ this issue.


It will take me some time to get adequate tests in place.
I had been doing local builds and running my mini app as well as the app from 
the other issue


was (Author: g1antfan):
[~orpiske] 

Would you like me to create a PR with the fix?
It will take me some time to get adequate tests in place.
I had been doing local builds and running my mini app as well as the app from 
the other issue

> camel-kafka - On rejoining consumer group Camel can set offset incorrectly 
> causing messages to be replayed
> --
>
> Key: CAMEL-20044
> URL: https://issues.apache.org/jira/browse/CAMEL-20044
> Project: Camel
>  Issue Type: Bug
>  Components: camel-kafka
>Affects Versions: 3.21.0
> Environment: * Rocky Linux 8.7
>  * Open JDK 11.0.8
>  * Camel 3.21.0
>  * Spring Boot 2.7.14
>  * Strimzi Kafka 0.28.0/3.0.0
>Reporter: Mike Barlotta
>Priority: Major
> Attachments: camel-kafka-offset.11-06-2023.log
>
>
> {*}Reproducing (intermittent){*}:
>  * Configure the Camel Kafka consumer with the following:
>  ** autoCommitEnable = false
>  ** allowManualCommit = true
>  ** autoOffsetReset = earliest
>  ** maxPollRecords = 1
>  ** breakOnFirstError = true
>  * Produce a series of records to Kafka, writing to both partitions.
>  * Throw an exception that is unhandled
>  * Commit the offset in the onException block
> *Expected behavior:*
>  * Application should consume the record 1 more time, then move on to the 
> next offset in the partition
> *Actual behavior:*
>  * The application will often work. Occasionally it will use the offset from 
> another partition and assign it to the partition where the record failed. This can 
> then result in the consumer replaying messages instead of moving forward.
> I put together a sample that can recreate the error. However, given its 
> intermittent nature it may not fail on each run. I have included the logs 
> from 3 different runs on my laptop from this test. Two of them show the error 
> occurring. One of them has a successful run. I have also provided more 
> details in the README. 
>  * [https://github.com/CodeSmell/CamelKafkaOffset]
> This seems related to other issues with how Camel processes the 
> _breakOnFirstError_ attribute. 
>  * CAMEL-14935
>  * CAMEL-17925
>  * CAMEL-18350
>  * CAMEL-19894



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Comment Edited] (CAMEL-20044) camel-kafka - On rejoining consumer group Camel can set offset incorrectly causing messages to be replayed

2023-11-07 Thread Mike Barlotta (Jira)


[ 
https://issues.apache.org/jira/browse/CAMEL-20044?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17783622#comment-17783622
 ] 

Mike Barlotta edited comment on CAMEL-20044 at 11/7/23 1:15 PM:


[~orpiske] 

Would you like me to create a PR with the fix?
It seems this helps w/ 2 existing issues and at our company there is some 
concern about moving some applications forward w/ this issue.

It will take me some time to get adequate tests in place.
I had been doing local builds and running my mini app as well as the app from 
the other issue

I am happy to take a look at some of the other ways _breakOnFirstError_ could 
work as well


was (Author: g1antfan):
[~orpiske] 

Would you like me to create a PR with the fix?
It seems this helps w/ 2 existing issues and at our company there is some 
concern about moving some applications forward w/ this issue.


It will take me some time to get adequate tests in place.
I had been doing local builds and running my mini app as well as the app from 
the other issue

> camel-kafka - On rejoining consumer group Camel can set offset incorrectly 
> causing messages to be replayed
> --
>
> Key: CAMEL-20044
> URL: https://issues.apache.org/jira/browse/CAMEL-20044
> Project: Camel
>  Issue Type: Bug
>  Components: camel-kafka
>Affects Versions: 3.21.0
> Environment: * Rocky Linux 8.7
>  * Open JDK 11.0.8
>  * Camel 3.21.0
>  * Spring Boot 2.7.14
>  * Strimzi Kafka 0.28.0/3.0.0
>Reporter: Mike Barlotta
>Priority: Major
> Attachments: camel-kafka-offset.11-06-2023.log
>
>
> {*}Reproducing (intermittent){*}:
>  * Configure the Camel Kafka consumer with the following:
>  ** autoCommitEnable = false
>  ** allowManualCommit = true
>  ** autoOffsetReset = earliest
>  ** maxPollRecords = 1
>  ** breakOnFirstError = true
>  * Produce a series of records to Kafka, writing to both partitions.
>  * Throw an exception that is unhandled
>  * Commit the offset in the onException block
> *Expected behavior:*
>  * Application should consume the record 1 more time, then move on to the 
> next offset in the partition
> *Actual behavior:*
>  * The application will often work. Occasionally it will use the offset from 
> another partition and assign it to the partition where the record failed. This can 
> then result in the consumer replaying messages instead of moving forward.
> I put together a sample that can recreate the error. However, given its 
> intermittent nature it may not fail on each run. I have included the logs 
> from 3 different runs on my laptop from this test. Two of them show the error 
> occurring. One of them has a successful run. I have also provided more 
> details in the README. 
>  * [https://github.com/CodeSmell/CamelKafkaOffset]
> This seems related to other issues with how Camel processes the 
> _breakOnFirstError_ attribute. 
>  * CAMEL-14935
>  * CAMEL-17925
>  * CAMEL-18350
>  * CAMEL-19894



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (CAMEL-20044) camel-kafka - On rejoining consumer group Camel can set offset incorrectly causing messages to be replayed

2023-11-07 Thread Mike Barlotta (Jira)


[ 
https://issues.apache.org/jira/browse/CAMEL-20044?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17783622#comment-17783622
 ] 

Mike Barlotta commented on CAMEL-20044:
---

[~orpiske] 

Would you like me to create a PR with the fix?
It will take me some time to get adequate tests in place.
I had been doing local builds and running my mini app as well as the app from 
the other issue

> camel-kafka - On rejoining consumer group Camel can set offset incorrectly 
> causing messages to be replayed
> --
>
> Key: CAMEL-20044
> URL: https://issues.apache.org/jira/browse/CAMEL-20044
> Project: Camel
>  Issue Type: Bug
>  Components: camel-kafka
>Affects Versions: 3.21.0
> Environment: * Rocky Linux 8.7
>  * Open JDK 11.0.8
>  * Camel 3.21.0
>  * Spring Boot 2.7.14
>  * Strimzi Kafka 0.28.0/3.0.0
>Reporter: Mike Barlotta
>Priority: Major
> Attachments: camel-kafka-offset.11-06-2023.log
>
>
> {*}Reproducing (intermittent){*}:
>  * Configure the Camel Kafka consumer with the following:
>  ** autoCommitEnable = false
>  ** allowManualCommit = true
>  ** autoOffsetReset = earliest
>  ** maxPollRecords = 1
>  ** breakOnFirstError = true
>  * Produce a series of records to Kafka, writing to both partitions.
>  * Throw an exception that is unhandled
>  * Commit the offset in the onException block
> *Expected behavior:*
>  * Application should consume the record 1 more time, then move on to the 
> next offset in the partition
> *Actual behavior:*
>  * The application will often work. Occasionally it will use the offset from 
> another partition and assign it to the partition where the record failed. This can 
> then result in the consumer replaying messages instead of moving forward.
> I put together a sample that can recreate the error. However, given its 
> intermittent nature it may not fail on each run. I have included the logs 
> from 3 different runs on my laptop from this test. Two of them show the error 
> occurring. One of them has a successful run. I have also provided more 
> details in the README. 
>  * [https://github.com/CodeSmell/CamelKafkaOffset]
> This seems related to other issues with how Camel processes the 
> _breakOnFirstError_ attribute. 
>  * CAMEL-14935
>  * CAMEL-17925
>  * CAMEL-18350
>  * CAMEL-19894



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Comment Edited] (CAMEL-20044) camel-kafka - On rejoining consumer group Camel can set offset incorrectly causing messages to be replayed

2023-11-06 Thread Mike Barlotta (Jira)


[ 
https://issues.apache.org/jira/browse/CAMEL-20044?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17783355#comment-17783355
 ] 

Mike Barlotta edited comment on CAMEL-20044 at 11/6/23 9:23 PM:


In the _KafkaRecordProcessor_ it seems that using _record.offset_ for the 
forceCommit is avoiding the problem in this issue (line #158 in PR)
 * note: can still result in the _lastResult_ being wrong (as described above) 
but now we are not using it to force a commit
 ** when _lastResult_ is wrong then the message with the error is tried more 
than 1x

If we go with this fix, we may want to evaluate whether we need to pass along 
_lastResult_
 * that was added as part of fix in CAMEL-18350

 

_UPDATE: I ran the sample provided with CAMEL-19894_
 * _ran 2x without the fix and test failed_
 * _ran 3x with this fix and passed each time_
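As a small illustration of the idea above (committing from the failed record's own partition and offset rather than from _lastResult_), a sketch using plain Kafka consumer calls; this is not the actual change in the PR:

{code:java}
// Hypothetical illustration of committing based on the failed record itself
// (plain Kafka consumer API), not the actual code in the PR.
import java.util.Collections;
import org.apache.kafka.clients.consumer.Consumer;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.OffsetAndMetadata;
import org.apache.kafka.common.TopicPartition;

class CommitFromFailedRecordSketch {

    void forceCommitOnError(Consumer<?, ?> consumer, ConsumerRecord<?, ?> failed) {
        TopicPartition tp = new TopicPartition(failed.topic(), failed.partition());
        // Committing the failed record's own offset means the next poll starts at
        // that record again (so it is retried at least once), and no offset from
        // another partition can leak into the commit.
        consumer.commitSync(
                Collections.singletonMap(tp, new OffsetAndMetadata(failed.offset())));
    }
}
{code}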


was (Author: g1antfan):
In the _KafkaRecordProcessor_ it seems that using _record.offset_ for the 
forceCommit is avoiding the problem in this issue (line #158 in PR)
 * note: can still result in the _lastResult_ being wrong (as described above) 
but now we are not using it to force a commit
 ** when _lastResult_ is wrong then the message with the error is tried more 
than 1x

If we go with this fix, we may want to evaluate whether we need to pass along 
_lastResult_

 

_UPDATE: I ran the sample provided with CAMEL-19894_
 * _ran 2x without the fix and test failed_
 * _ran 3x with this fix and passed each time_

> camel-kafka - On rejoining consumer group Camel can set offset incorrectly 
> causing messages to be replayed
> --
>
> Key: CAMEL-20044
> URL: https://issues.apache.org/jira/browse/CAMEL-20044
> Project: Camel
>  Issue Type: Bug
>  Components: camel-kafka
>Affects Versions: 3.21.0
> Environment: * Rocky Linux 8.7
>  * Open JDK 11.0.8
>  * Camel 3.21.0
>  * Spring Boot 2.7.14
>  * Strimzi Kafka 0.28.0/3.0.0
>Reporter: Mike Barlotta
>Priority: Major
> Attachments: camel-kafka-offset.11-06-2023.log
>
>
> {*}Reproducing (intermittent){*}:
>  * Configure the Camel Kafka consumer with the following:
>  ** autoCommitEnable = false
>  ** allowManualCommit = true
>  ** autoOffsetReset = earliest
>  ** maxPollRecords = 1
>  ** breakOnFirstError = true
>  * Produce a series of records to Kafka, writing to both partitions.
>  * Throw an exception that is unhandled
>  * Commit the offset in the onException block
> *Expected behavior:*
>  * Application should consume the record 1 more time, then move on to the 
> next offset in the partition
> *Actual behavior:*
>  * The application will often work. Occasionally it will use the offset from 
> another partition and assign it to the partition where the record failed. This can 
> then result in the consumer replaying messages instead of moving forward.
> I put together a sample that can recreate the error. However, given its 
> intermittent nature it may not fail on each run. I have included the logs 
> from 3 different runs on my laptop from this test. Two of them show the error 
> occurring. One of them has a successful run. I have also provided more 
> details in the README. 
>  * [https://github.com/CodeSmell/CamelKafkaOffset]
> This seems related to other issues with how Camel processes the 
> _breakOnFirstError_ attribute. 
>  * CAMEL-14935
>  * CAMEL-17925
>  * CAMEL-18350
>  * CAMEL-19894



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Comment Edited] (CAMEL-20044) camel-kafka - On rejoining consumer group Camel can set offset incorrectly causing messages to be replayed

2023-11-06 Thread Mike Barlotta (Jira)


[ 
https://issues.apache.org/jira/browse/CAMEL-20044?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17783355#comment-17783355
 ] 

Mike Barlotta edited comment on CAMEL-20044 at 11/6/23 8:22 PM:


In the _KafkaRecordProcessor_ it seems that using _record.offset_ for the 
forceCommit is avoiding the problem in this issue (line #158 in PR)
 * note: can still result in the _lastResult_ being wrong (as described above) 
but now we are not using it to force a commit
 ** when _lastResult_ is wrong then the message with the error is tried more 
than 1x

If we go with this fix, we may want to evaluate whether we need to pass along 
_lastResult_

 

_UPDATE: I ran the sample provided with CAMEL-19894_
 * _ran 2x without the fix and test failed_
 * _ran 3x with this fix and passed each time_


was (Author: g1antfan):
In the _KafkaRecordProcessor_ it seems that using _record.offset_ for the 
forceCommit is avoiding the problem in this issue (line #158 in PR)
 * note: can still result in the _lastResult_ being wrong (as described above) 
but now we are not using it to force a commit
 ** when _lastResult_ is wrong then the message with the error is tried more 
than 1x

If we go with this fix, we may want to evaluate whether we need to pass along 
_lastResult_

> camel-kafka - On rejoining consumer group Camel can set offset incorrectly 
> causing messages to be replayed
> --
>
> Key: CAMEL-20044
> URL: https://issues.apache.org/jira/browse/CAMEL-20044
> Project: Camel
>  Issue Type: Bug
>  Components: camel-kafka
>Affects Versions: 3.21.0
> Environment: * Rocky Linux 8.7
>  * Open JDK 11.0.8
>  * Camel 3.21.0
>  * Spring Boot 2.7.14
>  * Strimzi Kafka 0.28.0/3.0.0
>Reporter: Mike Barlotta
>Priority: Major
> Attachments: camel-kafka-offset.11-06-2023.log
>
>
> {*}Reproducing (intermittent){*}:
>  * Configure the Camel Kafka consumer with the following:
>  ** autoCommitEnable = false
>  ** allowManualCommit = true
>  ** autoOffsetReset = earliest
>  ** maxPollRecords = 1
>  ** breakOnFirstError = true
>  * Produce a series of records to Kafka, writing to both partitions.
>  * Throw an exception that is unhandled
>  * Commit the offset in the onException block
> *Expected behavior:*
>  * Application should consume the record 1 more time, then move on to the 
> next offset in the partition
> *Actual behavior:*
>  * The application will often work. Occasionally it will use the offset from 
> another partition and assign it to the partition where the record failed. This can 
> then result in the consumer replaying messages instead of moving forward.
> I put together a sample that can recreate the error. However, given its 
> intermittent nature it may not fail on each run. I have included the logs 
> from 3 different runs on my laptop from this test. Two of them show the error 
> occurring. One of them has a successful run. I have also provided more 
> details in the README. 
>  * [https://github.com/CodeSmell/CamelKafkaOffset]
> This seems related to other issues with how Camel processes the 
> _breakOnFirstError_ attribute. 
>  * CAMEL-14935
>  * CAMEL-17925
>  * CAMEL-18350
>  * CAMEL-19894



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Comment Edited] (CAMEL-20044) camel-kafka - On rejoining consumer group Camel can set offset incorrectly causing messages to be replayed

2023-11-06 Thread Mike Barlotta (Jira)


[ 
https://issues.apache.org/jira/browse/CAMEL-20044?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17783355#comment-17783355
 ] 

Mike Barlotta edited comment on CAMEL-20044 at 11/6/23 8:15 PM:


In the _KafkaRecordProcessor_ it seems that using _record.offset_ for the 
forceCommit is avoiding the problem in this issue (line #158 in PR)
 * note: can still result in the _lastResult_ being wrong (as described above) 
but now we are not using it to force a commit
 ** when _lastResult_ is wrong then the message with the error is tried more 
than 1x

If we go with this fix, we may want to evaluate whether we need to pass along 
_lastResult_


was (Author: g1antfan):
In the _KafkaRecordProcessor_ it seems that using _record.offset_ for the 
forceCommit is avoiding the problem in this issue (line #158 in PR)
 * note: can still result in the _lastResult_ being wrong but now we are not 
using it to force a commit
 ** when _lastResult_ is wrong then the message with the error is tried more 
than 1x

If we go with this fix, we may want to evaluate whether we need to pass along 
_lastResult_

> camel-kafka - On rejoining consumer group Camel can set offset incorrectly 
> causing messages to be replayed
> --
>
> Key: CAMEL-20044
> URL: https://issues.apache.org/jira/browse/CAMEL-20044
> Project: Camel
>  Issue Type: Bug
>  Components: camel-kafka
>Affects Versions: 3.21.0
> Environment: * Rocky Linux 8.7
>  * Open JDK 11.0.8
>  * Camel 3.21.0
>  * Spring Boot 2.7.14
>  * Strimzi Kafka 0.28.0/3.0.0
>Reporter: Mike Barlotta
>Priority: Major
> Attachments: camel-kafka-offset.11-06-2023.log
>
>
> {*}Reproducing (intermittent){*}:
>  * Configure the Camel Kafka consumer with the following:
>  ** autoCommitEnable = false
>  ** allowManualCommit = true
>  ** autoOffsetReset = earliest
>  ** maxPollRecords = 1
>  ** breakOnFirstError = true
>  * Produce a series of records to Kafka, writing to both partitions.
>  * Throw an exception that is unhandled
>  * Commit the offset in the onException block
> *Expected behavior:*
>  * Application should consume the record 1 more time, then move on to the 
> next offset in the partition
> *Actual behavior:*
>  * The application will often work. Occasionally it will use the offset from 
> another partition and assign it to the partition where the record failed. This can 
> then result in the consumer replaying messages instead of moving forward.
> I put together a sample that can recreate the error. However, given its 
> intermittent nature it may not fail on each run. I have included the logs 
> from 3 different runs on my laptop from this test. Two of them show the error 
> occurring. One of the them has a successful run. I have also provided more 
> details in the README. 
>  * [https://github.com/CodeSmell/CamelKafkaOffset]
> This seems related to other issues with how Camel processes the 
> _breakOnFirstError_ attribute. 
>  * CAMEL-14935
>  * CAMEL-17925
>  * CAMEL-18350
>  * CAMEL-19894



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Comment Edited] (CAMEL-20044) camel-kafka - On rejoining consumer group Camel can set offset incorrectly causing messages to be replayed

2023-11-06 Thread Mike Barlotta (Jira)


[ 
https://issues.apache.org/jira/browse/CAMEL-20044?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17783355#comment-17783355
 ] 

Mike Barlotta edited comment on CAMEL-20044 at 11/6/23 8:01 PM:


In the _KafkaRecordProcessor_ it seems that using _record.offset_ for the 
forceCommit is avoiding the problem in this issue (line #158 in PR)
 * note: can still result in the _lastResult_ being wrong but now we are not 
using it to force a commit
 ** when _lastResult_ is wrong then the message with the error is tried more 
than 1x

If we go with this fix, we may want to evaluate whether we still need to pass 
along _lastResult_


was (Author: g1antfan):
In the _KafkaRecordProcessor_ it seems that using _record.offset_ for the 
forceCommit is avoiding the problem in this issue (line #158 in PR)
 * note: can still result in the _lastResult_ being wrong but now we are not 
using it to force a commit
 ** when _lastResult_ is wrong then the message with the error is tried more 
than 1x

> camel-kafka - On rejoining consumer group Camel can set offset incorrectly 
> causing messages to be replayed
> --
>
> Key: CAMEL-20044
> URL: https://issues.apache.org/jira/browse/CAMEL-20044
> Project: Camel
>  Issue Type: Bug
>  Components: camel-kafka
>Affects Versions: 3.21.0
> Environment: * Rocky Linux 8.7
>  * Open JDK 11.0.8
>  * Camel 3.21.0
>  * Spring Boot 2.7.14
>  * Strimzi Kafka 0.28.0/3.0.0
>Reporter: Mike Barlotta
>Priority: Major
> Attachments: camel-kafka-offset.11-06-2023.log
>
>
> {*}Reproducing (intermittent){*}:
>  * Configure camel kafka consumer with following:
>  ** autoCommitEnable = false
>  ** allowManualCommit = true
>  ** autoOffsetReset = earliest
>  ** maxPollRecords = 1
>  ** breakOnFirstError = true
>  * Produce a series of records to kafka record to both partitions.
>  * Throw an exception that is unhandled
>  * commit the offset in the onException block
> *Expected behavior:*
>  * Application should consume the record 1 more time, then move on to the 
> next offset in the partition
> *Actual behavior:*
>  * Application will often work. Occasionally will use the offset from another 
> partition and assign that to the partition where the record failed. This can 
> then result in the consumer replaying messages instead of moving forward.
> I put together a sample that can recreate the error. However, given its 
> intermittent nature it may not fail on each run. I have included the logs 
> from 3 different runs on my laptop from this test. Two of them show the error 
> occurring. One of the them has a successful run. I have also provided more 
> details in the README. 
>  * [https://github.com/CodeSmell/CamelKafkaOffset]
> This seems related to other issues with how Camel processes the 
> _breakOnFirstError_ attribute. 
>  * CAMEL-14935
>  * CAMEL-17925
>  * CAMEL-18350
>  * CAMEL-19894



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Comment Edited] (CAMEL-20044) camel-kafka - On rejoining consumer group Camel can set offset incorrectly causing messages to be replayed

2023-11-06 Thread Mike Barlotta (Jira)


[ 
https://issues.apache.org/jira/browse/CAMEL-20044?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17783278#comment-17783278
 ] 

Mike Barlotta edited comment on CAMEL-20044 at 11/6/23 8:00 PM:


I got some time and added some extra logging statements to 
_KafkaRecordProcessor_ in the 3.21.x branch.

In this run there are 3 consumer threads:
 * thread 1 processes partition 2
 ** handles offset 0 and 1 fine
 ** 2:2 has an error
 * thread 2 processes partition 1
 ** handles offset 0 and 1 fine
 * thread 3 processes partition 0
 ** handles offset 0, 1, 2 fine
 ** 0:3 has an error

Threads 1 and 3 both unsubscribe.

 
{code:java}
2023-11-06 | 10:23:12.774 | WARN  | [Camel (camel-1) thread #3 - 
KafkaConsumer[foobarTopic]] | o.a.c.c.k.c.s.KafkaRecordProcessor 
(KafkaRecordProcessor.java:144) | Will seek consumer to offset 2 on partition 0 
and start polling again.
2023-11-06 | 10:23:12.782 | TRACE | [Camel (camel-1) thread #3 - 
KafkaConsumer[foobarTopic]] | o.a.c.c.kafka.KafkaFetchRecords 
(KafkaFetchRecords.java:354) | the polling iteration had a result returned for 
partition 0 and offset 2
2023-11-06 | 10:23:12.784 | TRACE | [Camel (camel-1) thread #1 - 
KafkaConsumer[foobarTopic]] | o.a.c.c.kafka.KafkaFetchRecords 
(KafkaFetchRecords.java:354) | the polling iteration had a result returned for 
partition 2 and offset 1
2023-11-06 | 10:23:12.784 | DEBUG | [Camel (camel-1) thread #3 - 
KafkaConsumer[foobarTopic]] | o.a.c.c.kafka.KafkaFetchRecords 
(KafkaFetchRecords.java:366) | We hit an error ... setting flags to force 
reconnect
2023-11-06 | 10:23:12.785 | DEBUG | [Camel (camel-1) thread #1 - 
KafkaConsumer[foobarTopic]] | o.a.c.c.kafka.KafkaFetchRecords 
(KafkaFetchRecords.java:366) | We hit an error ... setting flags to force 
reconnect
2023-11-06 | 10:23:12.785 | DEBUG | [Camel (camel-1) thread #3 - 
KafkaConsumer[foobarTopic]] | o.a.c.c.kafka.KafkaFetchRecords 
(KafkaFetchRecords.java:382) | Not reconnecting, check whether to auto-commit 
or not ...
2023-11-06 | 10:23:12.785 | DEBUG | [Camel (camel-1) thread #1 - 
KafkaConsumer[foobarTopic]] | o.a.c.c.kafka.KafkaFetchRecords 
(KafkaFetchRecords.java:382) | Not reconnecting, check whether to auto-commit 
or not ...
2023-11-06 | 10:23:12.785 | INFO  | [Camel (camel-1) thread #3 - 
KafkaConsumer[foobarTopic]] | o.a.c.c.k.consumer.NoopCommitManager 
(NoopCommitManager.java:35) | Auto commit on foobarTopic-Thread 2 from topic 
foobarTopic is enabled via Kafka consumer (NO-OP)
2023-11-06 | 10:23:12.785 | INFO  | [Camel (camel-1) thread #1 - 
KafkaConsumer[foobarTopic]] | o.a.c.c.k.consumer.NoopCommitManager 
(NoopCommitManager.java:35) | Auto commit on foobarTopic-Thread 0 from topic 
foobarTopic is enabled via Kafka consumer (NO-OP)
2023-11-06 | 10:23:12.785 | DEBUG | [Camel (camel-1) thread #3 - 
KafkaConsumer[foobarTopic]] | o.a.c.c.kafka.KafkaFetchRecords 
(KafkaFetchRecords.java:468) | Unsubscribing from Kafka
2023-11-06 | 10:23:12.785 | DEBUG | [Camel (camel-1) thread #1 - 
KafkaConsumer[foobarTopic]] | o.a.c.c.kafka.KafkaFetchRecords 
(KafkaFetchRecords.java:468) | Unsubscribing from Kafka {code}
The last we heard from thread #2 is that it had processed 1:1 and was manually 
committing

 

 
{code:java}
2023-11-06 | 10:23:12.636 | INFO  | [Camel (camel-1) thread #2 - 
KafkaConsumer[foobarTopic]] | c.c.k.KafkaOffsetManagerProcessor 
(KafkaOffsetManagerProcessor.java:49) | manually committing the offset for batch
Message consumed from foobarTopic
The Partition:Offset is 1:1
The Key is null
10 {code}
*The _lastResult_ of 1:1 on thread #2 will end up causing the problem*
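
To make that concrete, a simplified illustration of the suspected stale-state problem follows; the class and field names are my own stand-ins, not the actual Camel internals.
{code:java}
// Simplified illustration only, NOT the Camel implementation.
// "ProcessingResult" stands in for whatever carries (partition, offset) between iterations.
class ProcessingResult {
    final int partition;
    final long offset;

    ProcessingResult(int partition, long offset) {
        this.partition = partition;
        this.offset = offset;
    }

    static ProcessingResult empty() {
        // matches the -1/-1 seen in "Will seek consumer to offset -1 on partition -1"
        return new ProcessingResult(-1, -1L);
    }
}

class PollLoopSketch {
    // If this survives across polls (and across re-subscribes), an error raised
    // before the first record of a partition is processed can see a value that
    // belongs to a different partition or an earlier cycle.
    private ProcessingResult lastResult = ProcessingResult.empty();

    void onPollStart() {
        // clearing the per-poll state up front would avoid seeking/committing
        // against a partition/offset pair left over from somewhere else
        lastResult = ProcessingResult.empty();
    }
}
{code}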

 

When Camel resubscribes, we get this:
 * thread 1 processes partition 2
 ** based on Camel logic it reprocesses 2:2
 ** 2:2 has an error

The other threads don't have a chance to do anything before Camel unsubscribes 
{code:java}
2023-11-06 | 10:23:15.421 | WARN  | [Camel (camel-1) thread #1 - 
KafkaConsumer[foobarTopic]] | o.a.c.c.k.c.s.KafkaRecordProcessor 
(KafkaRecordProcessor.java:144) | Will seek consumer to offset -1 on partition 
-1 and start polling again.
2023-11-06 | 10:23:15.421 | TRACE | [Camel (camel-1) thread #1 - 
KafkaConsumer[foobarTopic]] | o.a.c.c.kafka.KafkaFetchRecords 
(KafkaFetchRecords.java:354) | the polling iteration had a result returned for 
partition -1 and offset -1
2023-11-06 | 10:23:15.421 | DEBUG | [Camel (camel-1) thread #1 - 
KafkaConsumer[foobarTopic]] | o.a.c.c.kafka.KafkaFetchRecords 
(KafkaFetchRecords.java:366) | We hit an error ... setting flags to force 
reconnect
2023-11-06 | 10:23:15.421 | DEBUG | [Camel (camel-1) thread #1 - 
KafkaConsumer[foobarTopic]] | o.a.c.c.kafka.KafkaFetchRecords 
(KafkaFetchRecords.java:382) | Not reconnecting, check whether to auto-commit 
or not ...
2023-11-06 | 10:23:15.421 | INFO  | [Camel (camel-1) thread #1 - 
KafkaConsumer[foobarTopic]] | o.a.c.c.k.consumer.NoopCommitManager 
(NoopCommitManager.java:35)

[jira] [Commented] (CAMEL-20044) camel-kafka - On rejoining consumer group Camel can set offset incorrectly causing messages to be replayed

2023-11-06 Thread Mike Barlotta (Jira)


[ 
https://issues.apache.org/jira/browse/CAMEL-20044?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17783355#comment-17783355
 ] 

Mike Barlotta commented on CAMEL-20044:
---

In the _KafkaRecordProcessor_ it seems that using _record.offset_ for the 
forceCommit is avoiding the problem in this issue (line #158 in PR)
 * note: can still result in the _lastResult_ being wrong but now we are not 
using it to force a commit
 ** when _lastResult_ is wrong then the message with the error is tried more 
than 1x

> camel-kafka - On rejoining consumer group Camel can set offset incorrectly 
> causing messages to be replayed
> --
>
> Key: CAMEL-20044
> URL: https://issues.apache.org/jira/browse/CAMEL-20044
> Project: Camel
>  Issue Type: Bug
>  Components: camel-kafka
>Affects Versions: 3.21.0
> Environment: * Rocky Linux 8.7
>  * Open JDK 11.0.8
>  * Camel 3.21.0
>  * Spring Boot 2.7.14
>  * Strimzi Kafka 0.28.0/3.0.0
>Reporter: Mike Barlotta
>Priority: Major
> Attachments: camel-kafka-offset.11-06-2023.log
>
>
> {*}Reproducing (intermittent){*}:
>  * Configure camel kafka consumer with following:
>  ** autoCommitEnable = false
>  ** allowManualCommit = true
>  ** autoOffsetReset = earliest
>  ** maxPollRecords = 1
>  ** breakOnFirstError = true
>  * Produce a series of records to kafka record to both partitions.
>  * Throw an exception that is unhandled
>  * commit the offset in the onException block
> *Expected behavior:*
>  * Application should consume the record 1 more time, then move on to the 
> next offset in the partition
> *Actual behavior:*
>  * Application will often work. Occasionally will use the offset from another 
> partition and assign that to the partition where the record failed. This can 
> then result in the consumer replaying messages instead of moving forward.
> I put together a sample that can recreate the error. However, given its 
> intermittent nature it may not fail on each run. I have included the logs 
> from 3 different runs on my laptop from this test. Two of them show the error 
> occurring. One of the them has a successful run. I have also provided more 
> details in the README. 
>  * [https://github.com/CodeSmell/CamelKafkaOffset]
> This seems related to other issues with how Camel processes the 
> _breakOnFirstError_ attribute. 
>  * CAMEL-14935
>  * CAMEL-17925
>  * CAMEL-18350
>  * CAMEL-19894



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Comment Edited] (CAMEL-20044) camel-kafka - On rejoining consumer group Camel can set offset incorrectly causing messages to be replayed

2023-11-06 Thread Mike Barlotta (Jira)


[ 
https://issues.apache.org/jira/browse/CAMEL-20044?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17783278#comment-17783278
 ] 

Mike Barlotta edited comment on CAMEL-20044 at 11/6/23 5:49 PM:


Got some time and I've added some extra logging statements to 
_KafkaRecordProcessor_ in the 3.21.x branch

in this run there are 3 consumer threads
 * thread 1 processes partition 2
 ** handles offset 0 and 1 fine
 ** 2:2 has an error
 * thread 2 processes partition 1
 ** handles offset 0 and 1 fine
 * thread 3 processes partition 0
 ** handles offset 0, 1, 2 fine
 ** 0:3 has an error

thread 1 and 3 both unsubscribe

 
{code:java}
2023-11-06 | 10:23:12.774 | WARN  | [Camel (camel-1) thread #3 - 
KafkaConsumer[foobarTopic]] | o.a.c.c.k.c.s.KafkaRecordProcessor 
(KafkaRecordProcessor.java:144) | Will seek consumer to offset 2 on partition 0 
and start polling again.
2023-11-06 | 10:23:12.782 | TRACE | [Camel (camel-1) thread #3 - 
KafkaConsumer[foobarTopic]] | o.a.c.c.kafka.KafkaFetchRecords 
(KafkaFetchRecords.java:354) | the polling iteration had a result returned for 
partition 0 and offset 2
2023-11-06 | 10:23:12.784 | TRACE | [Camel (camel-1) thread #1 - 
KafkaConsumer[foobarTopic]] | o.a.c.c.kafka.KafkaFetchRecords 
(KafkaFetchRecords.java:354) | the polling iteration had a result returned for 
partition 2 and offset 1
2023-11-06 | 10:23:12.784 | DEBUG | [Camel (camel-1) thread #3 - 
KafkaConsumer[foobarTopic]] | o.a.c.c.kafka.KafkaFetchRecords 
(KafkaFetchRecords.java:366) | We hit an error ... setting flags to force 
reconnect
2023-11-06 | 10:23:12.785 | DEBUG | [Camel (camel-1) thread #1 - 
KafkaConsumer[foobarTopic]] | o.a.c.c.kafka.KafkaFetchRecords 
(KafkaFetchRecords.java:366) | We hit an error ... setting flags to force 
reconnect
2023-11-06 | 10:23:12.785 | DEBUG | [Camel (camel-1) thread #3 - 
KafkaConsumer[foobarTopic]] | o.a.c.c.kafka.KafkaFetchRecords 
(KafkaFetchRecords.java:382) | Not reconnecting, check whether to auto-commit 
or not ...
2023-11-06 | 10:23:12.785 | DEBUG | [Camel (camel-1) thread #1 - 
KafkaConsumer[foobarTopic]] | o.a.c.c.kafka.KafkaFetchRecords 
(KafkaFetchRecords.java:382) | Not reconnecting, check whether to auto-commit 
or not ...
2023-11-06 | 10:23:12.785 | INFO  | [Camel (camel-1) thread #3 - 
KafkaConsumer[foobarTopic]] | o.a.c.c.k.consumer.NoopCommitManager 
(NoopCommitManager.java:35) | Auto commit on foobarTopic-Thread 2 from topic 
foobarTopic is enabled via Kafka consumer (NO-OP)
2023-11-06 | 10:23:12.785 | INFO  | [Camel (camel-1) thread #1 - 
KafkaConsumer[foobarTopic]] | o.a.c.c.k.consumer.NoopCommitManager 
(NoopCommitManager.java:35) | Auto commit on foobarTopic-Thread 0 from topic 
foobarTopic is enabled via Kafka consumer (NO-OP)
2023-11-06 | 10:23:12.785 | DEBUG | [Camel (camel-1) thread #3 - 
KafkaConsumer[foobarTopic]] | o.a.c.c.kafka.KafkaFetchRecords 
(KafkaFetchRecords.java:468) | Unsubscribing from Kafka
2023-11-06 | 10:23:12.785 | DEBUG | [Camel (camel-1) thread #1 - 
KafkaConsumer[foobarTopic]] | o.a.c.c.kafka.KafkaFetchRecords 
(KafkaFetchRecords.java:468) | Unsubscribing from Kafka {code}
The last we heard from thread #2 is that it had processed 1:1 and was manually 
committing

 

 
{code:java}
2023-11-06 | 10:23:12.636 | INFO  | [Camel (camel-1) thread #2 - 
KafkaConsumer[foobarTopic]] | c.c.k.KafkaOffsetManagerProcessor 
(KafkaOffsetManagerProcessor.java:49) | manually committing the offset for batch
Message consumed from foobarTopic
The Partition:Offset is 1:1
The Key is null
10 {code}
*The _lastResult_ of 1:1 on thread #2 will end up causing the problem*

 

When Camel resubscribes we get this
 * thread 1 processes partition 2
 ** based on Camel logic it reprocesses 2:2
 ** 2:2 has an error

The other threads don't have a chance to do anything before Camel unsubscribes 
{code:java}
2023-11-06 | 10:23:15.421 | WARN  | [Camel (camel-1) thread #1 - 
KafkaConsumer[foobarTopic]] | o.a.c.c.k.c.s.KafkaRecordProcessor 
(KafkaRecordProcessor.java:144) | Will seek consumer to offset -1 on partition 
-1 and start polling again.
2023-11-06 | 10:23:15.421 | TRACE | [Camel (camel-1) thread #1 - 
KafkaConsumer[foobarTopic]] | o.a.c.c.kafka.KafkaFetchRecords 
(KafkaFetchRecords.java:354) | the polling iteration had a result returned for 
partition -1 and offset -1
2023-11-06 | 10:23:15.421 | DEBUG | [Camel (camel-1) thread #1 - 
KafkaConsumer[foobarTopic]] | o.a.c.c.kafka.KafkaFetchRecords 
(KafkaFetchRecords.java:366) | We hit an error ... setting flags to force 
reconnect
2023-11-06 | 10:23:15.421 | DEBUG | [Camel (camel-1) thread #1 - 
KafkaConsumer[foobarTopic]] | o.a.c.c.kafka.KafkaFetchRecords 
(KafkaFetchRecords.java:382) | Not reconnecting, check whether to auto-commit 
or not ...
2023-11-06 | 10:23:15.421 | INFO  | [Camel (camel-1) thread #1 - 
KafkaConsumer[foobarTopic]] | o.a.c.c.k.consumer.NoopCommitManager 
(NoopCommitManager.java:35)

[jira] [Comment Edited] (CAMEL-20044) camel-kafka - On rejoining consumer group Camel can set offset incorrectly causing messages to be replayed

2023-11-06 Thread Mike Barlotta (Jira)


[ 
https://issues.apache.org/jira/browse/CAMEL-20044?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17783278#comment-17783278
 ] 

Mike Barlotta edited comment on CAMEL-20044 at 11/6/23 5:48 PM:


Got some time and I've added some extra logging statements to 
_KafkaRecordProcessor_ in the 3.21.x branch

in this run there are 3 consumer threads
 * thread 1 processes partition 2
 ** handles offset 0 and 1 fine
 ** 2:2 has an error
 * thread 2 processes partition 1
 ** handles offset 0 and 1 fine
 * thread 3 processes partition 0
 ** handles offset 0, 1, 2 fine
 ** 0:3 has an error

thread 1 and 3 both unsubscribe

 
{code:java}
2023-11-06 | 10:23:12.774 | WARN  | [Camel (camel-1) thread #3 - 
KafkaConsumer[foobarTopic]] | o.a.c.c.k.c.s.KafkaRecordProcessor 
(KafkaRecordProcessor.java:144) | Will seek consumer to offset 2 on partition 0 
and start polling again.
2023-11-06 | 10:23:12.782 | TRACE | [Camel (camel-1) thread #3 - 
KafkaConsumer[foobarTopic]] | o.a.c.c.kafka.KafkaFetchRecords 
(KafkaFetchRecords.java:354) | the polling iteration had a result returned for 
partition 0 and offset 2
2023-11-06 | 10:23:12.784 | TRACE | [Camel (camel-1) thread #1 - 
KafkaConsumer[foobarTopic]] | o.a.c.c.kafka.KafkaFetchRecords 
(KafkaFetchRecords.java:354) | the polling iteration had a result returned for 
partition 2 and offset 1
2023-11-06 | 10:23:12.784 | DEBUG | [Camel (camel-1) thread #3 - 
KafkaConsumer[foobarTopic]] | o.a.c.c.kafka.KafkaFetchRecords 
(KafkaFetchRecords.java:366) | We hit an error ... setting flags to force 
reconnect
2023-11-06 | 10:23:12.785 | DEBUG | [Camel (camel-1) thread #1 - 
KafkaConsumer[foobarTopic]] | o.a.c.c.kafka.KafkaFetchRecords 
(KafkaFetchRecords.java:366) | We hit an error ... setting flags to force 
reconnect
2023-11-06 | 10:23:12.785 | DEBUG | [Camel (camel-1) thread #3 - 
KafkaConsumer[foobarTopic]] | o.a.c.c.kafka.KafkaFetchRecords 
(KafkaFetchRecords.java:382) | Not reconnecting, check whether to auto-commit 
or not ...
2023-11-06 | 10:23:12.785 | DEBUG | [Camel (camel-1) thread #1 - 
KafkaConsumer[foobarTopic]] | o.a.c.c.kafka.KafkaFetchRecords 
(KafkaFetchRecords.java:382) | Not reconnecting, check whether to auto-commit 
or not ...
2023-11-06 | 10:23:12.785 | INFO  | [Camel (camel-1) thread #3 - 
KafkaConsumer[foobarTopic]] | o.a.c.c.k.consumer.NoopCommitManager 
(NoopCommitManager.java:35) | Auto commit on foobarTopic-Thread 2 from topic 
foobarTopic is enabled via Kafka consumer (NO-OP)
2023-11-06 | 10:23:12.785 | INFO  | [Camel (camel-1) thread #1 - 
KafkaConsumer[foobarTopic]] | o.a.c.c.k.consumer.NoopCommitManager 
(NoopCommitManager.java:35) | Auto commit on foobarTopic-Thread 0 from topic 
foobarTopic is enabled via Kafka consumer (NO-OP)
2023-11-06 | 10:23:12.785 | DEBUG | [Camel (camel-1) thread #3 - 
KafkaConsumer[foobarTopic]] | o.a.c.c.kafka.KafkaFetchRecords 
(KafkaFetchRecords.java:468) | Unsubscribing from Kafka
2023-11-06 | 10:23:12.785 | DEBUG | [Camel (camel-1) thread #1 - 
KafkaConsumer[foobarTopic]] | o.a.c.c.kafka.KafkaFetchRecords 
(KafkaFetchRecords.java:468) | Unsubscribing from Kafka {code}
The last we heard from thread #2 is that it had processed 1:1 and was manually 
committing

 

 
{code:java}
2023-11-06 | 10:23:12.636 | INFO  | [Camel (camel-1) thread #2 - 
KafkaConsumer[foobarTopic]] | c.c.k.KafkaOffsetManagerProcessor 
(KafkaOffsetManagerProcessor.java:49) | manually committing the offset for batch
Message consumed from foobarTopic
The Partition:Offset is 1:1
The Key is null
10 {code}
*The _lastResult_ of 1:1 on thread #2 will end up causing the problem*

 

When Camel resubscribes we get this
 * thread 1 processes partition 2
 ** based on Camel logic it reprocesses 2:2
 ** 2:2 has an error

The other threads don't have a chance to do anything before Camel unsubscribes 
{code:java}
2023-11-06 | 10:23:15.421 | WARN  | [Camel (camel-1) thread #1 - 
KafkaConsumer[foobarTopic]] | o.a.c.c.k.c.s.KafkaRecordProcessor 
(KafkaRecordProcessor.java:144) | Will seek consumer to offset -1 on partition 
-1 and start polling again.
2023-11-06 | 10:23:15.421 | TRACE | [Camel (camel-1) thread #1 - 
KafkaConsumer[foobarTopic]] | o.a.c.c.kafka.KafkaFetchRecords 
(KafkaFetchRecords.java:354) | the polling iteration had a result returned for 
partition -1 and offset -1
2023-11-06 | 10:23:15.421 | DEBUG | [Camel (camel-1) thread #1 - 
KafkaConsumer[foobarTopic]] | o.a.c.c.kafka.KafkaFetchRecords 
(KafkaFetchRecords.java:366) | We hit an error ... setting flags to force 
reconnect
2023-11-06 | 10:23:15.421 | DEBUG | [Camel (camel-1) thread #1 - 
KafkaConsumer[foobarTopic]] | o.a.c.c.kafka.KafkaFetchRecords 
(KafkaFetchRecords.java:382) | Not reconnecting, check whether to auto-commit 
or not ...
2023-11-06 | 10:23:15.421 | INFO  | [Camel (camel-1) thread #1 - 
KafkaConsumer[foobarTopic]] | o.a.c.c.k.consumer.NoopCommitManager 
(NoopCommitManager.java:35)

[jira] [Updated] (CAMEL-20044) camel-kafka - On rejoining consumer group Camel can set offset incorrectly causing messages to be replayed

2023-11-06 Thread Mike Barlotta (Jira)


 [ 
https://issues.apache.org/jira/browse/CAMEL-20044?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mike Barlotta updated CAMEL-20044:
--
Attachment: camel-kafka-offset.11-06-2023.log

> camel-kafka - On rejoining consumer group Camel can set offset incorrectly 
> causing messages to be replayed
> --
>
> Key: CAMEL-20044
> URL: https://issues.apache.org/jira/browse/CAMEL-20044
> Project: Camel
>  Issue Type: Bug
>  Components: camel-kafka
>Affects Versions: 3.21.0
> Environment: * Rocky Linux 8.7
>  * Open JDK 11.0.8
>  * Camel 3.21.0
>  * Spring Boot 2.7.14
>  * Strimzi Kafka 0.28.0/3.0.0
>Reporter: Mike Barlotta
>Priority: Major
> Attachments: camel-kafka-offset.11-06-2023.log
>
>
> {*}Reproducing (intermittent){*}:
>  * Configure camel kafka consumer with following:
>  ** autoCommitEnable = false
>  ** allowManualCommit = true
>  ** autoOffsetReset = earliest
>  ** maxPollRecords = 1
>  ** breakOnFirstError = true
>  * Produce a series of records to kafka record to both partitions.
>  * Throw an exception that is unhandled
>  * commit the offset in the onException block
> *Expected behavior:*
>  * Application should consume the record 1 more time, then move on to the 
> next offset in the partition
> *Actual behavior:*
>  * Application will often work. Occasionally will use the offset from another 
> partition and assign that to the partition where the record failed. This can 
> then result in the consumer replaying messages instead of moving forward.
> I put together a sample that can recreate the error. However, given its 
> intermittent nature it may not fail on each run. I have included the logs 
> from 3 different runs on my laptop from this test. Two of them show the error 
> occurring. One of the them has a successful run. I have also provided more 
> details in the README. 
>  * [https://github.com/CodeSmell/CamelKafkaOffset]
> This seems related to other issues with how Camel processes the 
> _breakOnFirstError_ attribute. 
>  * CAMEL-14935
>  * CAMEL-17925
>  * CAMEL-18350
>  * CAMEL-19894



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (CAMEL-20044) camel-kafka - On rejoining consumer group Camel can set offset incorrectly causing messages to be replayed

2023-11-06 Thread Mike Barlotta (Jira)


[ 
https://issues.apache.org/jira/browse/CAMEL-20044?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17783278#comment-17783278
 ] 

Mike Barlotta commented on CAMEL-20044:
---

Got some time and I've added some extra logging statements to 
_KafkaRecordProcessor_ in the 3.21.x branch


in this run there are 3 consumer threads
 * thread 1 processes partition 2
 ** handles offset 0 and 1 fine
 ** 2:2 has an error
 * thread 2 processes partition 1
 ** handles offset 0 and 1 fine
 * thread 3 processes partition 0
 ** handles offset 0, 1, 2 fine
 ** 0:3 has an error

thread 1 and 3 both unsubscribe

 
{code:java}
2023-11-06 | 10:23:12.774 | WARN  | [Camel (camel-1) thread #3 - 
KafkaConsumer[foobarTopic]] | o.a.c.c.k.c.s.KafkaRecordProcessor 
(KafkaRecordProcessor.java:144) | Will seek consumer to offset 2 on partition 0 
and start polling again.
2023-11-06 | 10:23:12.782 | TRACE | [Camel (camel-1) thread #3 - 
KafkaConsumer[foobarTopic]] | o.a.c.c.kafka.KafkaFetchRecords 
(KafkaFetchRecords.java:354) | the polling iteration had a result returned for 
partition 0 and offset 2
2023-11-06 | 10:23:12.784 | TRACE | [Camel (camel-1) thread #1 - 
KafkaConsumer[foobarTopic]] | o.a.c.c.kafka.KafkaFetchRecords 
(KafkaFetchRecords.java:354) | the polling iteration had a result returned for 
partition 2 and offset 1
2023-11-06 | 10:23:12.784 | DEBUG | [Camel (camel-1) thread #3 - 
KafkaConsumer[foobarTopic]] | o.a.c.c.kafka.KafkaFetchRecords 
(KafkaFetchRecords.java:366) | We hit an error ... setting flags to force 
reconnect
2023-11-06 | 10:23:12.785 | DEBUG | [Camel (camel-1) thread #1 - 
KafkaConsumer[foobarTopic]] | o.a.c.c.kafka.KafkaFetchRecords 
(KafkaFetchRecords.java:366) | We hit an error ... setting flags to force 
reconnect
2023-11-06 | 10:23:12.785 | DEBUG | [Camel (camel-1) thread #3 - 
KafkaConsumer[foobarTopic]] | o.a.c.c.kafka.KafkaFetchRecords 
(KafkaFetchRecords.java:382) | Not reconnecting, check whether to auto-commit 
or not ...
2023-11-06 | 10:23:12.785 | DEBUG | [Camel (camel-1) thread #1 - 
KafkaConsumer[foobarTopic]] | o.a.c.c.kafka.KafkaFetchRecords 
(KafkaFetchRecords.java:382) | Not reconnecting, check whether to auto-commit 
or not ...
2023-11-06 | 10:23:12.785 | INFO  | [Camel (camel-1) thread #3 - 
KafkaConsumer[foobarTopic]] | o.a.c.c.k.consumer.NoopCommitManager 
(NoopCommitManager.java:35) | Auto commit on foobarTopic-Thread 2 from topic 
foobarTopic is enabled via Kafka consumer (NO-OP)
2023-11-06 | 10:23:12.785 | INFO  | [Camel (camel-1) thread #1 - 
KafkaConsumer[foobarTopic]] | o.a.c.c.k.consumer.NoopCommitManager 
(NoopCommitManager.java:35) | Auto commit on foobarTopic-Thread 0 from topic 
foobarTopic is enabled via Kafka consumer (NO-OP)
2023-11-06 | 10:23:12.785 | DEBUG | [Camel (camel-1) thread #3 - 
KafkaConsumer[foobarTopic]] | o.a.c.c.kafka.KafkaFetchRecords 
(KafkaFetchRecords.java:468) | Unsubscribing from Kafka
2023-11-06 | 10:23:12.785 | DEBUG | [Camel (camel-1) thread #1 - 
KafkaConsumer[foobarTopic]] | o.a.c.c.kafka.KafkaFetchRecords 
(KafkaFetchRecords.java:468) | Unsubscribing from Kafka {code}
The last we heard from thread #2 is that it had processed 1:1 and was manually 
committing

 

 
{code:java}
2023-11-06 | 10:23:12.636 | INFO  | [Camel (camel-1) thread #2 - 
KafkaConsumer[foobarTopic]] | c.c.k.KafkaOffsetManagerProcessor 
(KafkaOffsetManagerProcessor.java:49) | manually committing the offset for batch
Message consumed from foobarTopic
The Partition:Offset is 1:1
The Key is null
10 {code}
*The _lastResult_ of 1:1 on thread #2 will end up causing the problem*

 

When Camel resubscribes we get this
 * thread 1 processes partition 2
 ** based on Camel logic it reprocesses 2:2
 ** 2:2 has an error

The other threads don't have a chance to do anything before Camel unsubscribes 
{code:java}
2023-11-06 | 10:23:15.421 | WARN  | [Camel (camel-1) thread #1 - 
KafkaConsumer[foobarTopic]] | o.a.c.c.k.c.s.KafkaRecordProcessor 
(KafkaRecordProcessor.java:144) | Will seek consumer to offset -1 on partition 
-1 and start polling again.
2023-11-06 | 10:23:15.421 | TRACE | [Camel (camel-1) thread #1 - 
KafkaConsumer[foobarTopic]] | o.a.c.c.kafka.KafkaFetchRecords 
(KafkaFetchRecords.java:354) | the polling iteration had a result returned for 
partition -1 and offset -1
2023-11-06 | 10:23:15.421 | DEBUG | [Camel (camel-1) thread #1 - 
KafkaConsumer[foobarTopic]] | o.a.c.c.kafka.KafkaFetchRecords 
(KafkaFetchRecords.java:366) | We hit an error ... setting flags to force 
reconnect
2023-11-06 | 10:23:15.421 | DEBUG | [Camel (camel-1) thread #1 - 
KafkaConsumer[foobarTopic]] | o.a.c.c.kafka.KafkaFetchRecords 
(KafkaFetchRecords.java:382) | Not reconnecting, check whether to auto-commit 
or not ...
2023-11-06 | 10:23:15.421 | INFO  | [Camel (camel-1) thread #1 - 
KafkaConsumer[foobarTopic]] | o.a.c.c.k.consumer.NoopCommitManager 
(NoopCommitManager.java:35) | Auto commit on foobarTopic-Thread 0 from topi

[jira] [Updated] (CAMEL-20044) camel-kafka - On rejoining consumer group Camel can set offset incorrectly causing messages to be replayed

2023-11-06 Thread Mike Barlotta (Jira)


 [ 
https://issues.apache.org/jira/browse/CAMEL-20044?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mike Barlotta updated CAMEL-20044:
--
Description: 
{*}Reproducing (intermittent){*}:
 * Configure camel kafka consumer with following:
 ** autoCommitEnable = false
 ** allowManualCommit = true
 ** autoOffsetReset = earliest
 ** maxPollRecords = 1
 ** breakOnFirstError = true
 * Produce a series of records to the kafka topic, to both partitions.
 * Throw an exception that is unhandled
 * Commit the offset in the onException block

*Expected behavior:*
 * Application should consume the record 1 more time, then move on to the next 
offset in the partition

*Actual behavior:*
 * The application will often work. Occasionally it will use the offset from another 
partition and assign that to the partition where the record failed. This can 
then result in the consumer replaying messages instead of moving forward.

I put together a sample that can recreate the error. However, given its 
intermittent nature it may not fail on each run. I have included the logs from 
3 different runs of this test on my laptop. Two of them show the error 
occurring. One of them has a successful run. I have also provided more 
details in the README. 
 * [https://github.com/CodeSmell/CamelKafkaOffset]

This seems related to other issues with how Camel processes the 
_breakOnFirstError_ attribute. 
 * CAMEL-14935
 * CAMEL-17925
 * CAMEL-18350
 * CAMEL-19894

  was:
{*}Reproducing (intermittent){*}:
 * Configure camel kafka consumer with following:
 ** autoCommitEnable = false
 ** allowManualCommit = true
 ** autoOffsetReset = earliest
 ** maxPollRecords = 1
 ** breakOnFirstError = true
 * Produce a series of records to kafka record to both partitions.
 * Throw an exception that is unhandled
 * commit the offset in the onException block

*Expected behavior:*
 * Application should consume the record 1 more time, then move on to the next 
offset in the partition

*Actual behavior:*
 * Application will often work. Occasionally will use the offset from another 
partition and assign that to the partition where the record failed. This can 
then result in the consumer replaying messages instead of moving forward.

I put together a sample that can recreate the error. However, given its 
intermittent nature it may not fail on each run. I have included the logs from 
3 different runs on my laptop from this test. Two of them show the error 
occurring. One of the them has a successful run. I have also provided more 
details in the README. 
 * [https://github.com/CodeSmell/CamelKafkaOffset]

This seems related to other issues with how Camel processes the 
_breakOnFirstError_ attribute. 
 * CAMEL-17925
 * CAMEL-14935
 * CAMEL-18350
 * CAMEL-19894


> camel-kafka - On rejoining consumer group Camel can set offset incorrectly 
> causing messages to be replayed
> --
>
> Key: CAMEL-20044
> URL: https://issues.apache.org/jira/browse/CAMEL-20044
> Project: Camel
>  Issue Type: Bug
>  Components: camel-kafka
>Affects Versions: 3.21.0
> Environment: * Rocky Linux 8.7
>  * Open JDK 11.0.8
>  * Camel 3.21.0
>  * Spring Boot 2.7.14
>  * Strimzi Kafka 0.28.0/3.0.0
>Reporter: Mike Barlotta
>Priority: Major
>
> {*}Reproducing (intermittent){*}:
>  * Configure camel kafka consumer with following:
>  ** autoCommitEnable = false
>  ** allowManualCommit = true
>  ** autoOffsetReset = earliest
>  ** maxPollRecords = 1
>  ** breakOnFirstError = true
>  * Produce a series of records to kafka record to both partitions.
>  * Throw an exception that is unhandled
>  * commit the offset in the onException block
> *Expected behavior:*
>  * Application should consume the record 1 more time, then move on to the 
> next offset in the partition
> *Actual behavior:*
>  * Application will often work. Occasionally will use the offset from another 
> partition and assign that to the partition where the record failed. This can 
> then result in the consumer replaying messages instead of moving forward.
> I put together a sample that can recreate the error. However, given its 
> intermittent nature it may not fail on each run. I have included the logs 
> from 3 different runs on my laptop from this test. Two of them show the error 
> occurring. One of the them has a successful run. I have also provided more 
> details in the README. 
>  * [https://github.com/CodeSmell/CamelKafkaOffset]
> This seems related to other issues with how Camel processes the 
> _breakOnFirstError_ attribute. 
>  * CAMEL-14935
>  * CAMEL-17925
>  * CAMEL-18350
>  * CAMEL-19894



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (CAMEL-20044) camel-kafka - On rejoining consumer group Camel can set offset incorrectly causing messages to be replayed

2023-11-06 Thread Mike Barlotta (Jira)


 [ 
https://issues.apache.org/jira/browse/CAMEL-20044?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mike Barlotta updated CAMEL-20044:
--
Description: 
{*}Reproducing (intermittent){*}:
 * Configure camel kafka consumer with following:
 ** autoCommitEnable = false
 ** allowManualCommit = true
 ** autoOffsetReset = earliest
 ** maxPollRecords = 1
 ** breakOnFirstError = true
 * Produce a series of records to kafka record to both partitions.
 * Throw an exception that is unhandled
 * commit the offset in the onException block

*Expected behavior:*
 * Application should consume the record 1 more time, then move on to the next 
offset in the partition

*Actual behavior:*
 * Application will often work. Occasionally will use the offset from another 
partition and assign that to the partition where the record failed. This can 
then result in the consumer replaying messages instead of moving forward.

I put together a sample that can recreate the error. However, given its 
intermittent nature it may not fail on each run. I have included the logs from 
3 different runs on my laptop from this test. Two of them show the error 
occurring. One of the them has a successful run. I have also provided more 
details in the README. 
 * [https://github.com/CodeSmell/CamelKafkaOffset]

This seems related to other issues with how Camel processes the 
_breakOnFirstError_ attribute. 
 * CAMEL-17925
 * CAMEL-14935
 * CAMEL-18350
 * CAMEL-19894

  was:
{*}Reproducing (intermittent){*}:
 * Configure camel kafka consumer with following:
 ** autoCommitEnable = false
 ** allowManualCommit = true
 ** autoOffsetReset = earliest
 ** maxPollRecords = 1
 ** breakOnFirstError = true
 * Produce a series of records to kafka record to both partitions.
 * Throw an exception that is unhandled
 * commit the offset in the onException block

*Expected behavior:*
 * Application should consume the record 1 more time, then move on to the next 
offset in the partition

*Actual behavior:*
 * Application will often work. Occasionally will use the offset from another 
partition and assign that to the partition where the record failed. This can 
then result in the consumer replaying messages instead of moving forward.

I put together a sample that can recreate the error. However, given its 
intermittent nature it may not fail on each run. I have included the logs from 
3 different runs on my laptop from this test. Two of them show the error 
occurring. One of the them has a successful run. I have also provided more 
details in the README. 
 * [https://github.com/CodeSmell/CamelKafkaOffset]

This seems related to other issues with how Camel processes the 
_breakOnFirstError_ attribute. 
 * CAMEL-14935
 * CAMEL-18350
 * CAMEL-19894


> camel-kafka - On rejoining consumer group Camel can set offset incorrectly 
> causing messages to be replayed
> --
>
> Key: CAMEL-20044
> URL: https://issues.apache.org/jira/browse/CAMEL-20044
> Project: Camel
>  Issue Type: Bug
>  Components: camel-kafka
>Affects Versions: 3.21.0
> Environment: * Rocky Linux 8.7
>  * Open JDK 11.0.8
>  * Camel 3.21.0
>  * Spring Boot 2.7.14
>  * Strimzi Kafka 0.28.0/3.0.0
>Reporter: Mike Barlotta
>Priority: Major
>
> {*}Reproducing (intermittent){*}:
>  * Configure camel kafka consumer with following:
>  ** autoCommitEnable = false
>  ** allowManualCommit = true
>  ** autoOffsetReset = earliest
>  ** maxPollRecords = 1
>  ** breakOnFirstError = true
>  * Produce a series of records to kafka record to both partitions.
>  * Throw an exception that is unhandled
>  * commit the offset in the onException block
> *Expected behavior:*
>  * Application should consume the record 1 more time, then move on to the 
> next offset in the partition
> *Actual behavior:*
>  * Application will often work. Occasionally will use the offset from another 
> partition and assign that to the partition where the record failed. This can 
> then result in the consumer replaying messages instead of moving forward.
> I put together a sample that can recreate the error. However, given its 
> intermittent nature it may not fail on each run. I have included the logs 
> from 3 different runs on my laptop from this test. Two of them show the error 
> occurring. One of the them has a successful run. I have also provided more 
> details in the README. 
>  * [https://github.com/CodeSmell/CamelKafkaOffset]
> This seems related to other issues with how Camel processes the 
> _breakOnFirstError_ attribute. 
>  * CAMEL-17925
>  * CAMEL-14935
>  * CAMEL-18350
>  * CAMEL-19894



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Comment Edited] (CAMEL-20044) camel-kafka - On rejoining consumer group Camel can set offset incorrectly causing messages to be replayed

2023-11-01 Thread Mike Barlotta (Jira)


[ 
https://issues.apache.org/jira/browse/CAMEL-20044?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17780420#comment-17780420
 ] 

Mike Barlotta edited comment on CAMEL-20044 at 11/1/23 1:13 PM:


Downgraded to Camel 3.14.5, a version prior to the fix for CAMEL-18350.
The processing of the payloads looks like this:
 * Consumed NORETRY-ERROR 2 times
 * Consumed 1 1 times
 * Consumed 2 1 times
 * ...
 * Consumed 11 1 times

What is interesting here is that the revoke from the consumer group is not 
logged, nor is there a seek. 

This behavior is different than the way these messages are being processed in 
3.21
 * Consumed NORETRY-ERROR 4 times
 * Consumed 1 1 times
 * Consumed 2 1 times
 * ...
 * Consumed 11 1 times

A scan of the various releases and related issues suggests that behavior was 
changed based on this issue
 * CAMEL-17925

The [3.14 documentation|https://camel.apache.org/components/3.14.x/kafka-component.html] 
has this for `breakOnFirstError`:
_This options controls what happens when a consumer is processing an exchange 
and it fails. If the option is false then the consumer continues to the next 
message and processes it. If the option is true then the consumer breaks out, 
and will seek back to offset of the message that caused a failure, and then 
re-attempt to process this message. However this can lead to endless processing 
of the same message if its bound to fail every time, eg a poison message. 
Therefore its recommended to deal with that for example by using Camel’s error 
handler._

I could not find older documentation to see how the documentation described the 
behavior prior to that release.
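
For reference, a consumer endpoint matching the settings listed in the reproduction would look roughly like the sketch below; the broker address and topic name are placeholders, not the actual test project.
{code:java}
import org.apache.camel.builder.RouteBuilder;

// Rough sketch of a route using the options from the reproduction steps;
// broker address and topic name are placeholders.
public class BreakOnFirstErrorRoute extends RouteBuilder {
    @Override
    public void configure() {
        from("kafka:foobarTopic?brokers=localhost:9092"
                + "&autoCommitEnable=false"
                + "&allowManualCommit=true"
                + "&autoOffsetReset=earliest"
                + "&maxPollRecords=1"
                + "&breakOnFirstError=true")
            .process(exchange -> {
                // business processing; an unhandled exception here triggers the
                // breakOnFirstError seek-and-retry behaviour discussed above
            });
    }
}
{code}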

One other observation: running the provided test with a RETRY error instead of 
a NORETRY error, using 3.14.5, does result in that payload NOT being retried.

Wondering if _breakOnFirstError_ (when true) should break out and then seek 
back to the last committed offset (instead of the offset on the 
{_}lastResult{_}). In the test app provided, that would mean that a NORETRY 
would not be processed again (b/c we committed the offset), while a RETRY 
would be processed repeatedly (b/c we had not committed the offset). It likely 
means that a batch would be replayed in its entirety, leaving the Camel app to 
handle messages that were already processed successfully.
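
As a point of reference for that discussion, my reading of the reproduction app's error handling is roughly the pattern below (a hedged sketch, not the actual CamelKafkaOffset code; exact package names for the manual-commit classes may vary by Camel version):
{code:java}
import org.apache.camel.builder.RouteBuilder;
import org.apache.camel.component.kafka.KafkaConstants;
import org.apache.camel.component.kafka.consumer.KafkaManualCommit;

// Hedged sketch of "commit the offset in the onException block" from the
// reproduction steps; an approximation, not the actual project code.
public class ManualCommitOnErrorRoute extends RouteBuilder {
    @Override
    public void configure() {
        onException(Exception.class)
            .handled(false) // the exception stays unhandled, so breakOnFirstError still kicks in
            .process(exchange -> {
                KafkaManualCommit manual = exchange.getIn()
                        .getHeader(KafkaConstants.MANUAL_COMMIT, KafkaManualCommit.class);
                if (manual != null) {
                    // acknowledge the failed record so a NORETRY error is not redelivered
                    manual.commit();
                }
            });

        from("kafka:foobarTopic?allowManualCommit=true&autoCommitEnable=false&breakOnFirstError=true")
            .process(exchange -> {
                // business processing that may throw
            });
    }
}
{code}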

Any thoughts?

 


was (Author: g1antfan):
Downgraded to Camel 3.14.5, a version prior to the fix of CAMEL-18350 
The processing of the payloads looks like this
 * Consumed NORETRY-ERROR 2 times
 * Consumed 1 1 times
 * Consumed 2 1 times
 * ...
 * Consumed 11 1 times

What is interesting here is that the revoke from the consumer group is not 
logged, nor is there a seek. 

This behavior is different than the way these messages are being processed in 
3.21
 * Consumed NORETRY-ERROR 4 times
 * Consumed 1 1 times
 * Consumed 2 1 times
 * ...
 * Consumed 11 1 times

A scan of the various releases and related issues suggests that behavior was 
changed based on this issue
 * CAMEL-17925

The [3.14 documentation 
|https://camel.apache.org/components/3.14.x/kafka-component.html]has this for 
`breakOnFirstError`
_This options controls what happens when a consumer is processing an exchange 
and it fails. If the option is false then the consumer continues to the next 
message and processes it. If the option is true then the consumer breaks out, 
and will seek back to offset of the message that caused a failure, and then 
re-attempt to process this message. However this can lead to endless processing 
of the same message if its bound to fail every time, eg a poison message. 
Therefore its recommended to deal with that for example by using Camel’s error 
handler._

I could not find older documentation to see how the documentation described the 
behavior prior to that release.

One other observation, running the test provided with a RETRY error instead on 
NONRETRY, using 14.5 does result in that payload NOT being retried.

Wondering if _breakOnFirstError_ (when true) should break out and then seek 
back to the last committed offset (instead of the offset on the 
{_}lastResult{_}). In the test app provided that would mean that a NORETRY 
would not be processed again (b/c we committed the offset). However a RETRY 
would be processed repeatedly (b/c we had not committed the offset). 

Any thoughts?

 

> camel-kafka - On rejoining consumer group Camel can set offset incorrectly 
> causing messages to be replayed
> --
>
> Key: CAMEL-20044
> URL: https://issues.apache.org/jira/browse/CAMEL-20044
> Project: Camel
>  Issue Type: Bug
>  Components: camel-kafka
>Affects Versions: 3.21.0
> Environment: * Rocky Linux 8.7
>  * Open JDK 11.0.8
>  * Camel 3.21.0
>  * Spring Boot 2.7.

[jira] [Comment Edited] (CAMEL-20044) camel-kafka - On rejoining consumer group Camel can set offset incorrectly causing messages to be replayed

2023-10-27 Thread Mike Barlotta (Jira)


[ 
https://issues.apache.org/jira/browse/CAMEL-20044?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17780420#comment-17780420
 ] 

Mike Barlotta edited comment on CAMEL-20044 at 10/27/23 7:01 PM:
-

Downgraded to Camel 3.14.5, a version prior to the fix of CAMEL-18350 
The processing of the payloads looks like this
 * Consumed NORETRY-ERROR 2 times
 * Consumed 1 1 times
 * Consumed 2 1 times
 * ...
 * Consumed 11 1 times

What is interesting here is that the revoke from the consumer group is not 
logged, nor is there a seek. 

This behavior is different than the way these messages are being processed in 
3.21
 * Consumed NORETRY-ERROR 4 times
 * Consumed 1 1 times
 * Consumed 2 1 times
 * ...
 * Consumed 11 1 times

A scan of the various releases and related issues suggests that behavior was 
changed based on this issue
 * CAMEL-17925

The [3.14 documentation 
|https://camel.apache.org/components/3.14.x/kafka-component.html]has this for 
`breakOnFirstError`
_This options controls what happens when a consumer is processing an exchange 
and it fails. If the option is false then the consumer continues to the next 
message and processes it. If the option is true then the consumer breaks out, 
and will seek back to offset of the message that caused a failure, and then 
re-attempt to process this message. However this can lead to endless processing 
of the same message if its bound to fail every time, eg a poison message. 
Therefore its recommended to deal with that for example by using Camel’s error 
handler._

I could not find older documentation to see how the documentation described the 
behavior prior to that release.

One other observation, running the test provided with a RETRY error instead on 
NONRETRY, using 14.5 does result in that payload NOT being retried.

Wondering if _breakOnFirstError_ (when true) should break out and then seek 
back to the last committed offset (instead of the offset on the 
{_}lastResult{_}). In the test app provided that would mean that a NORETRY 
would not be processed again (b/c we committed the offset). However a RETRY 
would be processed repeatedly (b/c we had not committed the offset). 

Any thoughts?

 


was (Author: g1antfan):
Downgraded to Camel 3.14.5, a version prior to the fix of CAMEL-18350 
The processing of the payloads looks like this
 * Consumed NORETRY-ERROR 2 times
 * Consumed 1 1 times
 * Consumed 2 1 times
 * ...
 * Consumed 11 1 times

What is interesting here is that the revoke from the consumer group is not 
logged, nor is there a seek. 

This behavior is different than the way these messages are being processed in 
3.21
 * Consumed NORETRY-ERROR 4 times
 * Consumed 1 1 times
 * Consumed 2 1 times
 * ...
 * Consumed 11 1 times

A scan of the various releases and related issues suggests that behavior was 
changed based on this issue
 * CAMEL-17925

The [3.14 documentation 
|https://camel.apache.org/components/3.14.x/kafka-component.html]has this for 
`breakOnFirstError`
_This options controls what happens when a consumer is processing an exchange 
and it fails. If the option is false then the consumer continues to the next 
message and processes it. If the option is true then the consumer breaks out, 
and will seek back to offset of the message that caused a failure, and then 
re-attempt to process this message. However this can lead to endless processing 
of the same message if its bound to fail every time, eg a poison message. 
Therefore its recommended to deal with that for example by using Camel’s error 
handler._

I could not find older documentation to see how the documentation described the 
behavior prior to that release.

One other observation, running the test provided with a RETRY error instead on 
NONRETRY, using 14.5 does result in that payload NOT being retried.

Wondering if _breakOnFirstError_ (when true) should break out and then seek 
back to the last committed offset. In the test app provided that would mean 
that a NORETRY would not be processed again (b/c we committed the offset). 
However a RETRY would be processed repeatedly (b/c we had not committed the 
offset). 

Any thoughts?

 

> camel-kafka - On rejoining consumer group Camel can set offset incorrectly 
> causing messages to be replayed
> --
>
> Key: CAMEL-20044
> URL: https://issues.apache.org/jira/browse/CAMEL-20044
> Project: Camel
>  Issue Type: Bug
>  Components: camel-kafka
>Affects Versions: 3.21.0
> Environment: * Rocky Linux 8.7
>  * Open JDK 11.0.8
>  * Camel 3.21.0
>  * Spring Boot 2.7.14
>  * Strimzi Kafka 0.28.0/3.0.0
>Reporter: Mike Barlotta
>Priority: Major
>
> {*}Reproducing (intermittent){*}:
>  * Configure camel kafka consumer with following:
>  ** aut

[jira] [Commented] (CAMEL-20044) camel-kafka - On rejoining consumer group Camel can set offset incorrectly causing messages to be replayed

2023-10-27 Thread Mike Barlotta (Jira)


[ 
https://issues.apache.org/jira/browse/CAMEL-20044?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17780420#comment-17780420
 ] 

Mike Barlotta commented on CAMEL-20044:
---

Downgraded to Camel 3.14.5, a version prior to the fix of CAMEL-18350 
The processing of the payloads looks like this
 * Consumed NORETRY-ERROR 2 times
 * Consumed 1 1 times
 * Consumed 2 1 times
 * ...
 * Consumed 11 1 times

What is interesting here is that the revoke from the consumer group is not 
logged, nor is there a seek. 

This behavior is different than the way these messages are being processed in 
3.21
 * Consumed NORETRY-ERROR 4 times
 * Consumed 1 1 times
 * Consumed 2 1 times
 * ...
 * Consumed 11 1 times

A scan of the various releases and related issues suggests that behavior was 
changed based on this issue
 * CAMEL-17925

The [3.14 documentation 
|https://camel.apache.org/components/3.14.x/kafka-component.html]has this for 
`breakOnFirstError`
_This options controls what happens when a consumer is processing an exchange 
and it fails. If the option is false then the consumer continues to the next 
message and processes it. If the option is true then the consumer breaks out, 
and will seek back to offset of the message that caused a failure, and then 
re-attempt to process this message. However this can lead to endless processing 
of the same message if its bound to fail every time, eg a poison message. 
Therefore its recommended to deal with that for example by using Camel’s error 
handler._

I could not find older documentation to see how the documentation described the 
behavior prior to that release.

One other observation: running the provided test with a RETRY error instead of 
a NORETRY error, using 3.14.5, does result in that payload NOT being retried.

Wondering if _breakOnFirstError_ (when true) should break out and then seek 
back to the last committed offset. In the test app provided that would mean 
that a NORETRY would not be processed again (b/c we committed the offset). 
However a RETRY would be processed repeatedly (b/c we had not committed the 
offset). 

Any thoughts?

 

> camel-kafka - On rejoining consumer group Camel can set offset incorrectly 
> causing messages to be replayed
> --
>
> Key: CAMEL-20044
> URL: https://issues.apache.org/jira/browse/CAMEL-20044
> Project: Camel
>  Issue Type: Bug
>  Components: camel-kafka
>Affects Versions: 3.21.0
> Environment: * Rocky Linux 8.7
>  * Open JDK 11.0.8
>  * Camel 3.21.0
>  * Spring Boot 2.7.14
>  * Strimzi Kafka 0.28.0/3.0.0
>Reporter: Mike Barlotta
>Priority: Major
>
> {*}Reproducing (intermittent){*}:
>  * Configure camel kafka consumer with following:
>  ** autoCommitEnable = false
>  ** allowManualCommit = true
>  ** autoOffsetReset = earliest
>  ** maxPollRecords = 1
>  ** breakOnFirstError = true
>  * Produce a series of records to kafka record to both partitions.
>  * Throw an exception that is unhandled
>  * commit the offset in the onException block
> *Expected behavior:*
>  * Application should consume the record 1 more time, then move on to the 
> next offset in the partition
> *Actual behavior:*
>  * Application will often work. Occasionally will use the offset from another 
> partition and assign that to the partition where the record failed. This can 
> then result in the consumer replaying messages instead of moving forward.
> I put together a sample that can recreate the error. However, given its 
> intermittent nature it may not fail on each run. I have included the logs 
> from 3 different runs on my laptop from this test. Two of them show the error 
> occurring. One of the them has a successful run. I have also provided more 
> details in the README. 
>  * [https://github.com/CodeSmell/CamelKafkaOffset]
> This seems related to other issues with how Camel processes the 
> _breakOnFirstError_ attribute. 
>  * CAMEL-14935
>  * CAMEL-18350
>  * CAMEL-19894



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (CAMEL-20044) On rejoining consumer group Camel can set offset incorrectly causing messages to be replayed

2023-10-24 Thread Mike Barlotta (Jira)


 [ 
https://issues.apache.org/jira/browse/CAMEL-20044?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mike Barlotta updated CAMEL-20044:
--
Environment: 
* Rocky Linux 8.7
 * Open JDK 11.0.8
 * Camel 3.21.0
 * Spring Boot 2.7.14
 * Strimzi Kafka 0.28.0/3.0.0

  was:
{*}Reproducing (intermittent){*}:
 * Configure camel kafka consumer with following:
 ** autoCommitEnable = false
 ** allowManualCommit = true
 ** autoOffsetReset = earliest
 ** maxPollRecords = 1
 ** breakOnFirstError = true
 * Produce a series of records to kafka record to both partitions.
 * Throw an exception that is unhandled
 * commit the offset in the onException block

*Expected behavior:*
 * Application should consume the record 1 more time, then move on to the next 
offset in the partition

*Actual behavior:*
 * Application will often work. Occasionally will use the offset from another 
partition and assign that to the partition where the record failed. This can 
then result in the consumer replaying messages instead of moving forward.

I put together a sample that can recreate the error. However, given its 
intermittent nature it may not fail on each run. I have included the logs from 
3 different runs on my laptop from this test. Two of them show the error 
occurring. One of the them has a successful run. I have also provided more 
details in the README. 
 * https://github.com/CodeSmell/CamelKafkaOffset


This seems related to other issues with how Camel processes the 
_breakOnFirstError_ attribute. 
 * CAMEL-14935
 * CAMEL-18350
 * CAMEL-19894


> On rejoining consumer group Camel can set offset incorrectly causing messages 
> to be replayed
> 
>
> Key: CAMEL-20044
> URL: https://issues.apache.org/jira/browse/CAMEL-20044
> Project: Camel
>  Issue Type: Bug
>  Components: camel-kafka
>Affects Versions: 3.21.0
> Environment: * Rocky Linux 8.7
>  * Open JDK 11.0.8
>  * Camel 3.21.0
>  * Spring Boot 2.7.14
>  * Strimzi Kafka 0.28.0/3.0.0
>Reporter: Mike Barlotta
>Priority: Major
>
> {*}Reproducing (intermittent){*}:
>  * Configure camel kafka consumer with following:
>  ** autoCommitEnable = false
>  ** allowManualCommit = true
>  ** autoOffsetReset = earliest
>  ** maxPollRecords = 1
>  ** breakOnFirstError = true
>  * Produce a series of records to kafka record to both partitions.
>  * Throw an exception that is unhandled
>  * commit the offset in the onException block
> *Expected behavior:*
>  * Application should consume the record 1 more time, then move on to the 
> next offset in the partition
> *Actual behavior:*
>  * Application will often work. Occasionally will use the offset from another 
> partition and assign that to the partition where the record failed. This can 
> then result in the consumer replaying messages instead of moving forward.
> I put together a sample that can recreate the error. However, given its 
> intermittent nature it may not fail on each run. I have included the logs 
> from 3 different runs on my laptop from this test. Two of them show the error 
> occurring. One of the them has a successful run. I have also provided more 
> details in the README. 
>  * [https://github.com/CodeSmell/CamelKafkaOffset]
> This seems related to other issues with how Camel processes the 
> _breakOnFirstError_ attribute. 
>  * CAMEL-14935
>  * CAMEL-18350
>  * CAMEL-19894



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (CAMEL-20044) On rejoining consumer group Camel can set offset incorrectly causing messages to be replayed

2023-10-24 Thread Mike Barlotta (Jira)


 [ 
https://issues.apache.org/jira/browse/CAMEL-20044?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mike Barlotta updated CAMEL-20044:
--
Description: 
{*}Reproducing (intermittent){*}:
 * Configure the camel-kafka consumer with the following:
 ** autoCommitEnable = false
 ** allowManualCommit = true
 ** autoOffsetReset = earliest
 ** maxPollRecords = 1
 ** breakOnFirstError = true
 * Produce a series of records to both partitions of the topic.
 * Throw an exception that is unhandled
 * Commit the offset in the onException block

*Expected behavior:*
 * The application should consume the failed record one more time, then move on 
to the next offset in the partition.

*Actual behavior:*
 * The application will often work. Occasionally it will use the offset from 
another partition and assign that to the partition where the record failed. 
This can then result in the consumer replaying messages instead of moving 
forward.

I put together a sample that can recreate the error. However, given its 
intermittent nature it may not fail on each run. I have included the logs from 
3 different runs of this test on my laptop. Two of them show the error 
occurring; one of them shows a successful run. I have also provided more 
details in the README. 
 * [https://github.com/CodeSmell/CamelKafkaOffset]

This seems related to other issues with how Camel processes the 
_breakOnFirstError_ attribute. 
 * CAMEL-14935
 * CAMEL-18350
 * CAMEL-19894

> On rejoining consumer group Camel can set offset incorrectly causing messages 
> to be replayed
> 
>
> Key: CAMEL-20044
> URL: https://issues.apache.org/jira/browse/CAMEL-20044
> Project: Camel
>  Issue Type: Bug
>  Components: camel-kafka
>Affects Versions: 3.21.0
> Environment: {*}Reproducing (intermittent){*}:
>  * Configure camel kafka consumer with following:
>  ** autoCommitEnable = false
>  ** allowManualCommit = true
>  ** autoOffsetReset = earliest
>  ** maxPollRecords = 1
>  ** breakOnFirstError = true
>  * Produce a series of records to both partitions of the topic.
>  * Throw an exception that is unhandled
>  * commit the offset in the onException block
> *Expected behavior:*
>  * Application should consume the record 1 more time, then move on to the 
> next offset in the partition
> *Actual behavior:*
>  * Application will often work. Occasionally will use the offset from another 
> partition and assign that to the partition where the record failed. This can 
> then result in the consumer replaying messages instead of moving forward.
> I put together a sample that can recreate the error. However, given its 
> intermittent nature it may not fail on each run. I have included the logs 
> from 3 different runs on my laptop from this test. Two of them show the error 
> occurring. One of the them has a successful run. I have also provided more 
> details in the README. 
>  * https://github.com/CodeSmell/CamelKafkaOffset
> This seems related to other issues with how Camel processes the 
> _breakOnFirstError_ attribute. 
>  * CAMEL-14935
>  * CAMEL-18350
>  * CAMEL-19894
>Reporter: Mike Barlotta
>Priority: Major
>
> {*}Reproducing (intermittent){*}:
>  * Configure camel kafka consumer with following:
>  ** autoCommitEnable = false
>  ** allowManualCommit = true
>  ** autoOffsetReset = earliest
>  ** maxPollRecords = 1
>  ** breakOnFirstError = true
>  * Produce a series of records to both partitions of the topic.
>  * Throw an exception that is unhandled
>  * commit the offset in the onException block
> *Expected behavior:*
>  * Application should consume the record 1 more time, then move on to the 
> next offset in the partition
> *Actual behavior:*
>  * Application will often work. Occasionally will use the offset from another 
> partition and assign that to the partition where the record failed. This can 
> then result in the consumer replaying messages instead of moving forward.
> I put together a sample that can recreate the error. However, given its 
> intermittent nature it may not fail on each run. I have included the logs 
> from 3 different runs on my laptop from this test. Two of them show the error 
> occurring. One of the them has a successful run. I have also provided more 
> details in the README. 
>  * [https://github.com/CodeSmell/CamelKafkaOffset]
> This seems related to other issues with how Camel processes the 
> _breakOnFirstError_ attribute. 
>  * CAMEL-14935
>  * CAMEL-18350
>  * CAMEL-19894



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (CAMEL-20044) On rejoining consumer group Camel can set offset incorrectly causing messages to be replayed

2023-10-24 Thread Mike Barlotta (Jira)
Mike Barlotta created CAMEL-20044:
-

 Summary: On rejoining consumer group Camel can set offset 
incorrectly causing messages to be replayed
 Key: CAMEL-20044
 URL: https://issues.apache.org/jira/browse/CAMEL-20044
 Project: Camel
  Issue Type: Bug
  Components: camel-kafka
Affects Versions: 3.21.0
 Environment: {*}Reproducing (intermittent){*}:
 * Configure the camel-kafka consumer with the following:
 ** autoCommitEnable = false
 ** allowManualCommit = true
 ** autoOffsetReset = earliest
 ** maxPollRecords = 1
 ** breakOnFirstError = true
 * Produce a series of records to both partitions of the topic.
 * Throw an exception that is unhandled
 * Commit the offset in the onException block

*Expected behavior:*
 * The application should consume the failed record one more time, then move on 
to the next offset in the partition.

*Actual behavior:*
 * The application will often work. Occasionally it will use the offset from 
another partition and assign that to the partition where the record failed. 
This can then result in the consumer replaying messages instead of moving 
forward.

I put together a sample that can recreate the error. However, given its 
intermittent nature it may not fail on each run. I have included the logs from 
3 different runs of this test on my laptop. Two of them show the error 
occurring; one of them shows a successful run. I have also provided more 
details in the README. 
 * https://github.com/CodeSmell/CamelKafkaOffset


This seems related to other issues with how Camel processes the 
_breakOnFirstError_ attribute. 
 * CAMEL-14935
 * CAMEL-18350
 * CAMEL-19894
Reporter: Mike Barlotta






--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (CAMEL-14029) Http consumers - Returning no response should be empty body and status 204

2019-10-16 Thread Mike Barlotta (Jira)


[ 
https://issues.apache.org/jira/browse/CAMEL-14029?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16952763#comment-16952763
 ] 

Mike Barlotta commented on CAMEL-14029:
---

I'll try to add this to the other components this week. 
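
For context, the behavior being added means a consumer route that ends with no 
message body, roughly like the sketch below (endpoint URI is illustrative), 
should now answer with an empty body and HTTP 204 instead of a hardcoded text:
{code:java}
import org.apache.camel.builder.RouteBuilder;

public class NoContentRoute extends RouteBuilder {
    @Override
    public void configure() {
        // the processor clears the body; the HTTP consumer should respond 204
        from("jetty:http://0.0.0.0:8080/ping")
            .process(exchange -> exchange.getIn().setBody(null));
    }
}
{code}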

> Http consumers - Returning no response should be empty body and status 204
> --
>
> Key: CAMEL-14029
> URL: https://issues.apache.org/jira/browse/CAMEL-14029
> Project: Camel
>  Issue Type: Improvement
>Reporter: Claus Ibsen
>Priority: Major
> Fix For: 3.0.0.RC3, 3.0.0
>
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> The http components with consumers like undertow, jetty, netty etc should 
> when they return no data (eg no message body) then return empty body with 
> http status 204 (no content). As today some return a fixed hardcoded text 
> like "No body" or "No content". 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (CAMEL-11231) JSON Api Dataformat

2019-03-08 Thread Mike Barlotta (JIRA)


[ 
https://issues.apache.org/jira/browse/CAMEL-11231?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16788300#comment-16788300
 ] 

Mike Barlotta commented on CAMEL-11231:
---

I'm looking at the existing DataFormat classes under _model/dataformat_.

Examining one of them, it looks like there are numerous places where they are 
plugged in:
 * DataFormatClause
 * MarshalDefinition
 * UnmarshalDefinition
 * DataFormatsDefinition
 * DataFormatTransformerDefinition

I may have missed it, but I also didn't see unit tests for these classes (e.g. 
ASN1DataFormat). How are these typically tested?

Is there a quick start guide on building a DataFormat?
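
For what it's worth, the bare-bones shape of a custom DataFormat (ignoring the 
DSL/model wiring listed above) would be roughly the following; the class name 
and the pass-through conversion are just placeholders, not a real JSON API 
implementation:
{code:java}
import java.io.InputStream;
import java.io.OutputStream;

import org.apache.camel.Exchange;
import org.apache.camel.spi.DataFormat;

public class JsonApiDataFormat implements DataFormat {

    @Override
    public void marshal(Exchange exchange, Object graph, OutputStream stream) throws Exception {
        // POJO -> JSON API document (placeholder: just writes the object as bytes)
        byte[] bytes = exchange.getContext().getTypeConverter()
                .mandatoryConvertTo(byte[].class, exchange, graph);
        stream.write(bytes);
    }

    @Override
    public Object unmarshal(Exchange exchange, InputStream stream) throws Exception {
        // JSON API document -> POJO (placeholder: just returns the raw text)
        return exchange.getContext().getTypeConverter()
                .mandatoryConvertTo(String.class, exchange, stream);
    }

    // lifecycle hooks; required on Camel versions where DataFormat extends Service
    public void start() {
    }

    public void stop() {
    }
}
{code}
It could then be used as {{.marshal(new JsonApiDataFormat())}} until the DSL 
definitions above are in place.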

 

> JSON Api Dataformat
> ---
>
> Key: CAMEL-11231
> URL: https://issues.apache.org/jira/browse/CAMEL-11231
> Project: Camel
>  Issue Type: New Feature
>Reporter: Charles Moulliard
>Priority: Major
> Fix For: 3.0.0
>
>
> Implement a new DataFormat to support to serialize/deserialize JSONApi 
> Objects/Strings as defined within the spec : [http://jsonapi.org/]
> Potential candidate projects to be evaluated :(
>  - [https://github.com/jasminb/jsonapi-converter]
>  - [https://github.com/faogustavo/JSONApi]
>  - [https://github.com/crnk-project/crnk-framework]



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (CAMEL-11231) JSON Api Dataformat

2019-03-07 Thread Mike Barlotta (JIRA)


[ 
https://issues.apache.org/jira/browse/CAMEL-11231?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16787128#comment-16787128
 ] 

Mike Barlotta commented on CAMEL-11231:
---

Small PR (not related but was making life harder) for Eclipse IDE friendly POMs
[https://github.com/apache/camel/pull/2810]

Looking over the two JSON API frameworks...

> JSON Api Dataformat
> ---
>
> Key: CAMEL-11231
> URL: https://issues.apache.org/jira/browse/CAMEL-11231
> Project: Camel
>  Issue Type: New Feature
>Reporter: Charles Moulliard
>Priority: Major
> Fix For: 3.0.0
>
>
> Implement a new DataFormat to support to serialize/deserialize JSONApi 
> Objects/Strings as defined within the spec : [http://jsonapi.org/]
> Potential candidate projects to be evaluated :(
>  - [https://github.com/jasminb/jsonapi-converter]
>  - [https://github.com/faogustavo/JSONApi]
>  - [https://github.com/crnk-project/crnk-framework]



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Comment Edited] (CAMEL-11231) JSON Api Dataformat

2019-03-01 Thread Mike Barlotta (JIRA)


[ 
https://issues.apache.org/jira/browse/CAMEL-11231?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16782166#comment-16782166
 ] 

Mike Barlotta edited comment on CAMEL-11231 at 3/1/19 10:25 PM:


This is CodeSmell (from Gitter). 
 I quickly read through some of the documentation for JSON API.

Assuming the basic goal is something like this
{code:java}
from("direct:start")
    .marshal(jsonApi);
{code}
and
{code:java}
Item item = new Item("12345", "PINK PEEPS", "JUST BORN");
String jsonApiDoc = producer.requestBody("direct:start", item, String.class);
{code}
and jsonApiDoc is something like this
{code:java}
{
  "data": [
    {
      "id": "12345",
      "type": "item",
      "attributes": {
        "description": "PINK PEEPS",
        "supplierName": "JUST BORN"
      }
    }
  ]
}
{code}
Looking over each of the three projects listed, it looks like each of them 
would require the POJO (in the sample above, the class Item) to have 
non-standard annotations: 
 - [https://github.com/jasminb/jsonapi-converter]

{code:java}
import com.github.jasminb.jsonapi.annotations.Type;
import com.github.jasminb.jsonapi.annotations.Id;

@Type("item")
public class Item {
  @Id
  private String id;

...
{code}
or 
 - [https://github.com/crnk-project/crnk-framework]

 
{code:java}
import io.crnk.core.resource.annotations.JsonApiResource;
import io.crnk.core.resource.annotations.JsonApiId;

@JsonApiResource(type = "item")
public class Item {
  @JsonApiId
  private String id;

...{code}
or 
 - [https://github.com/faogustavo/JSONApi]

 
{code:java}
import com.gustavofao.jsonapi.Annotatios.Type;
import com.gustavofao.jsonapi.Models.Resource;

@Type("item")
public class Item extends Resource {
 {code}
Of these I would not recommend 
[faogustavo/JSONApi|https://github.com/faogustavo/JSONApi] since that requires 
extending _Resource_. I would rather not force users of Camel to adopt a 
specific class hierarchy just to use JSON API. I would need to look further at 
the other two libraries. 

Ideally there would be a standardized set of annotations :)


was (Author: g1antfan):
This is CodeSmell (from Gitter) 
Quickly read through some of the documentation for JSON API

Assuming the basic goal is something like this
{code:java}
from("direct:start")
.marshal(jsonApi);
{code}
and
{code:java}
Item item = new Item("12345", "PINK PEEPS", "JUST BORN");
String jsonApiDoc = producer.requestBody("direct:start", item, 
String.class);{code}
and jsonApiDoc is something like this
{code:java}
{
"data": [
{
"id": "12345",
"type": "item",
"attributes": {
"description": "PINK PEEPS",
"supplierName": "JUST BORN"
}
}
]
}
{code}
Looking over each of the three projects listed. 
Looks like each of them would require the POJO (in sample above the class Item) 
to have non-standard annotations 


 - [https://github.com/jasminb/jsonapi-converter]

{code:java}
import com.github.jasminb.jsonapi.annotations.Type;
import com.github.jasminb.jsonapi.annotations.Id;

@Type("item")
public class Item {
  @Id
  private String id;

...
{code}
or 
 - [https://github.com/crnk-project/crnk-framework]

 
{code:java}
import io.crnk.core.resource.annotations.JsonApiResource;
import io.crnk.core.resource.annotations.JsonApiId;

@JsonApiResource(type = "item")
public class Item {
  @JsonApiId
  private String id;

...{code}
or 
 - [https://github.com/faogustavo/JSONApi]

 
{code:java}
import com.gustavofao.jsonapi.Annotatios.Type;
import com.gustavofao.jsonapi.Models.Resource;

@Type("item")
public class Item extends Resource {
{code}

Of these I would not recommend 
[faogustavo/JSONApi|https://github.com/faogustavo/JSONApi] since that requires 
extending _Resource._ Would rather not force users of Camel to have to use a 
specific class hierarchy to use JSON API. I would need to look further at the 
other two libraries. 

Ideally there would be a standardized set of annotations :)

> JSON Api Dataformat
> ---
>
> Key: CAMEL-11231
> URL: https://issues.apache.org/jira/browse/CAMEL-11231
> Project: Camel
>  Issue Type: New Feature
>Reporter: Charles Moulliard
>Priority: Major
> Fix For: 3.0.0
>
>
> Implement a new DataFormat to support to serialize/deserialize JSONApi 
> Objects/Strings as defined within the spec : [http://jsonapi.org/]
> Potential candidate projects to be evaluated :(
>  - [https://github.com/jasminb/jsonapi-converter]
>  - [https://github.com/faogustavo/JSONApi]
>  - [https://github.com/crnk-project/crnk-framework]



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (CAMEL-11231) JSON Api Dataformat

2019-03-01 Thread Mike Barlotta (JIRA)


[ 
https://issues.apache.org/jira/browse/CAMEL-11231?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16782166#comment-16782166
 ] 

Mike Barlotta commented on CAMEL-11231:
---

This is CodeSmell (from Gitter) 
Quickly read through some of the documentation for JSON API

Assuming the basic goal is something like this
{code:java}
from("direct:start")
.marshal(jsonApi);
{code}
and
{code:java}
Item item = new Item("12345", "PINK PEEPS", "JUST BORN");
String jsonApiDoc = producer.requestBody("direct:start", item, 
String.class);{code}
and jsonApiDoc is something like this
{code:java}
{
"data": [
{
"id": "12345",
"type": "item",
"attributes": {
"description": "PINK PEEPS",
"supplierName": "JUST BORN"
}
}
]
}
{code}
Looking over each of the three projects listed. 
Looks like each of them would require the POJO (in sample above the class Item) 
to have non-standard annotations 


 - [https://github.com/jasminb/jsonapi-converter]

{code:java}
import com.github.jasminb.jsonapi.annotations.Type;
import com.github.jasminb.jsonapi.annotations.Id;

@Type("item")
public class Item {
  @Id
  private String id;

...
{code}
or 
 - [https://github.com/crnk-project/crnk-framework]

 
{code:java}
import io.crnk.core.resource.annotations.JsonApiResource;
import io.crnk.core.resource.annotations.JsonApiId;

@JsonApiResource(type = "item")
public class Item {
  @JsonApiId
  private String id;

...{code}
or 
 - [https://github.com/faogustavo/JSONApi]

 
{code:java}
import com.gustavofao.jsonapi.Annotatios.Type;
import com.gustavofao.jsonapi.Models.Resource;

@Type("item")
public class Item extends Resource {
{code}

Of these I would not recommend 
[faogustavo/JSONApi|https://github.com/faogustavo/JSONApi] since that requires 
extending _Resource._ Would rather not force users of Camel to have to use a 
specific class hierarchy to use JSON API. I would need to look further at the 
other two libraries. 

Ideally there would be a standardized set of annotations :)

> JSON Api Dataformat
> ---
>
> Key: CAMEL-11231
> URL: https://issues.apache.org/jira/browse/CAMEL-11231
> Project: Camel
>  Issue Type: New Feature
>Reporter: Charles Moulliard
>Priority: Major
> Fix For: 3.0.0
>
>
> Implement a new DataFormat to support to serialize/deserialize JSONApi 
> Objects/Strings as defined within the spec : [http://jsonapi.org/]
> Potential candidate projects to be evaluated :(
>  - [https://github.com/jasminb/jsonapi-converter]
>  - [https://github.com/faogustavo/JSONApi]
>  - [https://github.com/crnk-project/crnk-framework]



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (CAMEL-10540) GROK Parser

2018-02-01 Thread Mike Barlotta (JIRA)

[ 
https://issues.apache.org/jira/browse/CAMEL-10540?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16349525#comment-16349525
 ] 

Mike Barlotta commented on CAMEL-10540:
---

After looking around, it seems there are two Java Grok libraries:
 * [https://github.com/thekrakken/java-grok]
 * [https://github.com/aicer/grok]

It looks like Apache Metron refers to thekrakken:
 * 
[https://github.com/apache/metron/tree/c4954e8af7d5cab59ec6fdc4d9a0bb07c794afd6/metron-streaming/Metron-MessageParsers]
 * 
[https://github.com/apache/metron/blob/master/metron-platform/metron-parsers/src/main/java/org/apache/metron/parsers/GrokParser.java#L34-L35]

so I think the Camel component should use this one as well.

Thoughts?
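
For reference, parsing with thekrakken java-grok looks roughly like this (a 
minimal sketch; it assumes the current io.krakens:java-grok artifact, whose 
package names differ from older releases):
{code:java}
import java.util.Map;

import io.krakens.grok.api.Grok;
import io.krakens.grok.api.GrokCompiler;
import io.krakens.grok.api.Match;

public class GrokSketch {
    public static void main(String[] args) {
        GrokCompiler compiler = GrokCompiler.newInstance();
        compiler.registerDefaultPatterns();

        // compile a pattern and extract the named captures from a log line
        Grok grok = compiler.compile(
            "%{IP:client} %{WORD:method} %{URIPATHPARAM:request} %{NUMBER:bytes}");
        Match match = grok.match("55.3.244.1 GET /index.html 15824 0.043");
        Map<String, Object> captures = match.capture();

        // {client=55.3.244.1, method=GET, request=/index.html, bytes=15824}
        System.out.println(captures);
    }
}
{code}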

> GROK Parser
> ---
>
> Key: CAMEL-10540
> URL: https://issues.apache.org/jira/browse/CAMEL-10540
> Project: Camel
>  Issue Type: New Feature
>Reporter: Jan Bernhardt
>Priority: Major
> Fix For: Future
>
>
> As discussed on the mailing list [1], it would be great to have a grok filter 
> for camel, to parse text with multiple named regex expressions resulting in a 
> Map containing the named key as well as the parsed value.
> [1] 
> http://camel.465427.n5.nabble.com/Parsing-unstructured-Text-in-Camel-td5790513.html



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (CAMEL-10540) GROK Parser

2018-02-01 Thread Mike Barlotta (JIRA)

[ 
https://issues.apache.org/jira/browse/CAMEL-10540?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16349448#comment-16349448
 ] 

Mike Barlotta commented on CAMEL-10540:
---

Sorry, got sidetracked... will pick it back up.

> GROK Parser
> ---
>
> Key: CAMEL-10540
> URL: https://issues.apache.org/jira/browse/CAMEL-10540
> Project: Camel
>  Issue Type: New Feature
>Reporter: Jan Bernhardt
>Priority: Major
> Fix For: Future
>
>
> As discussed on the mailing list [1], it would be great to have a grok filter 
> for camel, to parse text with multiple named regex expressions resulting in a 
> Map containing the named key as well as the parsed value.
> [1] 
> http://camel.465427.n5.nabble.com/Parsing-unstructured-Text-in-Camel-td5790513.html



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (CAMEL-10719) Add ability to manage ThrottlingExceptionRoutePolicy through JMX

2018-01-21 Thread Mike Barlotta (JIRA)

[ 
https://issues.apache.org/jira/browse/CAMEL-10719?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16333595#comment-16333595
 ] 

Mike Barlotta commented on CAMEL-10719:
---

Answered here: 
[https://github.com/apache/camel/commit/4f65a942465d82acea52a5012c00bec81d1183e6#commitcomment-27002609]

> Add ability to manage ThrottlingExceptionRoutePolicy through JMX
> 
>
> Key: CAMEL-10719
> URL: https://issues.apache.org/jira/browse/CAMEL-10719
> Project: Camel
>  Issue Type: New Feature
>Reporter: Mike Barlotta
>Assignee: Claus Ibsen
>Priority: Major
> Fix For: 2.19.0
>
>
> add management via JMX to ThrottlingExceptionRoutePolicy route policy.
> See how we do it for the existing
> org.apache.camel.api.management.mbean.ManagedThrottlingInflightRoutePolicyMBean



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (CAMEL-10718) Route Policy implements circuit breaker pattern to stop consuming from the endpoint

2018-01-21 Thread Mike Barlotta (JIRA)

[ 
https://issues.apache.org/jira/browse/CAMEL-10718?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16333594#comment-16333594
 ] 

Mike Barlotta commented on CAMEL-10718:
---

Answered here: 
https://github.com/apache/camel/commit/4f65a942465d82acea52a5012c00bec81d1183e6#commitcomment-27002609

> Route Policy implements circuit breaker pattern to stop consuming from the 
> endpoint
> ---
>
> Key: CAMEL-10718
> URL: https://issues.apache.org/jira/browse/CAMEL-10718
> Project: Camel
>  Issue Type: New Feature
>  Components: camel-core
>Reporter: Mike Barlotta
>Assignee: Claus Ibsen
>Priority: Major
> Fix For: 2.19.0
>
>
> Our project recently needed a circuit breaker that stop consuming messages 
> from the from endpoint. I noticed that the Camel circuit breakers consumed 
> from the endpoint even in the open mode and controlled access to the to 
> endpoints on the route.
> Based on a Stack Overflow answer, I created a circuit breaker that will stop 
> consuming from the starting endpoint based on exceptions being thrown. It is 
> using a RoutePolicy and imitates the existing ThrottlingInflightRoutePolicy 
> as well as the CircuitBreakingLoadBalancer.
> This is in the PR 1400
> https://github.com/apache/camel/pull/1400



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (CAMEL-12133) Update Camel documentation for ThrottlingExceptionRoutePolicy

2018-01-19 Thread Mike Barlotta (JIRA)

[ 
https://issues.apache.org/jira/browse/CAMEL-12133?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16332493#comment-16332493
 ] 

Mike Barlotta commented on CAMEL-12133:
---

Thanks [~davsclaus].
Updated the RoutePolicy wiki page with a section for the 
ThrottlingExceptionPolicy.
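
For anyone landing here, route-level usage looks roughly like the sketch below 
(assuming the four-argument constructor of threshold, failure window, half-open 
period and handled exceptions; endpoint URIs and values are illustrative):
{code:java}
import java.util.Arrays;
import java.util.List;

import org.apache.camel.builder.RouteBuilder;
// package is org.apache.camel.impl on Camel 2.x
import org.apache.camel.impl.ThrottlingExceptionRoutePolicy;

public class OrderRoute extends RouteBuilder {
    @Override
    public void configure() {
        // open the circuit (stop consuming from the endpoint) after 5
        // IllegalStateExceptions within 30s; probe half-open again after 60s
        List<Class<?>> handled = Arrays.<Class<?>>asList(IllegalStateException.class);
        ThrottlingExceptionRoutePolicy policy =
                new ThrottlingExceptionRoutePolicy(5, 30000, 60000, handled);

        from("jms:queue:orders")
            .routePolicy(policy)
            .to("bean:orderService");
    }
}
{code}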

> Update Camel documentation for ThrottlingExceptionRoutePolicy 
> --
>
> Key: CAMEL-12133
> URL: https://issues.apache.org/jira/browse/CAMEL-12133
> Project: Camel
>  Issue Type: Task
>  Components: documentation
>Reporter: Mike Barlotta
>Priority: Minor
>
> The `ThrottlingExceptionPolicy` circuit breaker EIP needs some documentation 
> Perhaps on the Route Policy page but open to suggestions
> http://camel.apache.org/routepolicy.html



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (CAMEL-12133) Update Camel documentation for ThrottlingExceptionRoutePolicy

2018-01-18 Thread Mike Barlotta (JIRA)

[ 
https://issues.apache.org/jira/browse/CAMEL-12133?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16330616#comment-16330616
 ] 

Mike Barlotta commented on CAMEL-12133:
---

Created account (codesmell) on ASF Wiki

> Update Camel documentation for ThrottlingExceptionRoutePolicy 
> --
>
> Key: CAMEL-12133
> URL: https://issues.apache.org/jira/browse/CAMEL-12133
> Project: Camel
>  Issue Type: Task
>  Components: documentation
>Reporter: Mike Barlotta
>Priority: Minor
>
> The `ThrottlingExceptionPolicy` circuit breaker EIP needs some documentation 
> Perhaps on the Route Policy page but open to suggestions
> http://camel.apache.org/routepolicy.html



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (CAMEL-12133) Update Camel documentation for ThrottlingExceptionRoutePolicy

2018-01-18 Thread Mike Barlotta (JIRA)

[ 
https://issues.apache.org/jira/browse/CAMEL-12133?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16330617#comment-16330617
 ] 

Mike Barlotta commented on CAMEL-12133:
---

Created account (codesmell) on ASF Wiki

> Update Camel documentation for ThrottlingExceptionRoutePolicy 
> --
>
> Key: CAMEL-12133
> URL: https://issues.apache.org/jira/browse/CAMEL-12133
> Project: Camel
>  Issue Type: Task
>  Components: documentation
>Reporter: Mike Barlotta
>Priority: Minor
>
> The `ThrottlingExceptionPolicy` circuit breaker EIP needs some documentation 
> Perhaps on the Route Policy page but open to suggestions
> http://camel.apache.org/routepolicy.html



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (CAMEL-12133) Update Camel documentation for ThrottlingExceptionRoutePolicy

2018-01-15 Thread Mike Barlotta (JIRA)

[ 
https://issues.apache.org/jira/browse/CAMEL-12133?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16326507#comment-16326507
 ] 

Mike Barlotta commented on CAMEL-12133:
---

submitted the ICLA

> Update Camel documentation for ThrottlingExceptionRoutePolicy 
> --
>
> Key: CAMEL-12133
> URL: https://issues.apache.org/jira/browse/CAMEL-12133
> Project: Camel
>  Issue Type: Task
>  Components: documentation
>Reporter: Mike Barlotta
>Priority: Minor
>
> The `ThrottlingExceptionPolicy` circuit breaker EIP needs some documentation 
> Perhaps on the Route Policy page but open to suggestions
> http://camel.apache.org/routepolicy.html



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (CAMEL-12133) Update Camel documentation for ThrottlingExceptionRoutePolicy

2018-01-10 Thread Mike Barlotta (JIRA)

[ 
https://issues.apache.org/jira/browse/CAMEL-12133?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16320182#comment-16320182
 ] 

Mike Barlotta commented on CAMEL-12133:
---

This was to document the circuit breaker that stops consuming from an endpoint, 
not the one that is deprecated by hystrix. 

What's the best way to get started on writing up docs? 

> Update Camel documentation for ThrottlingExceptionRoutePolicy 
> --
>
> Key: CAMEL-12133
> URL: https://issues.apache.org/jira/browse/CAMEL-12133
> Project: Camel
>  Issue Type: Task
>  Components: documentation
>Reporter: Mike Barlotta
>Priority: Minor
>
> The `ThrottlingExceptionPolicy` circuit breaker EIP needs some documentation 
> Perhaps on the Route Policy page but open to suggestions
> http://camel.apache.org/routepolicy.html



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (CAMEL-12133) Update Camel documentation for ThrottlingExceptionRoutePolicy

2018-01-09 Thread Mike Barlotta (JIRA)
Mike Barlotta created CAMEL-12133:
-

 Summary: Update Camel documentation for 
ThrottlingExceptionRoutePolicy 
 Key: CAMEL-12133
 URL: https://issues.apache.org/jira/browse/CAMEL-12133
 Project: Camel
  Issue Type: Bug
  Components: camel-core
Affects Versions: 2.19.0
Reporter: Mike Barlotta
Priority: Minor


The `ThrottlingExceptionPolicy` circuit breaker EIP needs some documentation. 
Perhaps on the Route Policy page, but open to suggestions:
http://camel.apache.org/routepolicy.html



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (CAMEL-12125) Add keepOpen to the ThrottlingExceptionRoutePolicy circuit breaker

2018-01-04 Thread Mike Barlotta (JIRA)

[ 
https://issues.apache.org/jira/browse/CAMEL-12125?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16312179#comment-16312179
 ] 

Mike Barlotta commented on CAMEL-12125:
---

PR for possible implementation 
https://github.com/apache/camel/pull/2165

> Add keepOpen to the ThrottlingExceptionRoutePolicy circuit breaker
> --
>
> Key: CAMEL-12125
> URL: https://issues.apache.org/jira/browse/CAMEL-12125
> Project: Camel
>  Issue Type: Bug
>  Components: camel-core
>Affects Versions: 2.19.0
>Reporter: Mike Barlotta
>Priority: Minor
>
> a useful addition to the endpoint circuit breaker (see CAMEL-10718) would be 
> the ability to force it into the open state so that it suspends consuming 
> even if there are no exceptions. 
> this would function similar to the Netflix Hystrix forceOpen
> https://github.com/Netflix/Hystrix/wiki/Configuration#circuitBreaker.forceOpen
> Willing to submit a PR



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (CAMEL-12125) Add keepOpen to the ThrottlingExceptionRoutePolicy circuit breaker

2018-01-04 Thread Mike Barlotta (JIRA)
Mike Barlotta created CAMEL-12125:
-

 Summary: Add keepOpen to the ThrottlingExceptionRoutePolicy 
circuit breaker
 Key: CAMEL-12125
 URL: https://issues.apache.org/jira/browse/CAMEL-12125
 Project: Camel
  Issue Type: Bug
  Components: camel-core
Affects Versions: 2.19.0
Reporter: Mike Barlotta
Priority: Minor


A useful addition to the endpoint circuit breaker (see CAMEL-10718) would be 
the ability to force it into the open state so that it suspends consuming even 
if there are no exceptions. 

This would function similarly to the Netflix Hystrix forceOpen:

https://github.com/Netflix/Hystrix/wiki/Configuration#circuitBreaker.forceOpen

Willing to submit a PR.
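
Usage of the proposed option might look something like this (hypothetical 
sketch: the setter name follows the issue title, everything else is 
illustrative):
{code:java}
import java.util.Arrays;
import java.util.List;

import org.apache.camel.builder.RouteBuilder;
import org.apache.camel.impl.ThrottlingExceptionRoutePolicy;

public class KeepOpenRoute extends RouteBuilder {
    @Override
    public void configure() {
        List<Class<?>> handled = Arrays.<Class<?>>asList(Exception.class);
        ThrottlingExceptionRoutePolicy policy =
                new ThrottlingExceptionRoutePolicy(5, 30000, 60000, handled);
        // proposed: keep the circuit open so the route suspends consuming
        // even when no exceptions have been thrown
        policy.setKeepOpen(true);

        from("jms:queue:orders")
            .routePolicy(policy)
            .to("bean:orderService");
    }
}
{code}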



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (CAMEL-10540) GROK Parser

2017-08-15 Thread Mike Barlotta (JIRA)

[ 
https://issues.apache.org/jira/browse/CAMEL-10540?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16127856#comment-16127856
 ] 

Mike Barlotta commented on CAMEL-10540:
---

Not sure if anyone has taken a crack at this component, but I would be willing 
to look into it.

But I wanted to also clarify some things:
* I assume this would be a new component under camel/components in GitHub
* the solution should seek to leverage Elasticsearch Grok capabilities 

At first glance it seems this is a candidate for the pluggable Camel Data 
Formats. 

{code:java}
from("direct:in")
  .marshal()
  .grok("%{IP:client} %{WORD:method} %{URIPATHPARAM:request} %{NUMBER:bytes}")
  .to("mock:result");
{code}

Is that what is intended?

I assume (similar to the Grok filter plugin on Elastic Search) that the 
contents of Exchange.getIn().getBody() would be evaluated such that 

{noformat}
55.3.244.1 GET /index.html 15824 0.043
{noformat}

passing through the Grok Filter
{noformat}
%{IP:client} %{WORD:method} %{URIPATHPARAM:request} %{NUMBER:bytes} 
{noformat}

would yield a Map in the Exchange body as follows

||Key||Value||
|client|55.3.244.1|
|method|GET|
|request|/index.html|
|bytes|15824|

Just let me know

> GROK Parser
> ---
>
> Key: CAMEL-10540
> URL: https://issues.apache.org/jira/browse/CAMEL-10540
> Project: Camel
>  Issue Type: New Feature
>Reporter: Jan Bernhardt
>
> As discussed on the mailing list [1], it would be great to have a grok filter 
> for camel, to parse text with multiple named regex expressions resulting in a 
> Map containing the named key as well as the parsed value.
> [1] 
> http://camel.465427.n5.nabble.com/Parsing-unstructured-Text-in-Camel-td5790513.html



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Comment Edited] (CAMEL-10719) Add ability to manage ThrottlingExceptionRoutePolicy through JMX

2017-01-17 Thread Mike Barlotta (JIRA)

[ 
https://issues.apache.org/jira/browse/CAMEL-10719?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15826742#comment-15826742
 ] 

Mike Barlotta edited comment on CAMEL-10719 at 1/17/17 10:34 PM:
-

Had a question re: logging and changing the logging levels via JMX management. 
The ThrottlingInflightRoutePolicy and corresponding management classes allow 
the logging level to be changed. 
This is affecting the CamelLogger on ThrottlingInflightRoutePolicy. However, 
the ThrottlingInflightRoutePolicy is logging using a logger defined in 
RoutePolicySupport. Also looking around the code it looks like other managed 
classes do not allow changing of the log level. 

Was going to move forward and not add the ability to change the log level but 
wanted to check first.


was (Author: mbarlotta):
Had a question re: logging and changing the logging levels via JMX management. 
The ThrottlingInflightRoutePolicy and corresponding management classes allow 
the logging level to be changed. 
This is affecting the CamelLogger on ThrottlingInflightRoutePolicy. However, 
the RoutePolicy is logging using a logger defined in RoutePolicySupport. Also 
looking around the code it looks like other managed classes do not allow 
changing of the log level. 

Was going to move forward and not add the ability to change the log level but 
wanted to check first.

> Add ability to manage ThrottlingExceptionRoutePolicy through JMX
> 
>
> Key: CAMEL-10719
> URL: https://issues.apache.org/jira/browse/CAMEL-10719
> Project: Camel
>  Issue Type: New Feature
>Reporter: Mike Barlotta
>
> add management via JMX to ThrottlingExceptionRoutePolicy route policy.
> See how we do it for the existing
> org.apache.camel.api.management.mbean.ManagedThrottlingInflightRoutePolicyMBean



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Issue Comment Deleted] (CAMEL-10719) Add ability to manage ThrottlingExceptionRoutePolicy through JMX

2017-01-17 Thread Mike Barlotta (JIRA)

 [ 
https://issues.apache.org/jira/browse/CAMEL-10719?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mike Barlotta updated CAMEL-10719:
--
Comment: was deleted

(was: Current PR 
https://github.com/apache/camel/pull/1404)

> Add ability to manage ThrottlingExceptionRoutePolicy through JMX
> 
>
> Key: CAMEL-10719
> URL: https://issues.apache.org/jira/browse/CAMEL-10719
> Project: Camel
>  Issue Type: New Feature
>Reporter: Mike Barlotta
>
> add management via JMX to ThrottlingExceptionRoutePolicy route policy.
> See how we do it for the existing
> org.apache.camel.api.management.mbean.ManagedThrottlingInflightRoutePolicyMBean



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CAMEL-10719) Add ability to manage ThrottlingExceptionRoutePolicy through JMX

2017-01-17 Thread Mike Barlotta (JIRA)

[ 
https://issues.apache.org/jira/browse/CAMEL-10719?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15826852#comment-15826852
 ] 

Mike Barlotta commented on CAMEL-10719:
---

Current PR 
https://github.com/apache/camel/pull/1404

> Add ability to manage ThrottlingExceptionRoutePolicy through JMX
> 
>
> Key: CAMEL-10719
> URL: https://issues.apache.org/jira/browse/CAMEL-10719
> Project: Camel
>  Issue Type: New Feature
>Reporter: Mike Barlotta
>
> add management via JMX to ThrottlingExceptionRoutePolicy route policy.
> See how we do it for the existing
> org.apache.camel.api.management.mbean.ManagedThrottlingInflightRoutePolicyMBean



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CAMEL-10719) Add ability to manage ThrottlingExceptionRoutePolicy through JMX

2017-01-17 Thread Mike Barlotta (JIRA)

[ 
https://issues.apache.org/jira/browse/CAMEL-10719?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15826758#comment-15826758
 ] 

Mike Barlotta commented on CAMEL-10719:
---

Another question:
Do we want to be able to force close or force open the circuit from JMX?
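
If so, the managed interface could expose something along these lines (a 
hypothetical sketch using Camel's management annotations, modeled on 
ManagedThrottlingInflightRoutePolicyMBean; names are not final):
{code:java}
import org.apache.camel.api.management.ManagedAttribute;
import org.apache.camel.api.management.ManagedOperation;

public interface ManagedThrottlingExceptionRoutePolicyMBean {

    @ManagedAttribute(description = "Number of failures before opening the circuit")
    Integer getFailureThreshold();

    @ManagedAttribute(description = "How long to wait before half-opening the circuit (millis)")
    Long getHalfOpenAfter();

    @ManagedAttribute(description = "Current state of the circuit breaker")
    String getStateAsString();

    @ManagedOperation(description = "Force the circuit open and stop consuming from the endpoint")
    void forceOpenCircuit();

    @ManagedOperation(description = "Force the circuit closed and resume consuming from the endpoint")
    void forceCloseCircuit();
}
{code}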

> Add ability to manage ThrottlingExceptionRoutePolicy through JMX
> 
>
> Key: CAMEL-10719
> URL: https://issues.apache.org/jira/browse/CAMEL-10719
> Project: Camel
>  Issue Type: New Feature
>Reporter: Mike Barlotta
>
> add management via JMX to ThrottlingExceptionRoutePolicy route policy.
> See how we do it for the existing
> org.apache.camel.api.management.mbean.ManagedThrottlingInflightRoutePolicyMBean



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CAMEL-10719) Add ability to manage ThrottlingExceptionRoutePolicy through JMX

2017-01-17 Thread Mike Barlotta (JIRA)

[ 
https://issues.apache.org/jira/browse/CAMEL-10719?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15826742#comment-15826742
 ] 

Mike Barlotta commented on CAMEL-10719:
---

Had a question re: logging and changing the logging levels via JMX management. 
The ThrottlingInflightRoutePolicy and corresponding management classes allow 
the logging level to be changed. 
This is affecting the CamelLogger on ThrottlingInflightRoutePolicy. However, 
the RoutePolicy is logging using a logger defined in RoutePolicySupport. Also 
looking around the code it looks like other managed classes do not allow 
changing of the log level. 

Was going to move forward and not add the ability to change the log level but 
wanted to check first.

> Add ability to manage ThrottlingExceptionRoutePolicy through JMX
> 
>
> Key: CAMEL-10719
> URL: https://issues.apache.org/jira/browse/CAMEL-10719
> Project: Camel
>  Issue Type: New Feature
>Reporter: Mike Barlotta
>
> add management via JMX to ThrottlingExceptionRoutePolicy route policy.
> See how we do it for the existing
> org.apache.camel.api.management.mbean.ManagedThrottlingInflightRoutePolicyMBean



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CAMEL-10718) Route Policy implements circuit breaker pattern to stop consuming from the endpoint

2017-01-17 Thread Mike Barlotta (JIRA)

 [ 
https://issues.apache.org/jira/browse/CAMEL-10718?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mike Barlotta updated CAMEL-10718:
--
Description: 
Our project recently needed a circuit breaker that stops consuming messages 
from the route's "from" endpoint. I noticed that the Camel circuit breakers 
consume from the endpoint even in the open state and only control access to 
the "to" endpoints on the route.

Based on a Stack Overflow answer, I created a circuit breaker that will stop 
consuming from the starting endpoint based on exceptions being thrown. It 
uses a RoutePolicy and imitates the existing ThrottlingInflightRoutePolicy as 
well as the CircuitBreakingLoadBalancer.

This is in PR 1400:
https://github.com/apache/camel/pull/1400

  was:
Our project recently needed a circuit breaker that stop consuming messages from 
the from endpoint. I noticed that the Camel circuit breakers consumed from the 
endpoint even in the open mode and controlled access to the to endpoints on the 
route.

Based on this Stack Overflow answer, I created a circuit breaker that will stop 
consuming from the starting endpoint based on exceptions being thrown. It is 
using a RoutePolicy and imitates the existing ThrottlingInflightRoutePolicy as 
well as the CircuitBreakingLoadBalancer.

This is in the PR 1400
https://github.com/apache/camel/pull/1400


> Route Policy implements circuit breaker pattern to stop consuming from the 
> endpoint
> ---
>
> Key: CAMEL-10718
> URL: https://issues.apache.org/jira/browse/CAMEL-10718
> Project: Camel
>  Issue Type: New Feature
>  Components: camel-core
>Reporter: Mike Barlotta
>
> Our project recently needed a circuit breaker that stop consuming messages 
> from the from endpoint. I noticed that the Camel circuit breakers consumed 
> from the endpoint even in the open mode and controlled access to the to 
> endpoints on the route.
> Based on a Stack Overflow answer, I created a circuit breaker that will stop 
> consuming from the starting endpoint based on exceptions being thrown. It is 
> using a RoutePolicy and imitates the existing ThrottlingInflightRoutePolicy 
> as well as the CircuitBreakingLoadBalancer.
> This is in the PR 1400
> https://github.com/apache/camel/pull/1400



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (CAMEL-10719) Add ability to manage ThrottlingExceptionRoutePolicy through JMX

2017-01-17 Thread Mike Barlotta (JIRA)
Mike Barlotta created CAMEL-10719:
-

 Summary: Add ability to manage ThrottlingExceptionRoutePolicy 
through JMX
 Key: CAMEL-10719
 URL: https://issues.apache.org/jira/browse/CAMEL-10719
 Project: Camel
  Issue Type: New Feature
Reporter: Mike Barlotta


add management via JMX to ThrottlingExceptionRoutePolicy route policy.

See how we do it for the existing
org.apache.camel.api.management.mbean.ManagedThrottlingInflightRoutePolicyMBean




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (CAMEL-10718) Route Policy implements circuit breaker pattern to stop consuming from the endpoint

2017-01-17 Thread Mike Barlotta (JIRA)
Mike Barlotta created CAMEL-10718:
-

 Summary: Route Policy implements circuit breaker pattern to stop 
consuming from the endpoint
 Key: CAMEL-10718
 URL: https://issues.apache.org/jira/browse/CAMEL-10718
 Project: Camel
  Issue Type: New Feature
  Components: camel-core
Reporter: Mike Barlotta


Our project recently needed a circuit breaker that stops consuming messages 
from the route's "from" endpoint. I noticed that the Camel circuit breakers 
consume from the endpoint even in the open state and only control access to 
the "to" endpoints on the route.

Based on this Stack Overflow answer, I created a circuit breaker that will stop 
consuming from the starting endpoint based on exceptions being thrown. It 
uses a RoutePolicy and imitates the existing ThrottlingInflightRoutePolicy as 
well as the CircuitBreakingLoadBalancer.

This is in PR 1400:
https://github.com/apache/camel/pull/1400



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)