Re: Transactional markers are not deleted from log segments when policy is compact

2019-05-31 Thread Pranavi Chandramohan
Thanks Jonathan! That should help.

Re: Transaction support multiple producer instance

2019-05-31 Thread Guozhang Wang
Hello Wenxuan, One KIP that we are considering so far is KIP-447: https://cwiki.apache.org/confluence/display/KAFKA/KIP-447%3A+Producer+scalability+for+exactly+once+semantics It does not directly address your scenarios, but I'm wondering if you can adjust your code to group the producers if they

Re: [VOTE] 2.2.1 RC1

2019-05-31 Thread Mickael Maison
+1 non-binding. We've been running it for a few days in a few clusters, so far no issues. I also ran unit tests and checked signatures. Thanks Vahid for running this release.

Re: Transaction support multiple producer instance

2019-05-31 Thread wenxuan
Hi Sandeep, Thanks for your reply. I have split the large message, but the problem is that the split messages can't be handled in one JVM or physical machine, even by a multi-threaded producer, because of CPU or other resource bottlenecks. So I need to make multiple producer instances on different physical machines

Re: Transaction support multiple producer instance

2019-05-31 Thread Sandeep Nemuri
How about splitting the large message and then producing?
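
Sandeep's suggestion can be sketched as a simple chunking scheme. This is an illustrative sketch, not a Kafka API: the chunk metadata (seq/total) and the helper names are assumptions. In practice each chunk would be sent as its own Kafka record, with the metadata carried in record headers so the consumer side can reassemble.

```python
def split_message(payload: bytes, chunk_size: int) -> list:
    """Split a large payload into ordered chunks with reassembly metadata.

    Hypothetical helper for illustration; not part of the Kafka client API.
    """
    total = (len(payload) + chunk_size - 1) // chunk_size  # ceiling division
    return [
        {"seq": i, "total": total, "data": payload[i * chunk_size:(i + 1) * chunk_size]}
        for i in range(total)
    ]


def reassemble(chunks: list) -> bytes:
    """Rebuild the original payload, tolerating out-of-order delivery."""
    return b"".join(c["data"] for c in sorted(chunks, key=lambda c: c["seq"]))
```

For example, an 11-byte payload with chunk_size=4 produces 3 chunks that reassemble losslessly even if they arrive out of order.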

Re: Transaction support multiple producer instance

2019-05-31 Thread wenxuan
Hi Jonathan, Thanks for your reply. I have a massive volume of messages to send, beyond the limit of one JVM or physical machine, so I need to make more than one producer in the same transaction. Since multiple producers can't share the same transactional id, is there a way to achieve the multiple-producer transaction desc

Re: Transactional markers are not deleted from log segments when policy is compact

2019-05-31 Thread Jonathan Santilli
Hello Pranavi, it sounds like this was solved in the release candidate 2.2.1-RC1 ( https://issues.apache.org/jira/browse/KAFKA-8335) Take a look at it. Hope that helps. Cheers! -- Jonathan

Re: Transaction support multiple producer instance

2019-05-31 Thread Jonathan Santilli
Hello Wenxuan, the reason for the Exception is that, by design, the transactional id must be unique per producer instance. This is from the Java docs: https://kafka.apache.org/20/javadoc/org/apache/kafka/clients/producer/KafkaProducer.html "The purpose of the transactional.id is to enable transaction recov
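
The constraint Jonathan describes can be sketched as follows: each producer instance gets its own transactional.id, derived from a stable instance identifier. The producer_config helper and the naming scheme are assumptions for illustration; only the config keys (bootstrap.servers, enable.idempotence, transactional.id) are real producer settings, and the bootstrap address is a placeholder.

```python
def producer_config(app_name: str, instance_id: int) -> dict:
    """Build a per-instance producer config (hypothetical helper).

    Reusing one transactional.id across concurrently running instances
    causes the newer instance to fence the older one, so each instance
    must derive a distinct id.
    """
    return {
        "bootstrap.servers": "localhost:9092",  # placeholder address
        "enable.idempotence": True,
        "transactional.id": f"{app_name}-tx-{instance_id}",
    }


# Three instances of the same app: three distinct transactional ids.
configs = [producer_config("order-service", i) for i in range(3)]
```

Note that distinct ids mean distinct, independent transactions; the ids alone do not make several producers participate in one shared transaction, which is the gap wenxuan is asking about.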

Re: [VOTE] 2.2.1 RC1

2019-05-31 Thread Andrew Schofield
+1 (non-binding) Built and ran source and sink connectors. Andrew Schofield - IBM

Transactional markers are not deleted from log segments when policy is compact

2019-05-31 Thread Pranavi Chandramohan
Hi all, We use Kafka 1.1.1 (the Scala 2.11 build). We produce and consume transactional messages, and recently we noticed that 2 partitions of the __consumer_offsets topic have very high disk usage (256GB). When we looked at the log segments for these 2 partitions, there were files that were 6 months old. By d
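
As a diagnostic sketch (not Kafka tooling), the aged segments Pranavi describes can be located by file modification time in the partition directory. The directory path, file pattern, and age threshold here are assumptions; actual cleanup is governed by the broker's log-cleaner configuration, not by a script like this.

```python
import time
from pathlib import Path


def stale_segments(partition_dir: str, max_age_days: int = 30) -> list:
    """Return .log segment files older than max_age_days, by mtime.

    Diagnostic only: it reports suspiciously old segments (like the
    6-month-old files in the report above) but does not delete anything.
    """
    cutoff = time.time() - max_age_days * 86400
    return sorted(
        p for p in Path(partition_dir).glob("*.log")
        if p.stat().st_mtime < cutoff
    )
```

Running this against a directory such as /var/kafka-logs/__consumer_offsets-14 (path is an example) would list segments the compactor has apparently never cleaned.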

Re: [VOTE] 2.2.1 RC1

2019-05-31 Thread Viktor Somogyi-Vass
+1 (non-binding) 1. Ran unit tests 2. Ran some basic automatic end-to-end tests over plaintext and SSL too 3. Ran systests sanity checks Viktor On Thu, May 23, 2019 at 6:04 PM Harsha wrote: > +1 (binding) > > 1. Ran unit tests > 2. System tests > 3. 3 node cluster with few manual tests. > > Th