Unable to launch KRaft

2023-07-28 Thread adrien ruffie
Dear all, I would like to try KRaft but I didn't manage to finish the installation below:

Re: Kafka delaying message

2019-05-23 Thread Adrien Ruffie
sing until it has caught up to your desired delay. This is a simplified scenario that may or may not map to your production use case, though. — Peter

Kafka delaying message

2019-05-22 Thread Adrien Ruffie
Hello all, I have a specific need and I don't know if a generic solution exists ... maybe you can enlighten me. I need to delay each sent message by about 15 mins. Example: Message with offset 1 created at 2:41PM by the producer and received by the consumer at 2:56PM; Message with offset 2 created at
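The usual answer to this thread's question is to pause the consumer until each record is old enough. A minimal sketch of that idea, assuming a poll loop (`fetch_records` and `process` are hypothetical stand-ins, not a real client API); since records within a partition arrive in order, sleeping on the oldest record delays all that follow:

```python
import time

DELAY_SECONDS = 15 * 60  # desired per-message delay (15 minutes)

def seconds_until_ready(record_ts, delay=DELAY_SECONDS, now=None):
    """Return how long to sleep before a record may be processed.

    record_ts and now are UNIX timestamps in seconds; a non-positive
    result means the record is already old enough to process.
    """
    if now is None:
        now = time.time()
    return (record_ts + delay) - now

# In a real consumer loop (hypothetical stand-ins):
#   for record in fetch_records():
#       wait = seconds_until_ready(record.timestamp)
#       if wait > 0:
#           time.sleep(wait)  # later records are delayed at least as much
#       process(record)
```

Note that sleeping past `max.poll.interval.ms` would trigger a rebalance in a real consumer, so long delays need `pause()`/`resume()` rather than a plain sleep.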

Fwd: leader none, with only one replica and no ISR

2019-03-30 Thread Adrien Ruffie
Hi all, I would like to know whether nobody is able to answer the several questions I have asked over the past few months, or whether I was just banished from the mailing list ... thanks a lot & best regards, Adrien -- Forwarded message - From: Adrien Ruffie Date: Thu, Mar 28, 2019 at 1

leader none, with only one replica and no ISR

2019-03-28 Thread Adrien Ruffie
Hello all, since yesterday several of my topics have the following description: ./kafka-topics.sh --zookeeper ctl1.edeal.online:2181 --describe | grep -P "none" !2032 Topic: edeal_cell_dev Partition: 0 Leader: none Replicas: 5 Isr: Topic:

Fwd: resetting dirty offset

2019-03-10 Thread adrien ruffie
Anyone have an idea? Very strange, I thought you'd already have come across this case... From: adrien ruffie Sent: Friday, March 8, 2019 10:49 To: users@kafka.apache.org Subject: resetting dirty offset Hello all, This morning I got the following error: [2019-03

resetting dirty offset

2019-03-08 Thread adrien ruffie
Hello all, This morning I got the following error: [2019-03-08 09:43:03,066] WARN Resetting first dirty offset to log start offset 1 since the checkpointed offset 0 is invalid. (kafka.log.LogCleanerManager$) If I understand correctly, the log cleaner tries to

UNKNOWN_TOPIC_OR_PARTITION problem

2019-03-07 Thread adrien ruffie
Hello all, I got the following exception in my Kafka consumer: 2019-03-06 10:53:34,416 WARN [LogContext.java:246] [Consumer clientId=ContractComputerConsumer_prod-8, groupId=consumer_2] Error while fetching metadata with correlation id 22904 : {topic1_prod=UNKNOWN_TOPIC_OR_PARTITION} I

RE: Kafka Connect: Increase Consumer Consumption

2018-07-20 Thread adrien ruffie
its consumption before I increase the number of consumers. Yeah I know it's weird, I thought I would see some changes in data consumption after tweaking these parameters. On 7/18/18, 11:48 PM, "adrien ruffie" wrote: Strange enough ... I don't really understand why. D

RE: Kafka Connect: Increase Consumer Consumption

2018-07-19 Thread adrien ruffie
, Vishnu On 7/18/18, 1:48 PM, "adrien ruffie" wrote: Hi Vishnu, have you checked your fetch.max.wait.ms value? It may not wait long enough to recover your 5000 records ... maybe just enough time to recover only 1150 records. fetch.m

RE: Kafka Connect: Increase Consumer Consumption

2018-07-18 Thread adrien ruffie
Hi Vishnu, have you checked your fetch.max.wait.ms value? It may not wait long enough to recover your 5000 records ... maybe just enough time to recover only 1150 records. fetch.max.wait.ms By setting fetch.min.bytes, you tell Kafka to wait until it has enough data to
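The interplay the reply describes can be modelled as "whichever limit is hit first": the broker answers a fetch as soon as `fetch.min.bytes` have accumulated, or once `fetch.max.wait.ms` elapses. A toy sketch under that simplifying assumption (the throughput figures in the test are made-up, not broker measurements):

```python
def fetch_returns_after(bytes_per_ms, fetch_min_bytes, fetch_max_wait_ms):
    """Toy model of the broker-side fetch decision: the response is
    sent as soon as fetch.min.bytes have accumulated, or after
    fetch.max.wait.ms, whichever comes first. Returns milliseconds."""
    if bytes_per_ms <= 0:
        return fetch_max_wait_ms  # no data arriving: the wait times out
    ms_to_min_bytes = fetch_min_bytes / bytes_per_ms
    return min(ms_to_min_bytes, fetch_max_wait_ms)
```

So when incoming data is slow, a short `fetch.max.wait.ms` caps how long the broker accumulates records, which can explain batches smaller than expected.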

zookeeper as systemctl

2018-07-15 Thread Adrien Ruffie
Hello Kafka users, without any response from Zookeeper's users, I am relying on you... I have 2 questions for you. What is the real difference between these 2 following commands? (I can't find any documentation) zkServer.sh start-foreground and zkServer.sh start My second

RE: Monitoring Kafka

2018-07-10 Thread adrien ruffie
Kafka Hi Adrien, Take a look at this post that I wrote. Maybe it can guide you. Enjoy, https://medium.com/@danielmrosa/monitoring-kafka-b97d2d5a5434 2018-07-09 12:09 GMT-03:00 adrien ruffie : > Great! Thanks a lot Daniel! > > I will try it. > > Best Reg

RE: Monitoring Kafka

2018-07-09 Thread adrien ruffie
-demo Alternatively, Confluent provide a very powerful Control Center for monitoring and managing Kafka (disclaimer, I work for Confluent!) Best Regards Dan On Mon, Jul 9, 2018 at 2:12 AM, Adrien Ruffie wrote: > Hello Kafka Users, > > I want to monitor our Kafka cluster correctly. I

Monitoring Kafka

2018-07-09 Thread Adrien Ruffie
Hello Kafka Users, I want to monitor our Kafka cluster correctly. I have read several articles on "how to monitor Kafka" but I have the impression that every company does it a bit differently (rearranging things in its own way). What are the real things I need to monitor, verify and set notifications

Offset reprocess loop over

2018-07-04 Thread Adrien Ruffie
Hello all, we have 3 brokers in our infrastructure. Sometimes we trigger a "reprocess all" of a certain flow, but we are facing a problem ... After relaunching reprocessing at the beginning offset, and having arrived at some offset, it loops several times by returning to a previous offset. For example,

Offset reprocess loop

2018-07-03 Thread adrien ruffie
Hello all, we have 3 brokers in our infrastructure. Sometimes we trigger a "reprocess all" of a certain flow, but we are facing a problem ... After relaunching reprocessing at the beginning offset, and having arrived at some offset, it loops several times by returning to a previous offset. For

RE: How do I specify jdbc connection properties in kafka-jdbc-connector

2018-04-25 Thread adrien ruffie
Hi Niels, to use Kafka Connect, you must define your connection configuration file under kafka_2.11-1.0.0/config (2.11-1.0.0 is just an example of a kafka version). In the config directory you can create a connector file like "mysql-connect.properties" and specify the required parameters into

RE: Is KTable cleaned up automatically in a Kafka streams application?

2018-04-19 Thread adrien ruffie
Hi Mihaela, by default a KTable already has log-compacted behavior, therefore you don't need to clean up manually. Best regards, Adrien From: Mihaela Stoycheva Sent: Thursday, April 19, 2018 13:41:22 To: users@kafka.apache.org

RE: Default kafka log.dir /tmp | tmp-file-cleaner process

2018-04-18 Thread adrien ruffie
Hi Marc, I think it depends rather on the "log.dirs" parameter, because that parameter is preferred in the more common case; the "log.dir" parameter is secondary. log.dirs: The directories in which the log data is kept. If not set, the value in log.dir is used. log.dir: The directory in

RE: join 2 topic streams --> to another topic

2018-04-09 Thread adrien ruffie
you read a topic either as stream or table though. -Matthias On 4/9/18 12:16 AM, adrien ruffie wrote: > Hello Matthias, > > thank for your response. I will try to read the blog post today. > > > For the keys, not really, In fact, keys are always the same "symbol"

RE: join 2 topic streams --> to another topic

2018-04-09 Thread adrien ruffie
er -- it depends on the context of your application. Are keys unique? Do you want to get exactly one result or should a single stock join with multiple dividends? Do you want Stock and Dividend to join depending on their timestamps? -Matthias On 4/8/18 1:34 PM, adrien ruffie wrote: > Hello all, >

join 2 topic streams --> to another topic

2018-04-08 Thread adrien ruffie
Hello all, I have 2 topics streamed by KStream and one KStream I want to merge both objects' information (Stock & Dividend) and send it to another topic with for example The key of the 2 topics is the same. I need to use leftJoin,

RE: Kafka topic retention

2018-04-04 Thread adrien ruffie
Hi Tony, the "log.retention.hours" property only controls the topic retention period; it does not trigger the topic's deletion. Your data is correctly deleted, but the only way to remove a topic is to use "./bin/kafka-topics.sh --zookeeper localhost:2181 --delete --topic testTopic" After that

RE: Kafka bootstrap broker disconnected

2018-03-23 Thread adrien ruffie
Hi, <+911727124597>:2181 disconnected means that your node is disconnected from your zookeeper node. Check if your zookeeper is alive, and whether your kafka node can reach zookeeper and vice-versa. But your zookeeper's ip <+911727124597> is very strange ... best regards, Adrien

RE: kafka offset replication factor - Any suggestion will be appreciated.

2018-03-22 Thread adrien ruffie
c test Topic:test PartitionCount:52 ReplicationFactor:1 Configs: -Original Message- From: adrien ruffie [mailto:adriennolar...@hotmail.fr] Sent: Thursday, March 22, 2018 2:28 AM To: users@kafka.apache.org Subject: RE: kafka offset replication factor - Any suggestion will be app

RE: kafka offset replication factor - Any suggestion will be appreciated.

2018-03-22 Thread adrien ruffie
Hi, have you checked whether you had already created an old topic with this replication factor '3'? You can check it by listing with the command kafka-topics.sh --zookeeper ip:port --describe/list Best regards, Adrien From: Anand, Uttam

RE: Suggestion over architecture

2018-03-10 Thread adrien ruffie
ture Yes, but I misread his reply and thought that he meant the "kafka rest proxy". But now I see that we say the same thing - sorry for the confusion. The normal way to do the authentication and authorization would be in the rest/grpc endpoint before sending it to kafka. 2018-03-10 19:

RE: Suggestion over architecture

2018-03-10 Thread adrien ruffie
meant the "kafka rest proxy". But now I see that we say the same thing - sorry for the confusion. The normal way to do the authentication and authorization would be in the rest/grpc endpoint before sending it to kafka. 2018-03-10 19:39 GMT+01:00 adrien ruffie <adriennolar...@hotmail

RE: Suggestion over architecture

2018-03-10 Thread adrien ruffie
it will in turn feed the Kafka topic. > > You will minimize coupling and be able to scale / upgrade easier. > > On Mar 10, 2018 2:47 AM, "adrien ruffie" <adriennolar...@hotmail.fr> > wrote: > > > Hello all, > > > > > > in my company we plan to se

Suggestion over architecture

2018-03-09 Thread adrien ruffie
Hello all, in my company we plan to set up the following architecture for our clients: an internal kafka cluster in our company, and a webapp (our software solution) deployed on premise for our clients. We are thinking of creating one producer per "webapp" client in order to push into a global topic (in

RE: Delayed processing

2018-03-08 Thread adrien ruffie
arrive in absolute logical order. If I understand your explanation correctly, you are saying that with your setup, Kafka guarantees the processing in order of ingestion of the messages. Correct? Thanks! -wim On Thu, 8 Mar 2018 at 22:58 adrien ruffie <adriennolar...@hotmail.fr> wrote:

RE: Delayed processing

2018-03-08 Thread adrien ruffie
Hello Wim, it does matter (I think), because one of the big and principal features of Kafka is load balancing of messages and guaranteed ordering in a distributed cluster. The order of the messages should be guaranteed, except in several cases: 1] Producer can cause data loss

RE: difference between 2 options

2018-03-05 Thread adrien ruffie
I hope this answers your questions. Best regards, Andras On Thu, Mar 1, 2018 at 2:59 AM, adrien ruffie <adriennolar...@hotmail.fr> wrote: > Sorry Andras, for the delay of my response. > > > Ok I correctly understood the deletion thanks to your explanation. > > > how

RE: Mirror Maker Errors

2018-03-04 Thread adrien ruffie
Hi Oleg, have you configured your consumer/producer with a "no data loss" configuration like below? For the Consumer, set auto.commit.enabled=false in consumer.properties For the Producer 1. max.in.flight.requests.per.connection=1 2. retries=Int.MaxValue 3. acks=-1 4.

Choosing topic/partition formula

2018-03-02 Thread adrien ruffie
Hi all, I have difficulty working out an example of the calculation of the following formula. Based on throughput requirements one can pick a rough number of partitions. 1. Let's call the throughput from producer to a single partition P 2. Throughput from a single partition to a
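The formula this thread refers to is commonly stated as partitions = max(T/P, T/C), where T is the target throughput and P and C are the measured per-partition producer and consumer throughputs. A minimal sketch, with the MB/s figures in the comment being made-up illustration values:

```python
import math

def rough_partition_count(target_mb_s, producer_mb_s_per_partition,
                          consumer_mb_s_per_partition):
    """Rough partition count: max(T/P, T/C), rounded up so that both
    the producer side and the consumer side can sustain the target."""
    return max(math.ceil(target_mb_s / producer_mb_s_per_partition),
               math.ceil(target_mb_s / consumer_mb_s_per_partition))

# e.g. a 100 MB/s target, 10 MB/s per partition on the producer side
# and 20 MB/s per partition on the consumer side -> about 10 partitions
```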

RE: Hardware Guidance

2018-03-01 Thread adrien ruffie
small Kafka cluster on AWS' > m4.xlarge instances in the past with no issues (low number of terabytes > stored in total, low single-digit thousands of messages produced per second > in peak) - I actually think it was oversized for that use case. > > On 1 March 2018 at 17:09, adrien ruffie

Hardware Guidance

2018-03-01 Thread adrien ruffie
Hi all, on slide 5 in the following link: https://fr.slideshare.net/HadoopSummit/apache-kafka-best-practices/1 the "Memory" line mentions "24GB+ (for small) and 64GB+ (for large)" Kafka Brokers, but is it 24 or 64 GB spread over all brokers? Or 24 GB, for example, for each broker?

RE: difference between 2 options

2018-03-01 Thread adrien ruffie
Best regards, Andras On Mon, Feb 26, 2018 at 11:43 PM, adrien ruffie <adriennolar...@hotmail.fr> wrote: > Hi Andras, > > > thank for your response ! > > For log.flush.offset.checkpoint.interval.ms we write out only one > recovery point for all logs ? > > But if I ha

RE: Zookeeper and Kafka JMX metrics

2018-02-28 Thread adrien ruffie
Hi Arunkumar, have you taken a look yet at whether your MBeans are exposed by Zookeeper, using JVisualVM? As in my screenshot in the attachment. Regards Adrien From: Arunkumar Sent: Tuesday, February 27, 2018 23:19:33 To:

RE: difference between 2 options

2018-02-26 Thread adrien ruffie
rectory to avoid recovering the whole log on startup. and every log.flush.start.offset.checkpoint.interval.ms we write out the current log start offset for all logs to a text file in the log directory to avoid exposing data that have been deleted by DeleteRecordsRequest HTH, Andras On Mon, Feb 26, 2018 at

difference between 2 options

2018-02-26 Thread adrien ruffie
Hello all, I have read the linked properties documentation, but I don't really understand the difference between log.flush.offset.checkpoint.interval.ms and log.flush.start.offset.checkpoint.interval.ms. Do you have a use case for each property's utilization? I can't figure out what the

RE: replica.fetch.max.bytes split message or not ?

2018-02-25 Thread adrien ruffie
t make progress. https://cwiki.apache.org/confluence/display/KAFKA/KIP-74%3A+Add+Fetch+Response+Size+Limit+in+Bytes -hans > On Feb 25, 2018, at 8:04 AM, adrien ruffie <adriennolar...@hotmail.fr> wrote: > > Hi Waleed, > > thank for you reply, that I thought too ! > but it was ju

RE: replica.fetch.max.bytes split message or not ?

2018-02-25 Thread adrien ruffie
t : Re: replica.fetch.max.bytes split message or not ? I would say you will get that 5th message in the next request. I don't believe under any circumstance a Kafka broker will send or receive a partial message. On Feb 24, 2018 10:52 AM, "adrien ruffie" <adriennolar...@hotmail.fr> wrote:

replica.fetch.max.bytes split message or not ?

2018-02-24 Thread adrien ruffie
Hello, I have found this description in the documentation (without taking into account the spelling error 'byes of messages'), and a question stays in my mind ... what happens if the size does not fall exactly on a message boundary? For example, if in one partition I have 10 messages of 1024 bytes --> I
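As the replies in this thread note (see KIP-74), fetch byte limits are not aligned to message boundaries: the broker returns whole messages only, and at least one message even when it exceeds the limit, so a consumer can always make progress. A small model of that rule, under the simplifying assumption of uniform message sizes:

```python
def messages_in_fetch(message_size, max_bytes, available):
    """Whole messages returned by one fetch capped at max_bytes, for
    `available` messages of uniform message_size (a simplification).
    Per KIP-74 semantics, at least one message is returned when any
    are available, even if it exceeds the byte limit."""
    if available == 0:
        return 0
    fit = max_bytes // message_size  # whole messages that fit in the cap
    return max(1, min(fit, available))
```

So for the thread's example, a cap that lands mid-way through the 5th message simply means the 5th message arrives in the next fetch, never split.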

RE: difference between key.serializer & default.key.serde

2018-02-21 Thread adrien ruffie
alizer at once. Thus, the used config is `default.key.serde`. Its name uses the prefix `default` as you can overwrite the Serde specified in the config at operator level. In the Streams API, you usually handle more than one data type and thus usually need more than one Serde. -Matthias On 2/21/18 1:

difference between key.serializer & default.key.serde

2018-02-21 Thread adrien ruffie
Hello all, I read the documentation but I don't really understand the difference between default.key.serde and key.serializer + key.deserializer, and default.value.serde and value.serializer + value.deserializer. I don't understand the different usages ... Can you enlighten me a little more

RE: broker properties explanations

2018-02-21 Thread adrien ruffie
at may fill up one disk way before another. When log.flush.interval.ms is null the log.flush.interval.messages property is not used. With default settings, messages are written to disk immediately. Hope this helps. Tom Aley thomas.a...@ibm.com From: adrien ruffie <adriennolar...@h

broker properties explanations

2018-02-20 Thread adrien ruffie
Hello all, after reading several properties in the Kafka documentation, I asked myself some questions ... these 2 following options are available: log.dir The directory in which the log data is kept (supplemental for log.dirs property) string /tmp/kafka-logs high log.dirs

RE: Kafka control messages

2018-02-19 Thread adrien ruffie
Hi Ben, it depends on your consumer group configuration. If all your consumers are in different groups (only one consumer per consumer group), you can use 2) because when all the consumer instances have different consumer groups, the control records will be broadcast to all your
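The group semantics described in that reply (broadcast across groups, load-balanced within a group) can be sketched as follows; round-robin partition ownership here is a deliberate simplification of the real group assignor:

```python
def deliveries(record_partition, groups):
    """Toy model of Kafka delivery: every consumer group receives each
    record once; within a group, the record goes to whichever member
    owns the record's partition. `groups` maps group id -> list of
    member ids; ownership is modelled as round-robin for the sketch."""
    out = {}
    for gid, members in groups.items():
        out[gid] = members[record_partition % len(members)]
    return out
```

With one consumer per group, every consumer sees every record (broadcast); with several consumers in one group, each record reaches only one of them.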

ey.converter: Class io.confluent.connect.avro.AvroConverter could not be found

2018-02-18 Thread adrien ruffie
Hello all, I have one kafka and one schema-registry running at home, but when I launched this command I got the following stack: kafka_2.11-1.0.0/bin$ ./connect-standalone.sh /home/adryen/git/schema-registry/config/connect-avro-standalone.properties ../config/mysql-connect.properties

RE: Kafka connect mysql

2018-02-17 Thread adrien ruffie
put of the curl command, right ? On page 146, commands for Mac were given. What commands did you use to install mysql, etc on Debian ? Thanks On Sat, Feb 17, 2018 at 2:51 PM, adrien ruffie <adriennolar...@hotmail.fr> wrote: > yes the fact that my jdbcSourceConne

RE: Kafka connect mysql

2018-02-17 Thread adrien ruffie
the globbing complaint? Cheers On Sat, Feb 17, 2018 at 1:59 PM, adrien ruffie <adriennolar...@hotmail.fr> wrote: > yes like suggested :-) but nothing, > > Debian 9 for the OS > > > thx Ted > > > From: Ted Yu <yuzhih...@gmail.co

RE: Kafka connect mysql

2018-02-17 Thread adrien ruffie
ou use ? Cheers On Sat, Feb 17, 2018 at 11:04 AM, adrien ruffie <adriennolar...@hotmail.fr> wrote: > Hello all, > > > In Kafka the definitive guide, on page 146 I found the following command: > > > curl http://localhost:8083/connector-pl

Kafka connect mysql

2018-02-17 Thread adrien ruffie
Hello all, In Kafka the definitive guide, on page 146 I found the following command: curl http://localhost:8083/connector-plugins

RE: DumpLogSegment

2018-02-10 Thread adrien ruffie
2018 23:17:42 To: users@kafka.apache.org Subject: Re: DumpLogSegment I think this was due to the type of file you fed to the tool. To use --index-sanity-check, you need to supply a file with the following suffix: val IndexFileSuffix = ".index" On Sat, Feb 10, 2018 at 2:09 PM,

RE: DumpLogSegment

2018-02-10 Thread adrien ruffie
ntln(s"$file passed sanity check.") Do you see the print above ? Which release of Kafka are you using ? Cheers On Sat, Feb 10, 2018 at 1:54 PM, adrien ruffie <adriennolar...@hotmail.fr> wrote: > Hi all, > > In Kafka the definitive guide

DumpLogSegment

2018-02-10 Thread adrien ruffie
Hi all, in Kafka the definitive guide on pages 200-201, two parameters of kafka.tools.DumpLogSegments appear to not really work ... the --index-sanity-check argument and the --print-data-log. Example: ./bin/kafka-run-class.sh kafka.tools.DumpLogSegments --files

RE: mistake in Kafka the definitive guide

2018-02-08 Thread adrien ruffie
A, Partition 0" "Topic B, Partition 0" "Topic __consumer_offsets" Paolo Baronti Dec 01, 2017 ____ From: adrien ruffie <adriennolar...@hotmail.fr> Sent: Thursday, February 8, 2018 22:14:53 To: users@kafka.apache.org Subject: mistake i

RE: mistake in Kafka the definitive guide

2018-02-08 Thread adrien ruffie
istake in Kafka the definitive guide Books can contain errors... Check the reported Errata and add a new one if not reported yet: http://www.oreilly.com/catalog/errata.csp?isbn=0636920044123 -Matthias On 2/8/18 1:14 PM, adrien ruffie wrote: > Hello all, > > > I'm reading Kafka the defi

mistake in Kafka the definitive guide

2018-02-08 Thread adrien ruffie
Hello all, I'm reading Kafka the definitive guide and I suspect that I found an error on page 166, in the figure "Figure 8-5: a failover causes committed offsets without matching records". In the figure we can't see Topic B ... specified in the box "Group C1, Topic B, Partition 0, Offset 6" ...

RE: Strange Topic ...

2018-02-04 Thread adrien ruffie
check the log-cleaner.log file to see if there was some clue. Cheers On Sun, Feb 4, 2018 at 11:14 AM, adrien ruffie <adriennolar...@hotmail.fr> wrote: > Hello all, > > > I'm a beginner in Kafka and this morning when I try some tests and when > running this following cmd

Strange Topic ...

2018-02-04 Thread adrien ruffie
Hello all, I'm a beginner in Kafka and this morning when I tried some tests and ran this following cmd: ./bin/kafka-topics.sh --zookeeper localhost:2181 --describe I see my 3 created topics: "customer-topic", "streams-plaintext-input", and "streams-wordcount-output". But I