Gaurav,
On Thu, Jun 15, 2023 at 15:27 Gaurav Pande
wrote:
>
> Hi Divij,
>
> Thanks a lot for detailed explanation.
>
> One last thing (a stupid question), if you don't mind:
>
> Presently I have only a single ZooKeeper running in standalone mode. My
> query is: if I stand up 2 new zk n
The official way to fix it is here
https://issues.apache.org/jira/browse/ZOOKEEPER-3056
Basically we have a flag to allow the boot even in that case.
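If it helps, the switch added by ZOOKEEPER-3056 is, if I remember correctly, the `zookeeper.snapshot.trust.empty` Java system property (please double-check the name against the admin guide for your version); something like:

```shell
# Hypothetical placement: pass the flag to the ZooKeeper server JVM,
# e.g. via conf/zookeeper-env.sh, so the server boots even though it
# finds transaction logs but no snapshot file.
SERVER_JVMFLAGS="-Dzookeeper.snapshot.trust.empty=true"
```

Remember to remove the flag again once the server has taken a fresh snapshot.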
I suggest you upgrade to the latest 3.5.8 and not to 3.5.7.
Enrico
On Thu, Sep 3, 2020, 03:51 Rijo Roy wrote:
> Hi Manoj,
> I just faced it ye
Hi,
I have been running Kafka with ZooKeeper 3.5.4 for quite a long time without issues.
I am testing with 3.5.5, but I don't expect any problems.
Hope that helps
Enrico
On Thu, Jun 6, 2019, 01:52 Sebastian Schmitz <
sebastian.schm...@propellerhead.co.nz> wrote:
> Thx Mark for clarification. That's wha
Tests of the 3.4 branch are passing on JDK 11.
Personally, I only have experience with 3.5 in production on JDK 11.
Enrico
On Wed, Jan 30, 2019, 20:12 Mark Anderson
wrote:
> Hi all,
>
> Since Kafka 2.1 supports Java 11, we are considering moving to take
> advantage of performance improvement
Here it is!
https://issues.apache.org/jira/browse/KAFKA-6828
Thank you
Enrico
2018-04-24 20:53 GMT+02:00 Ismael Juma :
> A JIRA ticket would be appreciated. :)
>
> Ismael
>
> On Sat, Apr 21, 2018 at 12:51 AM, Enrico Olivelli
> wrote:
>
> > On Sat, Apr 21, 2018, 06
much availability of disk space.
Should I file a JIRA, or will you?
Enrico
> Ismael
>
> On Fri, Apr 20, 2018 at 12:41 PM, Enrico Olivelli
> wrote:
>
> > On Fri, Apr 20, 2018, 20:24 Ismael Juma wrote:
> >
> > > Hi Enrico,
> > >
> >
without any reason, it took time to
understand the real cause. The broker crashes without much logging, as the
disk is out of space.
Maybe it would be useful to add a notice about this potential problem
when upgrading the JDK.
Hope that helps
Enrico
> Ismael
>
> On Fri,
It is a deliberate change in the JDK code.
Just for reference see this discussion on nio-dev list on OpenJDK
http://mail.openjdk.java.net/pipermail/nio-dev/2018-April/005008.html
see
https://bugs.openjdk.java.net/browse/JDK-8168628
Cheers
Enrico
2018-03-05 14:29 GMT+01:00 Enrico Olivelli :
> The only fact I have found is that with Java 8 Kafka creates "SPARSE"
> files, and with Java 9 this is no longer true
>
> Enrico
>
> 2018-03-05 12:44 GMT+01:00 Enrico Olivelli :
>
>> Hi,
>> This is a very strange case. I have a Kafk
The only fact I have found is that with Java 8 Kafka creates "SPARSE"
files, and with Java 9 this is no longer true
Enrico
2018-03-05 12:44 GMT+01:00 Enrico Olivelli :
> Hi,
> This is a very strange case. I have a Kafka broker (part of a cluster of 3
> brokers) which can
Hi,
This is a very strange case. I have a Kafka broker (part of a cluster of 3
brokers) which cannot start after upgrading Java from Oracle JDK 8 to
Oracle JDK 9.0.4.
There are a lot of .index and .timeindex files taking 10 MB each; they are
for empty partitions.
Running with Java 9 the server seems to rebuild
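The sparse-versus-preallocated distinction is visible from plain NIO. A self-contained sketch (illustration only, not Kafka's code; `StandardOpenOption.SPARSE` merely shows what a 10 MB index-sized file with no real data looks like on filesystems that support holes):

```java
import java.nio.ByteBuffer;
import java.nio.channels.SeekableByteChannel;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

public class SparseDemo {
    public static void main(String[] args) throws Exception {
        Path p = Files.createTempDirectory("sparse-demo").resolve("00000000.index");
        // Ask for a sparse file explicitly: the 10 MB are a logical size only;
        // on filesystems supporting holes, almost no disk blocks are allocated.
        try (SeekableByteChannel ch = Files.newByteChannel(p,
                StandardOpenOption.CREATE_NEW,
                StandardOpenOption.WRITE,
                StandardOpenOption.SPARSE)) {
            ch.position(10 * 1024 * 1024 - 1); // seek to byte 10 MB - 1
            ch.write(ByteBuffer.wrap(new byte[] {0})); // single real byte
        }
        System.out.println(Files.size(p)); // logical length: 10485760
    }
}
```

Comparing `ls -l` (logical size) with `du` (allocated blocks) on such a file shows the difference the thread is about.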
Hi,
I'm using the Java Consumer API (latest version, 0.10.1.0). I store
consumed offsets offline; that is, I use neither Kafka's built-in
offset storage nor consumer groups.
I need to read data from different topics, but from one topic at a time.
I would like to pool my consumers (using something like A
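Since the offsets live outside Kafka in this setup, the bookkeeping can be as small as a file keyed by topic and partition. A toy sketch of such an offline store (plain JDK, all names mine; the consumer side would use `assign()` plus `seek()` to the loaded offset instead of group management):

```java
import java.io.InputStream;
import java.io.IOException;
import java.io.OutputStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Properties;

// Toy offline offset store: persists "topic-partition -> next offset to read"
// in a properties file, to be used with consumer.assign(...) + consumer.seek(...).
public class OffsetStore {
    private final Path file;
    private final Properties props = new Properties();

    public OffsetStore(Path file) throws IOException {
        this.file = file;
        if (Files.exists(file)) {
            try (InputStream in = Files.newInputStream(file)) { props.load(in); }
        }
    }

    public long load(String topic, int partition) {
        return Long.parseLong(props.getProperty(topic + "-" + partition, "0"));
    }

    public void save(String topic, int partition, long nextOffset) throws IOException {
        props.setProperty(topic + "-" + partition, Long.toString(nextOffset));
        try (OutputStream out = Files.newOutputStream(file)) { props.store(out, null); }
    }

    public static void main(String[] args) throws Exception {
        Path f = Files.createTempDirectory("offsets").resolve("offsets.properties");
        OffsetStore store = new OffsetStore(f);
        store.save("events", 0, 42L);
        // A new instance (e.g. after a restart) sees the persisted position.
        System.out.println(new OffsetStore(f).load("events", 0)); // prints 42
    }
}
```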
Hi Jason,
I see in the release notes that this issue seems to be fixed, but it is
marked as 'duplicate':
https://issues.apache.org/jira/browse/KAFKA-156
Maybe you can consider removing this kind of unfixed issue from the
release notes.
In my case I'm waiting for that fix, and I 'was' very happy to se
way to configure a fixed schema to be attached to any record?
Thanks
Enrico
2016-09-20 8:46 GMT+02:00 Enrico Olivelli - Diennea
:
> Hi,
> I'm trying to use the Confluent JDBC Sink as Sri is doing but without a
> schema.
> I do not want to write "schema" + "paylo
at org.apache.kafka.connect.runtime.WorkerSinkTask.convertMessages(WorkerSinkTask.java:266)
at org.apache.kafka.connect.runtime.WorkerSinkTask.poll(WorkerSinkTask.java:175)
at org.apache.kafka.connect.runtime.WorkerSinkTaskThread.iteration(Wo
Hi,
Can someone help on this topic?
Thank you
Enrico
On Wed, Jul 27, 2016 at 12:26 Enrico Olivelli - Diennea <
enrico.olive...@diennea.com> wrote:
> Hi,
> I'm running Kafka launching KafkaServerStartable inside my JVM (I'm using
> version 0.10.0.0).
>
> I'
t it as a black box, as far as
possible
Thank you
--
Enrico Olivelli
Software Development Manager @Diennea
Tel.: (+39) 0546 066100 - Int. 925
Viale G.Marconi 30/14 - 48018 Faenza (RA)
MagNews - E-mail Marketing Solutions
http://www.magnews.it
Diennea - Digital Marketi
ote:
>
> > Enrico,
> >
> > I didn't quite get it. Can you please elaborate?
> >
> > - Shekar
> >
> > On Sun, Jun 26, 2016 at 12:06 AM, Enrico Olivelli
> > wrote:
> >
> >> Hi,
> >> I think you should call 'get
ll,
> >>
> >> request=RequestSend(header={api_key=3,api_version=0,correlation_id=0,client_id=testClientId}, body={topics=[test]}), isInitiatedByNetworkClient, createdTimeMs=1466799624967, sendTimeMs=0) to node -1
> >> 16:20:25.021 [kafka-producer-network-thread | testClientId] DEBUG org.apache.kafka.clients.Metadata - Updated cluster metadata version 2 to Cluster(nodes = [Node(0, cmp-arch-kafka-01d.something.com, 9092)], partitions = [Partition(topic = test, partition = 0, leader = 0, replicas = [0,], isr = [0,]])
> >> 16:20:25.035 [kafka-producer-network-thread | testClientId] DEBUG o.apache.kafka.clients.NetworkClient - Initiating connection to node 0 at cmp-arch-kafka-01d.something.com:9092.
> >> 16:20:25.041 [kafka-producer-network-thread | testClientId] DEBUG o.a.kafka.common.metrics.Metrics - Added sensor with name node-0.bytes-sent
> >> 16:20:25.041 [kafka-producer-network-thread | testClientId] DEBUG o.a.kafka.common.metrics.Metrics - Added sensor with name node-0.bytes-received
> >> 16:20:25.042 [kafka-producer-network-thread | testClientId] DEBUG o.a.kafka.common.metrics.Metrics - Added sensor with name node-0.latency
> >> 16:20:25.042 [kafka-producer-network-thread | testClientId] DEBUG o.apache.kafka.clients.NetworkClient - Completed connection to node 0
> >> 16:20:25.042 [kafka-producer-network-thread | testClientId] DEBUG o.a.kafka.common.metrics.Metrics - Added sensor with name topic.test.records-per-batch
> >> 16:20:25.043 [kafka-producer-network-thread | testClientId] DEBUG o.a.kafka.common.metrics.Metrics - Added sensor with name topic.test.bytes
> >> 16:20:25.043 [kafka-producer-network-thread | testClientId] DEBUG o.a.kafka.common.metrics.Metrics - Added sensor with name topic.test.compression-rate
> >> 16:20:25.043 [kafka-producer-network-thread | testClientId] DEBUG o.a.kafka.common.metrics.Metrics - Added sensor with name topic.test.record-retries
> >> 16:20:25.043 [kafka-producer-network-thread | testClientId] DEBUG o.a.kafka.common.metrics.Metrics - Added sensor with name topic.test.record-errors
> >>
> >>
> >>
> >>
> >> when I lower the backoff to say 50ms
> >>
> >> I get a logged exception at the start:
> >>
> >> 16:24:30.743 [main] DEBUG o.a.k.clients.producer.KafkaProducer -
> >> Exception occurred during message send:
> >> org.apache.kafka.common.errors.TimeoutException: Failed to update
> >> metadata after 50 ms.
> >>
> >>
> >>
> >>
> >> my thought is that the producer tries to sync metadata but is lazy
> >> about it and doesn't try until a message is sent. But then, since I have
> >> linger set to 1 ms, any other messages in that batch are lost as well.
> >> This is apparently solved by adding a tiny delay so that the messages
> >> are sent in 2 batches. Interestingly, to me, after that delay is added,
> >> the previously lost first message comes back.
> >>
> >> What I'd like to see is the producer perform all of its metadata
> >> syncing on startup.
> >>
> >> Am I missing something here?
> >>
> >>
> >>
> >>
> >>
> >> > On Jun 24, 2016, at 4:05 PM, Shekar Tippur wrote:
> >> >
> >> > Hello,
> >> >
> >> > I have a simple Kafka producer directly taken off of
> >> >
> >> >
> >>
> https://kafka.apache.org/090/javadoc/index.html?org/apache/kafka/clients/producer/KafkaProducer.html
> >> >
> >> > I have changed the bootstrap.servers property.
> >> >
> >> > props.put("bootstrap.servers", "localhost:9092");
> >> >
> >> > I don't see any events added to the test topic.
> >> >
> >> > console-producer works fine with broker localhost:9092.
> >> >
> >> > I see that if I change props.put("metadata.fetch.timeout.ms", 100);
> >> >
> >> > the wait reduces, but I still don't see any events in the topic.
> >> >
> >> > Can someone please explain what could be going on?
> >> >
> >> > - Shekar
> >>
> >>
> >
>
--
-- Enrico Olivelli
f Kafka it will house the info needed to populate
> > bootstrap.servers; a wrapper will be placed around the Kafka producer and
> > will watch this ZK value. When the value changes, the producer instance
> > with the old value will be shut down, and a new producer with the new
> > bootstrap.servers info will replace it.
> > >>
> > >> Is there a best practice for achieving this?
> > >>
> > >> Is there a way to dynamically update bootstrap.servers?
> > >>
> > >> Does the producer always go to the same machine from bootstrap.servers
> > >> when it refreshes the MetaData after metadata.max.age.ms has expired?
> > >>
> > >> Thanks!
> > >
> >
>
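The wrapper described above boils down to an atomic reference plus a factory; nothing Kafka-specific is needed for the swap itself. A generic toy sketch (compiles without kafka-clients; in practice `C` would be a `KafkaProducer` built from the new bootstrap.servers value, and `onConfigChange` would be driven by the ZK watcher):

```java
import java.util.concurrent.atomic.AtomicReference;
import java.util.function.Function;

// Toy wrapper: when the watched value (e.g. bootstrap.servers read from ZK)
// changes, close the old delegate and replace it with a freshly built one.
public class SwappingClient<C extends AutoCloseable> {
    private final Function<String, C> factory;
    private final AtomicReference<String> config = new AtomicReference<>();
    private final AtomicReference<C> delegate = new AtomicReference<>();

    public SwappingClient(String initial, Function<String, C> factory) {
        this.factory = factory;
        this.config.set(initial);
        this.delegate.set(factory.apply(initial));
    }

    // Called by the ZK watcher when the stored bootstrap.servers value changes.
    public synchronized void onConfigChange(String newValue) throws Exception {
        if (newValue.equals(config.get())) return;
        C old = delegate.get();
        delegate.set(factory.apply(newValue)); // build the replacement first
        config.set(newValue);
        old.close();                           // then shut the old instance down
    }

    public C current() { return delegate.get(); }

    public static void main(String[] args) throws Exception {
        class Fake implements AutoCloseable {
            final String servers; boolean closed;
            Fake(String s) { servers = s; }
            public void close() { closed = true; }
        }
        SwappingClient<Fake> c = new SwappingClient<>("a:9092", Fake::new);
        Fake first = c.current();
        c.onConfigChange("b:9092");
        System.out.println(c.current().servers + " " + first.closed); // b:9092 true
    }
}
```

In-flight sends during the swap are exactly the edge cases and race conditions mentioned: you would want to drain/flush the old producer before closing it.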
--
-- Enrico Olivelli
ges (using log compaction to save space). You'd need to think
about all the edge cases and race conditions.
Dave
On Apr 29, 2016, at 03:32, Enrico Olivelli - Diennea
<enrico.olive...@diennea.com> wrote:
Hi,
I would like to use Kafka as transaction log in order to suppor
ag "close other
producers for this partition"
- Producer A gets a "ProducerFencedException" and knows that someone else
became the new leader
What do you think?
Is it interesting to others?
Is there any way to achieve this goal using current Kafka features?
Thank you
--
Enric
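For what it's worth, the epoch-fencing idea sketched in this thread can be illustrated standalone (all names hypothetical, not a Kafka API; Kafka later delivered something similar with the transactional producer, where a newer registration for the same `transactional.id` fences the old instance with `ProducerFencedException`):

```java
import java.util.concurrent.atomic.AtomicLong;

// Toy epoch fencing: the latest registered writer holds the highest epoch;
// a writer with a stale epoch is rejected, mirroring ProducerFencedException.
public class FencedLog {
    static class FencedException extends RuntimeException {}

    private final AtomicLong latestEpoch = new AtomicLong(0);

    // "Producer B asks to become the leader": bump the epoch, fencing older holders.
    public long register() { return latestEpoch.incrementAndGet(); }

    public void append(long epoch, String record) {
        if (epoch != latestEpoch.get()) throw new FencedException();
        // ... actually write the record ...
    }

    public static void main(String[] args) {
        FencedLog log = new FencedLog();
        long a = log.register();     // producer A becomes leader
        log.append(a, "ok");         // accepted
        long b = log.register();     // producer B takes over; A is now fenced
        try {
            log.append(a, "late");
        } catch (FencedException e) {
            System.out.println("producer A fenced"); // A learns it lost leadership
        }
        log.append(b, "ok");
    }
}
```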
> On Wed, Dec 23, 2015 at 1:08 AM, Enrico Olivelli - Diennea <
> enrico.olive...@diennea.com> wrote:
>
> > Hi,
> > I'm running a brand new Kafka cluster (version 0.9.0.0). During my tests
> > I noticed this error at Consumer.partitionsFor during a
at org.apache.kafka.common.requests.MetadataResponse.<init>(MetadataResponse.java:130)
at org.apache.kafka.clients.consumer.internals.Fetcher.getTopicMetadata(Fetcher.java:203)
at org.apache.kafka.clients.consumer.KafkaConsumer.partitionsFor(KafkaConsumer.java:1143)
at mag
?
thanks
Enrico Olivelli