> the broker.id needs to be set.
> >
> > Do we have any standard script or preferred way of doing this, or
> > anything suggested by Kafka experts, if we are not using EBS?
> >
> > Thanks and Regards,
> > Srinivas
> >
> > On Thu, Nov 15, 2018
is failed, a new
> > > broker/instance spun up in AWS gets assigned a new broker.id. The
> > > issue is, with this approach, we need to re-assign the
> > > topics/replications onto the new
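One hedged sketch, not an official standard: derive a stable broker.id from EC2 instance metadata at startup. The helper name, the +1000 offset, and the choice of metadata field are all illustrative assumptions.

```shell
# Hypothetical helper: map an EC2 ami-launch-index to a broker.id. The +1000
# offset just keeps generated ids clear of any hand-assigned broker ids.
broker_id_from_index() {
  echo $((1000 + $1))
}

# At instance boot (illustrative; 169.254.169.254 is the EC2 metadata endpoint):
#   IDX=$(curl -s http://169.254.169.254/latest/meta-data/ami-launch-index)
#   echo "broker.id=$(broker_id_from_index "$IDX")" >> /etc/kafka/server.properties
broker_id_from_index 2   # prints 1002
```

Alternatively, brokers since 0.9 can auto-assign ids (broker.id.generation.enable=true), but a replacement instance then gets a fresh id, which is exactly the topic-reassignment problem described above.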
n the older clusters for a transition period.
>
> Is it possible to use 2 different ports for the same protocol (PLAINTEXT)
> in the broker configuration? Can I simply put 2 connection strings in the
> *listeners* config?
>
> Thank you!
> Dan
>
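For reference: with Kafka 0.10.2+ (KIP-103) two listeners may share a security protocol as long as each has a distinct name. A minimal server.properties sketch, where the listener names OLD/NEW and the ports are placeholder assumptions:

```properties
# Two PLAINTEXT listeners under different names; clients can use either port.
listeners=OLD://0.0.0.0:9092,NEW://0.0.0.0:9093
listener.security.protocol.map=OLD:PLAINTEXT,NEW:PLAINTEXT
# Inter-broker traffic goes over one named listener.
inter.broker.listener.name=NEW
```

On versions before 0.10.2, listeners are keyed by protocol name, so two PLAINTEXT entries in *listeners* would collide.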
--
Kaufman Ng
+1 646 961 8063
Solutions Architect | Confluent | www.confluent.io
DEBUG Fetcher:180 - Sending fetch for partitions
> [apptivodb8-campaign-tracker-email-0] to broker xx.xx.xx.xx:9092 (id: 2
> rack: null)
> DEBUG Fetcher:180 - Sending fetch for partitions
> [apptivodb5-campaign-tracker-email-1] to broker xx.xx.xx.xx:9092 (id: 2
> rack: null)
>
>
>
> > ERROR logs.
> >
> > When I analyzed the log file, the DEBUG entries are taking up most of the
> > space, so I'm running into a disk space issue.
> >
> > I'm using *log4j.properties* for managing the logs. Now I want to remove
> > the DEBUG logs from my logger file.
> >
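A hedged log4j.properties sketch (the appender name is an assumption; adjust it to match the existing file) that raises the level so DEBUG records are dropped:

```properties
# Raise the root level from DEBUG to INFO so DEBUG records never reach the file.
log4j.rootLogger=INFO, kafkaAppender
# Or silence just the noisy consumer fetcher logger shown above:
log4j.logger.org.apache.kafka.clients.consumer.internals.Fetcher=WARN
```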
> > Has anybody else run into this problem and found a good solution? I'm
> > interested to hear any other solutions for tearing down and rebuilding
> SSL
> > connections on the fly.
> >
> >
> > Thanks,
> > Alex
>
>
>
> --
> Sönke Liebau
> Partner
> Tel. +49 179 7940878
> OpenCore GmbH & Co. KG - Thomas-Mann-Straße 8 - 22880 Wedel - Germany
>
Parmar <sunilosu...@gmail.com> wrote:
> >
> > We're using 0.9 (CDH) and consumer offsets are stored within Kafka. What
> > is the preferred way to get consumer offsets from code or a script for
> > monitoring? Is there any sample code/script to do so?
> >
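A hedged sketch, assuming the 0.9 tooling: the ConsumerGroupCommand shipped with Kafka can read Kafka-stored offsets when pointed at the brokers. The group name and broker address are placeholders.

```shell
# --new-consumer reads offsets from the __consumer_offsets topic, not ZooKeeper.
bin/kafka-consumer-groups.sh --new-consumer \
  --bootstrap-server broker1:9092 \
  --describe --group my-group
```

The --describe output includes current offset, log end offset, and lag per partition, which is usually what monitoring needs.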
n 0.11.0.
> In
> > > addition, Matthias has been very active in the community beyond the
> > > mailing list: he's getting close to 1,000 upvotes and 100 helpful flags
> > > on SO for answering almost all questions about Kafka Streams.
> > >
> > > Thank you for your contribution and welcome to Apache Kafka, Matthias!
> > >
> > >
> > >
> > > Guozhang, on behalf of the Apache Kafka PMC
> >
>
as
> CLIENT, PLAINTEXT, SASL/SSL, etc. I see the encryption part of the
> documentation, but is it just inferred what these listeners apply to?
>
> Thank you in advance!
>
values about retention
> time in LogManager are static:
> https://github.com/apache/kafka/blob/0.10.0/core/src/
> main/scala/kafka/server/KafkaServer.scala#L597-L620
>
> Kafka version: kafka_2.11-0.10.0.1
>
> Thanks
> --
> haitao.yao
>
not give anything ..
>
> regards.
>
> On Tue, Jul 25, 2017 at 9:50 AM, Kaufman Ng <kauf...@confluent.io> wrote:
>
>> Confluent Schema Registry is available in the DC/OS Universe, see here
>> for the package definitions https://github.com
>> /mesosphere/univer
>
> regards.
>
> --
> Debasish Ghosh
> http://manning.com/ghosh2
> http://manning.com/ghosh
>
> Twttr: @debasishg
> Blog: http://debasishg.blogspot.com
> Code: http://github.com/debasishg
>
ot; <satyavath...@imimobile.com>
> wrote:
>
> > Hi,
> > I have created a topic with 500 partitions in a 3-node
> > cluster with replication factor 3. Kafka version is 0.11. I executed the
> > lsof command and it lists more than 1 lakh (100,000) open files. Why are
> > there so many open f
authenticate with SASL
> and
> > use SSL for encryption?
> >
> > If the latter is true, then is it correct to assume that encryption will
> > take place using SSL if a client authenticates using a Kerberos ticket so
> > long as they have a trust store configured?
> >
> > Thank you.
> >
> > Waleed
> >
>
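For what it's worth, the distinction is carried by the security protocol name itself: SASL_PLAINTEXT authenticates without encryption, while SASL_SSL layers SASL authentication (e.g. Kerberos/GSSAPI) over a TLS-encrypted connection. A hedged client-config sketch, with placeholder paths:

```properties
security.protocol=SASL_SSL
sasl.mechanism=GSSAPI
# The truststore lets the client verify the broker's TLS certificate;
# encryption comes from TLS, not from the Kerberos ticket itself.
ssl.truststore.location=/path/to/client.truststore.jks
ssl.truststore.password=changeit
```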
I have seen some github projects like kalinka (
> https://github.com/dcsolutions/kalinka) and also seen that I could do with
> Apache Camel.
>
> I would like to ask about any experience or advice you can share on
> bridging between ActiveMQ and Kafka.
>
> Thanks in
?
> I can't see why it would not be possible from a technical point of view...
>
> Cheers
> Nico
>
>
re its own specific way of keeping track of offsets.
On Sat, Mar 4, 2017 at 1:23 AM, VIVEK KUMAR MISHRA 13BIT0066 <
vivekkumar.mishra2...@vit.ac.in> wrote:
> Hi All,
>
> I want to create my own Kafka connector which will connect multiple data
> sources.
> Could anyone please help me
anks.
>
>
>
> Yuanjia Li
>
nd$.main(TopicCommand.scala:53)
> at kafka.admin.TopicCommand.main(TopicCommand.scala)
>
>
>
>
>
> **
>
> *Regards,*
> *Laxmi Narayan Patel*
> *MCA NIT Durgapur (2011-2014)*
> *Mob:-9741292048,8345847473*
>
. Throughout this, he
> > displayed great technical judgment, high-quality work and willingness
> > to contribute where needed to make Apache Kafka awesome.
> >
> > Thank you for your contributions, Grant :)
> >
> > --
> > Gwen Shapira
> > Product Mana
port, do we need port forwarding from this configured
> port to the actual broker port within the container, so that the broker
> itself can also reach itself, right?
>
> thanks.
> regards, aki
>
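A hedged sketch of the usual Docker arrangement (host name and ports are placeholders): the broker binds inside the container, while advertised.listeners carries the externally reachable address that clients, and the broker itself, will use.

```properties
# Bind on all interfaces inside the container.
listeners=PLAINTEXT://0.0.0.0:9092
# Address returned to clients in metadata; it must be reachable from outside,
# e.g. the host port that Docker forwards to the container's 9092.
advertised.listeners=PLAINTEXT://kafka.example.com:19092
```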
nd Avro payloads within Kafka messages using the
> script "kafka-console-producer.sh"? Thanks!
>
>
>
>
>
>
> Best Regards
>
> Johnny
>
>
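For context, the plain kafka-console-producer.sh treats stdin as opaque bytes, so it cannot serialize Avro for you; the Confluent distribution ships a kafka-avro-console-producer that can. A hedged sketch, where the addresses, topic, and schema are placeholders:

```shell
bin/kafka-avro-console-producer --broker-list localhost:9092 --topic test \
  --property schema.registry.url=http://localhost:8081 \
  --property value.schema='{"type":"record","name":"Rec","fields":[{"name":"f1","type":"string"}]}'
# Then type JSON records matching the schema, e.g. {"f1":"hello"}
```

This requires a running Schema Registry, since the producer registers the schema and embeds its id in each message.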
;> >>> That is;
> >> >>>
> >> >>> builder.stream(sourceTopic).to(targetTopic)
> >> >>>
> >> >>> Once merged I no longer require the sourceTopic. I want to delete
> it.
> >> >>>
> >> >>> How can I do that programmatically in Java? I use the high-level
> >> >>> client APIs, Kafka v0.10.0.1.
> >> >>>
> >> >>>
> >> >>> Thanks
> >> >>> --
> >> >>> -Ratha
> >> >>> http://vvratha.blogspot.com/
> >> >>>
> --
> -Ratha
> http://vvratha.blogspot.com/
>
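In 0.10.0.1 there is no public AdminClient for topic deletion (that arrived in 0.11), so a common workaround is the broker-internal AdminUtils API over ZooKeeper. A hedged sketch: the ZooKeeper address and topic name are placeholders, and the brokers must run with delete.topic.enable=true or the deletion is silently ignored.

```java
import kafka.admin.AdminUtils;
import kafka.utils.ZkUtils;

public class TopicDeleter {
    public static void main(String[] args) {
        // Internal API in 0.10.x; it may change between releases.
        ZkUtils zkUtils = ZkUtils.apply("zookeeper1:2181", 30000, 30000, false);
        try {
            // Marks the topic for deletion; the controller removes it asynchronously.
            AdminUtils.deleteTopic(zkUtils, "sourceTopic");
        } finally {
            zkUtils.close();
        }
    }
}
```

Note the deletion is asynchronous: the topic is only marked, and the controller completes the removal in the background.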
et consumer offset by
> *kafka-run-class kafka.tools.ConsumerOffsetChecker*, but cannot find the
> offset in the ZooKeeper path /consumer/x.
>
> I wonder where Kafka stores consumer offsets in version 0.9.0.0. Is there
> anything I did wrong?
> any help would be appreciated. thank you~
>
would be greatly
> > appreciated.
> >
> > Thanks
> >
> > On Mon, Sep 19, 2016 at 8:18 AM, Vadim Keylis <vkeylis2...@gmail.com>
> wrote:
> >
> >> Good morning. Which benchmarking tools should we use to compare the
> >> performance of 0.8 and
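A hedged sketch of the perf tools that ship with Kafka itself; the topic, counts, and broker address are placeholders, and note the 0.8-era versions of these tools took slightly different flags (e.g. --broker-list and --messages on the producer side).

```shell
bin/kafka-producer-perf-test.sh --topic perf-test --num-records 1000000 \
  --record-size 100 --throughput -1 \
  --producer-props bootstrap.servers=localhost:9092
bin/kafka-consumer-perf-test.sh --broker-list localhost:9092 \
  --topic perf-test --messages 1000000
```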
>
> From: tong...@csbucn.com
> Sent: 2016-06-02 09:19
> To: users
> Subject: Kafka forum register
> Hello,
>
> My project is using Kafka, and I want to register as a user in the forum.
> What can I do?
>
>
>
> Tong SS
>
only happens when the topic does not exist. When we restart the
> failing consumer, it can then connect correctly to the topic and consume it.
> How can this error be prevented?
>
> Best regards
>
> Patrick