Re: Kafka

2020-10-01 Thread Akhilesh Pathodia
Hi,

You can use Kafka Connect to export topic data to external systems.

https://docs.confluent.io/3.1.1/connect/connect-jdbc/docs/sink_connector.html
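
For example, a minimal JDBC sink connector configuration along the lines of
that documentation might look like the following (topic name, connection
details, and credentials are placeholders):

name=mssql-sink
connector.class=io.confluent.connect.jdbc.JdbcSinkConnector
tasks.max=1
topics=my-topic
connection.url=jdbc:sqlserver://localhost:1433;databaseName=mydb
connection.user=<user>
connection.password=<password>
insert.mode=insert
auto.create=true

With auto.create=true the connector tries to create the target table from
the record schema, so the topic data needs a schema (e.g. Avro with Schema
Registry, or JSON with schemas enabled in the converter).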

Thanks,
Akhilesh Pathodia

On Thu, Oct 1, 2020 at 7:29 AM Spectrum lib  wrote:

> Hi All,
> I am using Kafka version kafka_2.12-2.4.0. I have 1 Kafka broker
> and a topic with 1 partition. How can I push topic data into a
> MS SQL database using any language (Java, C#, etc.)?
>
> Thank You in advance
>


Re: Kafka producer drops large messages

2017-04-11 Thread Akhilesh Pathodia
Hi Smriti,

You will have to change some of the broker configuration, like
message.max.bytes, to a larger value. The default value is 1 MB, I guess.

Please check the below configs:

Broker Configuration

- message.max.bytes

  Maximum message size the broker will accept. Must be smaller than the
  consumer fetch.message.max.bytes, or the consumer cannot consume the
  message.

  Default value: 1000000 (1 MB)

- log.segment.bytes

  Size of a Kafka data file. Must be larger than any single message.

  Default value: 1073741824 (1 GiB)

- replica.fetch.max.bytes

  Maximum message size a broker can replicate. Must be larger than
  message.max.bytes, or a broker can accept messages it cannot replicate,
  potentially resulting in data loss.

  Default value: 1048576 (1 MiB)
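
For example, to allow messages of up to ~2 MB you could set the following
(values are illustrative; keep them consistent across the broker, the
replicas, and the consumer):

# server.properties (broker)
message.max.bytes=2097152
replica.fetch.max.bytes=2097152

# old (0.8.x) consumer configuration
fetch.message.max.bytes=2097152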

Thanks,
Akhilesh

On Wed, Apr 12, 2017 at 12:23 AM, Smriti Jha  wrote:

> Hello all,
>
> Can somebody shed light on kafka producer's behavior when the total size of
> all messages in the buffer (bounded by queue.buffering.max.ms) exceeds the
> socket buffer size (send.buffer.bytes)?
>
> I'm using Kafka v0.8.2 with the old Producer API and have noticed that our
> systems are dropping a few messages that are closer to 1MB in size. A few
> messages that are only a few KBs in size and are attempted to be sent
> around the same time as >1MB messages also get dropped. The official
> documentation does talk about never dropping a "send" in case the buffer
> has reached queue.buffering.max.messages but I don't think that applies to
> size of the messages.
>
> Thanks!
>


Re: Topic deletion

2017-04-07 Thread Akhilesh Pathodia
I am not sure, but the kafka delete command does not actually delete the
topic; it only marks it for deletion. This is probably fixed in a later
version of kafka.
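
Note that actual deletion only proceeds when the brokers run with the
feature enabled in server.properties:

delete.topic.enable=true

Otherwise (and on some older versions with known deletion bugs) the topic
simply stays marked for deletion.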

On Fri, Apr 7, 2017 at 2:14 PM, Adrian McCague 
wrote:

> Hi Akhilesh,
>
> Why would this approach need to be taken over the kafka-topics tool, out
> of interest?
>
> Thanks
> Adrian
>
> -Original Message-
> From: Akhilesh Pathodia [mailto:pathodia.akhil...@gmail.com]
> Sent: 07 April 2017 09:37
> To: users@kafka.apache.org
> Subject: Re: Topic deletion
>
> Hi Adrian,
>
> You will have to delete the topic's znode (the directory under
> /brokers/topics) from zookeeper. This can be done from the zookeeper CLI.
> Connect to it using the below command:
>
> zookeeper-client -server <zookeeper host:port>
>
> Then run the below command:
>
> rmr /brokers/topics/<topic name>
>
> Thanks,
> Akhilesh
>
> On Thu, Apr 6, 2017 at 11:03 PM, Adrian McCague 
> wrote:
>
> > Hi all,
> >
> > I am trying to understand topic deletion in kafka, there appears to be
> > very little documentation or answers on how this works. Typically they
> > just say to turn on the feature on the broker (in my case it is).
> >
> > I executed:
> > kafka-topics.bat --delete --zookeeper keeperhere --topic mytopic
> >
> > Running this again yields:
> > Topic mytopic is already marked for deletion.
> >
> > --describe yields:
> > Topic: mytopic  PartitionCount: 6  ReplicationFactor: 3  Configs: retention.ms=0
> >     Topic: mytopic  Partition: 0  Leader: -1  Replicas: 1006,1001,1005  Isr:
> >     Topic: mytopic  Partition: 1  Leader: -1  Replicas: 1001,1005,1003  Isr:
> >     Topic: mytopic  Partition: 2  Leader: -1  Replicas: 1005,1003,1004  Isr:
> >     Topic: mytopic  Partition: 3  Leader: -1  Replicas: 1003,1004,1007  Isr:
> >     Topic: mytopic  Partition: 4  Leader: -1  Replicas: 1004,1007,1006  Isr:
> >     Topic: mytopic  Partition: 5  Leader: -1  Replicas: 1007,1006,1001  Isr:
> >
> > You can see that the deletion mark has meant that the Leader is -1.
> > Also I read somewhere that retention needs to be set to something low
> > to trigger the deletion, hence the config of retention.ms=0
> >
> > Consumers (or streams in my case) no longer see the topic:
> > org.apache.kafka.streams.errors.TopologyBuilderException: Invalid
> > topology building: stream-thread [StreamThread-1] Topic not found:
> > mytopic
> >
> > And I can't create a new topic in its place:
> > [2017-04-06 18:26:00,702] ERROR
> > org.apache.kafka.common.errors.TopicExistsException:
> > Topic 'mytopic' already exists. (kafka.admin.TopicCommand$)
> >
> > I am a little lost as to where to go next, could someone explain how
> > topic deletion is actually applied when a topic is 'marked' for
> > deletion as that may help trigger it.
> >
> > Thanks!
> > Adrian
> >
> >
>


Re: Topic deletion

2017-04-07 Thread Akhilesh Pathodia
Hi Adrian,

You will have to delete the topic's znode (the directory under
/brokers/topics) from zookeeper. This can be done from the zookeeper CLI.
Connect to it using the below command:

zookeeper-client -server <zookeeper host:port>

Then run the below command:

rmr /brokers/topics/<topic name>
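
The pending-deletion marker that kafka-topics sets lives under
/admin/delete_topics, so you can inspect it, and remove it if you are
cleaning up by hand, the same way (the topic name is a placeholder):

ls /admin/delete_topics
rmr /admin/delete_topics/<topic name>

Keep in mind that removing znodes by hand bypasses the brokers' own cleanup,
so the topic's log directories on disk may also need to be deleted manually.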

Thanks,
Akhilesh

On Thu, Apr 6, 2017 at 11:03 PM, Adrian McCague 
wrote:

> Hi all,
>
> I am trying to understand topic deletion in kafka, there appears to be
> very little documentation or answers on how this works. Typically they just
> say to turn on the feature on the broker (in my case it is).
>
> I executed:
> kafka-topics.bat --delete --zookeeper keeperhere --topic mytopic
>
> Running this again yields:
> Topic mytopic is already marked for deletion.
>
> --describe yields:
> Topic: mytopic  PartitionCount: 6  ReplicationFactor: 3  Configs: retention.ms=0
>     Topic: mytopic  Partition: 0  Leader: -1  Replicas: 1006,1001,1005  Isr:
>     Topic: mytopic  Partition: 1  Leader: -1  Replicas: 1001,1005,1003  Isr:
>     Topic: mytopic  Partition: 2  Leader: -1  Replicas: 1005,1003,1004  Isr:
>     Topic: mytopic  Partition: 3  Leader: -1  Replicas: 1003,1004,1007  Isr:
>     Topic: mytopic  Partition: 4  Leader: -1  Replicas: 1004,1007,1006  Isr:
>     Topic: mytopic  Partition: 5  Leader: -1  Replicas: 1007,1006,1001  Isr:
>
> You can see that the deletion mark has meant that the Leader is -1.
> Also I read somewhere that retention needs to be set to something low to
> trigger the deletion, hence the config of retention.ms=0
>
> Consumers (or streams in my case) no longer see the topic:
> org.apache.kafka.streams.errors.TopologyBuilderException: Invalid
> topology building: stream-thread [StreamThread-1] Topic not found: mytopic
>
> And I can't create a new topic in its place:
> [2017-04-06 18:26:00,702] ERROR 
> org.apache.kafka.common.errors.TopicExistsException:
> Topic 'mytopic' already exists. (kafka.admin.TopicCommand$)
>
> I am a little lost as to where to go next, could someone explain how topic
> deletion is actually applied when a topic is 'marked' for deletion as that
> may help trigger it.
>
> Thanks!
> Adrian
>
>


Re: log.retention.hours

2017-02-06 Thread Akhilesh Pathodia
A reboot is not required for this config. After changing the configuration,
wait for the interval set by log.retention.check.interval.ms; messages older
than 1 hour should then get deleted.
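
For example (illustrative values; 300000 ms, i.e. 5 minutes, is the default
check interval):

# server.properties
log.retention.hours=1
log.retention.check.interval.ms=300000

One caveat: broker-side settings in server.properties are generally only
picked up on restart, whereas a topic-level override can be applied to a
running cluster, e.g. (in later versions this moved to kafka-configs.sh):

kafka-topics.sh --zookeeper <host:port> --alter --topic <topic> --config retention.ms=3600000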

Thanks,
Akhilesh

On Mon, Feb 6, 2017 at 12:16 PM, Алексей Федосов 
wrote:

> Hello,
> I've just started working with Apache Kafka, and from time to time I have
> questions. Could you please clarify one thing: in the configuration file
> server.properties I changed the value of the parameter log.retention.hours
> to 1. Is the new value of this parameter applied instantly, without needing
> to reboot the Kafka server, or is that an incorrect assumption? The thing
> is, I did not reboot the Kafka service, yet the messages in the topic were
> left as they were before the change. Does this mean the parameter
> log.retention.hours does not take effect without rebooting the Kafka
> service, or am I doing something wrong?
>


Re: how to reset kafka offset in zookeeper

2015-12-19 Thread Akhilesh Pathodia
What is the command to delete a group from zookeeper? I don't find a
/consumer/ directory. I am using Cloudera; is there any place in Cloudera
Manager where I can delete the group?
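
For reference, with the old ZooKeeper-based consumer the group data lives
under /consumers (plural), so from the zookeeper CLI something like the
following should work (host and group name are placeholders):

zookeeper-client -server <zookeeper host:port>
rmr /consumers/<group id>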

Thanks

On Sat, Dec 19, 2015 at 11:47 PM, Todd Palino  wrote:

> If what you want to do is reset to smallest, all you need to do is stop the
> consumer, delete the group from Zookeeper, and restart the consumer. It
> will automatically create the group again.
>
> You only need to export the offsets first if you later need to reset to
> where you were in the partitions.
>
> -Todd
>
> On Saturday, December 19, 2015, Akhilesh Pathodia <
> pathodia.akhil...@gmail.com> wrote:
>
> > What is the process for deleting the consumer group from zookeeper?
> Should
> > I export offset, delete and then import?
> >
> > Thanks,
> > Akhilesh
> >
> > On Fri, Dec 18, 2015 at 11:32 PM, Todd Palino  > > wrote:
> >
> > > Yes, that’s right. It’s just work for no real gain :)
> > >
> > > -Todd
> > >
> > > On Fri, Dec 18, 2015 at 9:38 AM, Marko Bonaći <
> marko.bon...@sematext.com
> > >
> > > wrote:
> > >
> > > > Hmm, I guess you're right Todd :)
> > > > Just to confirm, you meant that, while you're changing the exported
> > file
> > > it
> > > > might happen that one of the segment files becomes eligible for
> cleanup
> > > by
> > > > retention, which would then make the imported offsets out of range?
> > > >
> > > > Marko Bonaći
> > > > Monitoring | Alerting | Anomaly Detection | Centralized Log
> Management
> > > > Solr & Elasticsearch Support
> > > > Sematext <http://sematext.com/> | Contact
> > > > <http://sematext.com/about/contact.html>
> > > >
> > > > On Fri, Dec 18, 2015 at 6:29 PM, Todd Palino  > > wrote:
> > > >
> > > > > That works if you want to set to an arbitrary offset, Marko.
> However
> > in
> > > > the
> > > > > case the OP described, wanting to reset to smallest, it is better
> to
> > > just
> > > > > delete the consumer group and start the consumer with
> > auto.offset.reset
> > > > set
> > > > > to smallest. The reason is that while you can pull the current
> > smallest
> > > > > offsets from the brokers and set them in Zookeeper for the
> consumer,
> > by
> > > > the
> > > > > time you do that the smallest offset is likely no longer valid.
> This
> > > > means
> > > > > you’re going to resort to the offset reset logic anyways.
> > > > >
> > > > > -Todd
> > > > >
> > > > >
> > > > > On Fri, Dec 18, 2015 at 7:10 AM, Marko Bonaći <
> > > marko.bon...@sematext.com 
> > > > >
> > > > > wrote:
> > > > >
> > > > > > You can also do this:
> > > > > > 1. stop consumers
> > > > > > 2. export offsets from ZK
> > > > > > 3. make changes to the exported file
> > > > > > 4. import offsets to ZK
> > > > > > 5. start consumers
> > > > > >
> > > > > > e.g.
> > > > > > bin/kafka-run-class.sh kafka.tools.ExportZkOffsets --group
> > group-name
> > > > > > --output-file /tmp/zk-offsets --zkconnect localhost:2181
> > > > > > bin/kafka-run-class.sh kafka.tools.ImportZkOffsets --input-file
> > > > > > /tmp/zk-offsets --zkconnect localhost:2181
> > > > > >
> > > > > > Marko Bonaći
> > > > > > Monitoring | Alerting | Anomaly Detection | Centralized Log
> > > Management
> > > > > > Solr & Elasticsearch Support
> > > > > > Sematext <http://sematext.com/> | Contact
> > > > > > <http://sematext.com/about/contact.html>
> > > > > >
> > > > > > On Fri, Dec 18, 2015 at 4:06 PM, Jens Rantil <
> jens.ran...@tink.se
> > >
> > > > > wrote:
> > > > > >
> > > > > > > Hi,
> > > > > > >
> > > > > > > I noticed that a consumer in the new consumer API supports
> > setting
> > > > the
> > > > > > > offset for a partition to beginning. I assume doing so also
> would
> > > > > update
> > > 

Re: how to reset kafka offset in zookeeper

2015-12-19 Thread Akhilesh Pathodia
What is the process for deleting the consumer group from zookeeper? Should
I export offsets, delete, and then import?

Thanks,
Akhilesh

On Fri, Dec 18, 2015 at 11:32 PM, Todd Palino  wrote:

> Yes, that’s right. It’s just work for no real gain :)
>
> -Todd
>
> On Fri, Dec 18, 2015 at 9:38 AM, Marko Bonaći 
> wrote:
>
> > Hmm, I guess you're right Todd :)
> > Just to confirm, you meant that, while you're changing the exported file
> it
> > might happen that one of the segment files becomes eligible for cleanup
> by
> > retention, which would then make the imported offsets out of range?
> >
> > Marko Bonaći
> > Monitoring | Alerting | Anomaly Detection | Centralized Log Management
> > Solr & Elasticsearch Support
> > Sematext <http://sematext.com/> | Contact
> > <http://sematext.com/about/contact.html>
> >
> > On Fri, Dec 18, 2015 at 6:29 PM, Todd Palino  wrote:
> >
> > > That works if you want to set to an arbitrary offset, Marko. However in
> > the
> > > case the OP described, wanting to reset to smallest, it is better to
> just
> > > delete the consumer group and start the consumer with auto.offset.reset
> > set
> > > to smallest. The reason is that while you can pull the current smallest
> > > offsets from the brokers and set them in Zookeeper for the consumer, by
> > the
> > > time you do that the smallest offset is likely no longer valid. This
> > means
> > > you’re going to resort to the offset reset logic anyways.
> > >
> > > -Todd
> > >
> > >
> > > On Fri, Dec 18, 2015 at 7:10 AM, Marko Bonaći <
> marko.bon...@sematext.com
> > >
> > > wrote:
> > >
> > > > You can also do this:
> > > > 1. stop consumers
> > > > 2. export offsets from ZK
> > > > 3. make changes to the exported file
> > > > 4. import offsets to ZK
> > > > 5. start consumers
> > > >
> > > > e.g.
> > > > bin/kafka-run-class.sh kafka.tools.ExportZkOffsets --group group-name
> > > > --output-file /tmp/zk-offsets --zkconnect localhost:2181
> > > > bin/kafka-run-class.sh kafka.tools.ImportZkOffsets --input-file
> > > > /tmp/zk-offsets --zkconnect localhost:2181
> > > >
> > > > Marko Bonaći
> > > > Monitoring | Alerting | Anomaly Detection | Centralized Log
> Management
> > > > Solr & Elasticsearch Support
> > > > Sematext <http://sematext.com/> | Contact
> > > > <http://sematext.com/about/contact.html>
> > > >
> > > > On Fri, Dec 18, 2015 at 4:06 PM, Jens Rantil 
> > > wrote:
> > > >
> > > > > Hi,
> > > > >
> > > > > I noticed that a consumer in the new consumer API supports setting
> > the
> > > > > offset for a partition to beginning. I assume doing so also would
> > > update
> > > > > the offset in Zookeeper eventually.
> > > > >
> > > > > Cheers,
> > > > > Jens
> > > > >
> > > > > On Friday, December 18, 2015, Akhilesh Pathodia <
> > > > > pathodia.akhil...@gmail.com>
> > > > > wrote:
> > > > >
> > > > > > Hi,
> > > > > >
> > > > > > I want to reset the kafka offset in zookeeper so that the
> consumer
> > > will
> > > > > > start reading messages from first offset. I am using flume as a
> > > > consumer
> > > > > to
> > > > > > kafka. I have set the kafka property kafka.auto.offset.reset to
> > > > > "smallest",
> > > > > > but it does not reset the offset in zookeeper and that's why
> flume
> > > will
> > > > > not
> > > > > > read messages from first offset.
> > > > > >
> > > > > > Is there any way to reset kafka offset in zookeeper?
> > > > > >
> > > > > > Thanks,
> > > > > > Akhilesh
> > > > > >
> > > > >
> > > > >
> > > > > --
> > > > > Jens Rantil
> > > > > Backend engineer
> > > > > Tink AB
> > > > >
> > > > > Email: jens.ran...@tink.se
> > > > > Phone: +46 708 84 18 32
> > > > > Web: www.tink.se
> > > > >
> > > > > Facebook <https://www.facebook.com/#!/tink.se> Linkedin
> > > > > <
> > > > >
> > > >
> > >
> >
> http://www.linkedin.com/company/2735919?trk=vsrp_companies_res_photo&trkInfo=VSRPsearchId%3A1057023381369207406670%2CVSRPtargetId%3A2735919%2CVSRPcmpt%3Aprimary
> > > > > >
> > > > >  Twitter <https://twitter.com/tink>
> > > > >
> > > >
> > >
> > >
> > >
> > > --
> > > *—-*
> > > *Todd Palino*
> > > Staff Site Reliability Engineer
> > > Data Infrastructure Streaming
> > >
> > >
> > >
> > > linkedin.com/in/toddpalino
> > >
> >
>
>
>
> --
> *—-*
> *Todd Palino*
> Staff Site Reliability Engineer
> Data Infrastructure Streaming
>
>
>
> linkedin.com/in/toddpalino
>


how to reset kafka offset in zookeeper

2015-12-18 Thread Akhilesh Pathodia
Hi,

I want to reset the kafka offset in zookeeper so that the consumer will
start reading messages from the first offset. I am using Flume as a consumer
to kafka. I have set the kafka property kafka.auto.offset.reset to
"smallest", but it does not reset the offset in zookeeper, and that's why
Flume will not read messages from the first offset.

Is there any way to reset kafka offset in zookeeper?
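
As noted in the replies above, with the old ZooKeeper-based consumer the
usual approach is to stop the consumer, delete its group from ZooKeeper, and
restart it with auto.offset.reset=smallest. For completeness, a minimal
sketch of the equivalent rewind with the new (0.9+) Java consumer API that
Jens mentions; broker address, group id, and topic are placeholders, and the
seekToBeginning(Collection) signature is as of 0.10:

import java.util.Arrays;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class RewindToBeginning {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // placeholder broker
        props.put("group.id", "my-group");                // placeholder group id
        props.put("key.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");

        KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
        consumer.subscribe(Arrays.asList("my-topic"));    // placeholder topic
        consumer.poll(0);                                 // join the group and receive an assignment
        consumer.seekToBeginning(consumer.assignment());  // rewind every assigned partition
        ConsumerRecords<String, String> records = consumer.poll(1000);
        // ... process records from the earliest available offset ...
        consumer.close();
    }
}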

Thanks,
Akhilesh