Re: Kafka Streams: Is it possible to set heap size for the JVMs running Kafka Streams?

2017-04-06 Thread Tianji Li
Thanks very much, Eno, for the prompt help. Much appreciated!

Tianji

On Thu, Apr 6, 2017 at 7:23 PM, Eno Thereska  wrote:

> Hi Tianji,
>
> Yes, with some caveats. First, to set the heap size of the JVM, you can
> treat it just like any other Java program, e.g., see
> http://stackoverflow.com/questions/27681511/how-do-i-set-the-java-options-for-kafka
> (the answer that starts with "You can pass java parameters from command
> line").
>
> For a guideline on Streams-specific settings, see this thread:
> https://groups.google.com/forum/?pli=1#!searchin/confluent-platform/heap$20size$20streams%7Csort:relevance/confluent-platform/jazg8f-qn0M/hulJwk60AAAJ
>
> One caveat is that RocksDb uses off-heap memory, but that can also be
> tuned.
>
> Thanks
> Eno
>
> > On 6 Apr 2017, at 14:23, Tianji Li  wrote:
> >
> > If so, how to?
> >
> > Thanks
> > Tianji
>
>


Are consumer coordinators not replicated in case of failure?

2017-04-06 Thread Daniel Hinojosa
Hey all,

Question: I have three brokers. I also have 3 consumers, each on its own
thread, consuming the 3 partitions of the same topic "scaled-states". Here
are the configs when I run:

kafka-topics.sh --describe --topic 'scaled-cities' --zookeeper zoo2:2181

Topic:scaled-cities PartitionCount:3 ReplicationFactor:3 Configs:

Topic: scaled-cities Partition: 0 Leader: 1 Replicas: 0,1,2 Isr: 1,2,0

Topic: scaled-cities Partition: 1 Leader: 1 Replicas: 1,2,0 Isr: 1,2,0

Topic: scaled-cities Partition: 2 Leader: 2 Replicas: 2,0,1 Isr: 1,2,0

My consumer loop uses all three brokers for failover:

props.put("bootstrap.servers", "kaf0:9092, kaf1:9092, kaf2:9092");

But when I stop the broker that happens to be the consumer coordinator,
in this case kaf0, the consumers stop and no longer consume the
messages, and the following is displayed:

[pool-1-thread-1] INFO
org.apache.kafka.clients.consumer.internals.AbstractCoordinator -
Marking the coordinator kaf0:9092 (id: 2147483647 rack: null) dead for
group consumerGroupAlpha

Does anyone know why the consumer coordinator does not move to another
broker to help out? These consumers will not process any more messages
for this group until I turn that one broker back on, and this doesn't
feel like it should be by design.
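A possible explanation (my assumption, not something confirmed in this thread): the group coordinator is the broker leading the partition of the internal __consumer_offsets topic that the group id hashes to. If that internal topic was created with a replication factor of 1, the coordinator role cannot move when its broker dies. A sketch of the hashing, mirroring the modulo scheme Kafka uses (the default partition count of 50 and the method name are illustrative):

```java
public class CoordinatorPartition {
    // Illustrative sketch: map a consumer group id to a partition of the
    // internal __consumer_offsets topic. The broker leading that partition
    // acts as the group's coordinator.
    static int coordinatorPartition(String groupId, int numOffsetsPartitions) {
        // Mask to a non-negative value, then take the modulo.
        return (groupId.hashCode() & 0x7fffffff) % numOffsetsPartitions;
    }

    public static void main(String[] args) {
        int p = coordinatorPartition("consumerGroupAlpha", 50);
        System.out.println("coordinator partition: " + p);
    }
}
```

If the only replica of that partition lives on kaf0, the group is stuck until kaf0 returns; checking the replication factor of __consumer_offsets with kafka-topics.sh --describe would confirm or rule this out.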

Thanks, and I appreciate the help.


recommendation on kafka go client : sarama or confluent-kafka-go

2017-04-06 Thread Yu Yang
Hi all,

We need to write a service in Go to consume from Kafka. There are two
popular Kafka clients: sarama and confluent-kafka-go.
The sarama client is written purely in Go, which allows it to show the full
Go stack trace in case of an error; that can be helpful for debugging.
Confluent-kafka-go wraps librdkafka, which can potentially provide better
performance. Any suggestions on choosing between these two?

Thanks!

Regards,
-Yu


Re: Kafka connector

2017-04-06 Thread Hans Jespersen
If you want both N2 and N3 to get all the same messages (rather than each
getting an exclusive partitioned subset of the data), then you need to
configure N2 and N3 to be in distinct Kafka consumer groups, which I believe
is driven off the "name" of the N2 and N3 connectors. Make sure N2 and N3
have different names.
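For illustration (file names, topic, and paths below are hypothetical), two standalone FileStreamSink configs that differ in name would then consume independently and each receive every message:

```properties
# n2-sink.properties (hypothetical)
name=file-sink-n2
connector.class=org.apache.kafka.connect.file.FileStreamSinkConnector
tasks.max=1
topics=my-topic
file=/tmp/output-n2.txt

# n3-sink.properties (hypothetical)
name=file-sink-n3
connector.class=org.apache.kafka.connect.file.FileStreamSinkConnector
tasks.max=1
topics=my-topic
file=/tmp/output-n3.txt
```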

-hans

/**
 * Hans Jespersen, Principal Systems Engineer, Confluent Inc.
 * h...@confluent.io (650)924-2670
 */

On Thu, Apr 6, 2017 at 4:26 PM, Tushar Sudhakar Jee 
wrote:

> Hello Sir/Ma'am,
> I was trying to set up a simple use case of the Kafka connector. My setup
> involves three nodes: N1, N2 and N3.
> N1 is the source and N2, N3 are the sink nodes in my case.
> I am writing data to a text file (say input.txt) on node N1, and using the
> standalone Kafka connector I wish to see a text file with content similar
> to input.txt on nodes N2 and N3.
>
> I am using the REST API to make changes to the topic name, file name and
> tasks.max.
>
> However, during the experiments I ran, I was unable to get a complete copy
> of input.txt on both nodes (N2 and N3) at the same time.
> Also, tuning the value of tasks.max on nodes N2 and N3 for the sink
> connector decided which node the data would be sent to.
>
> So, my question is whether I am wrong to expect such an outcome.
> If so, what should I expect as the result of the experiment?
> If not, how do I get my desired outcome?
>
>
> Regards,
>
> --
>
> *Tushar Sudhakar Jee* | Software Engineer
>
> c 424.535.8225 | tus...@levyx.com
>
> 49 Discovery, Suite #220
>
> Irvine, CA 92618
>
> www.levyx.com
>
>
> Levyx | 49 Discovery, Suite #220 | Irvine, CA 92618 | www.levyx.com
>
> This email and any files transmitted with it are confidential and intended
> solely for the use of the individual or entity to whom they are addressed.
> If you have received this email in error please let us know by e-mail
> reply and delete it from your system; copying this message or disclosing
> its contents to anyone is strictly prohibited.
>


Kafka connector

2017-04-06 Thread Tushar Sudhakar Jee
Hello Sir/Ma'am,
I was trying to set up a simple use case of the Kafka connector. My setup
involves three nodes: N1, N2 and N3.
N1 is the source and N2, N3 are the sink nodes in my case.
I am writing data to a text file (say input.txt) on node N1, and using the
standalone Kafka connector I wish to see a text file with content similar
to input.txt on nodes N2 and N3.

I am using the REST API to make changes to the topic name, file name and
tasks.max.

However, during the experiments I ran, I was unable to get a complete copy
of input.txt on both nodes (N2 and N3) at the same time.
Also, tuning the value of tasks.max on nodes N2 and N3 for the sink
connector decided which node the data would be sent to.

So, my question is whether I am wrong to expect such an outcome.
If so, what should I expect as the result of the experiment?
If not, how do I get my desired outcome?


Regards,

-- 

*Tushar Sudhakar Jee* | Software Engineer

c 424.535.8225 | tus...@levyx.com

49 Discovery, Suite #220

Irvine, CA 92618

www.levyx.com

-- 




Re: Kafka Streams: Is it possible to set heap size for the JVMs running Kafka Streams?

2017-04-06 Thread Eno Thereska
Hi Tianji,

Yes, with some caveats. First, to set the heap size of the JVM, you can treat 
it just like any other Java program, e.g., see 
http://stackoverflow.com/questions/27681511/how-do-i-set-the-java-options-for-kafka
(the answer that starts with "You can pass java parameters from command line").

For a guideline on Streams-specific settings, see this thread: 
https://groups.google.com/forum/?pli=1#!searchin/confluent-platform/heap$20size$20streams%7Csort:relevance/confluent-platform/jazg8f-qn0M/hulJwk60AAAJ
 

One caveat is that RocksDb uses off-heap memory, but that can also be tuned.
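To make the pointer above concrete (the jar and class names here are hypothetical), a Streams application is just a standalone JVM, so the usual flags apply:

```shell
# Illustrative values only, not recommendations:
java -Xms512m -Xmx2g -cp my-streams-app.jar com.example.MyStreamsApp
```

Note that -Xmx bounds only the heap; the RocksDB memory mentioned above is off-heap and must be budgeted separately.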

Thanks
Eno

> On 6 Apr 2017, at 14:23, Tianji Li  wrote:
> 
> If so, how to?
> 
> Thanks
> Tianji



Kafka Streams: Is it possible to set heap size for the JVMs running Kafka Streams?

2017-04-06 Thread Tianji Li
If so, how to?

Thanks
Tianji


Re: Topic deletion

2017-04-06 Thread Zakee
What version of Kafka are you on? There were issues with topic deletion until, 
I believe, Kafka 0.10.1.0.

You may be hitting this issue: https://issues.apache.org/jira/browse/KAFKA-2231



Thanks
Zakee



> On Apr 6, 2017, at 11:20 AM, Adrian McCague  wrote:
> 
> Hi Sachin,
> 
> I am told with confidence that the setting is present in server.properties (I 
> do not have access to the server).
> 
> I have used delete before without issue, and I attempted to use it when the 
> setting is false; the behaviour is different. I'm certain that running the 
> command multiple times always shows the warning that, without the setting 
> being true, the command has no effect. In this case it confirms that the 
> topic is marked for deletion.
> 
> Thanks
> Adrian
> 
> -Original Message-
> From: Sachin Mittal [mailto:sjmit...@gmail.com] 
> Sent: 06 April 2017 19:05
> To: users@kafka.apache.org
> Subject: Re: Topic deletion
> 
> Do you have delete.topic.enable=true uncommented or present in 
> server.properties
> 
> On Thu, Apr 6, 2017 at 11:03 PM, Adrian McCague 
> wrote:
> 
>> Hi all,
>> 
>> I am trying to understand topic deletion in kafka, there appears to be 
>> very little documentation or answers on how this works. Typically they 
>> just say to turn on the feature on the broker (in my case it is).
>> 
>> I executed:
>> Kafka-topics.bat -delete -zookeeper keeperhere -topic mytopic
>> 
>> Running this again yields:
>> Topic mytopic is already marked for deletion.
>> 
>> --describe yields:
>> Topic: mytopic  PartitionCount:6  ReplicationFactor:3  Configs: retention.ms=0
>>     Topic: mytopic  Partition: 0  Leader: -1  Replicas: 1006,1001,1005  Isr:
>>     Topic: mytopic  Partition: 1  Leader: -1  Replicas: 1001,1005,1003  Isr:
>>     Topic: mytopic  Partition: 2  Leader: -1  Replicas: 1005,1003,1004  Isr:
>>     Topic: mytopic  Partition: 3  Leader: -1  Replicas: 1003,1004,1007  Isr:
>>     Topic: mytopic  Partition: 4  Leader: -1  Replicas: 1004,1007,1006  Isr:
>>     Topic: mytopic  Partition: 5  Leader: -1  Replicas: 1007,1006,1001  Isr:
>> 
>> You can see that the deletion mark has meant that the Leader is -1.
>> Also I read somewhere that retention needs to be set to something low 
>> to trigger the deletion, hence the config of retention.ms=0
>> 
>> Consumers (or streams in my case) no longer see the topic:
>> org.apache.kafka.streams.errors.TopologyBuilderException: Invalid 
>> topology building: stream-thread [StreamThread-1] Topic not found: 
>> mytopic
>> 
>> And I can't create a new topic in its place:
>> [2017-04-06 18:26:00,702] ERROR 
>> org.apache.kafka.common.errors.TopicExistsException:
>> Topic 'mytopic' already exists. (kafka.admin.TopicCommand$)
>> 
>> I am a little lost as to where to go next, could someone explain how 
>> topic deletion is actually applied when a topic is 'marked' for 
>> deletion as that may help trigger it.
>> 
>> Thanks!
>> Adrian
>> 
>> 





RE: Topic deletion

2017-04-06 Thread Adrian McCague
Hi Sachin,

I am told with confidence that the setting is present in server.properties (I 
do not have access to the server).

I have used delete before without issue, and I attempted to use it when the 
setting is false; the behaviour is different. I'm certain that running the 
command multiple times always shows the warning that, without the setting 
being true, the command has no effect. In this case it confirms that the topic 
is marked for deletion.

Thanks
Adrian

-Original Message-
From: Sachin Mittal [mailto:sjmit...@gmail.com] 
Sent: 06 April 2017 19:05
To: users@kafka.apache.org
Subject: Re: Topic deletion

Do you have delete.topic.enable=true uncommented or present in server.properties?

On Thu, Apr 6, 2017 at 11:03 PM, Adrian McCague 
wrote:

> Hi all,
>
> I am trying to understand topic deletion in kafka, there appears to be 
> very little documentation or answers on how this works. Typically they 
> just say to turn on the feature on the broker (in my case it is).
>
> I executed:
> Kafka-topics.bat -delete -zookeeper keeperhere -topic mytopic
>
> Running this again yields:
> Topic mytopic is already marked for deletion.
>
> --describe yields:
> Topic: mytopic  PartitionCount:6  ReplicationFactor:3  Configs: retention.ms=0
>     Topic: mytopic  Partition: 0  Leader: -1  Replicas: 1006,1001,1005  Isr:
>     Topic: mytopic  Partition: 1  Leader: -1  Replicas: 1001,1005,1003  Isr:
>     Topic: mytopic  Partition: 2  Leader: -1  Replicas: 1005,1003,1004  Isr:
>     Topic: mytopic  Partition: 3  Leader: -1  Replicas: 1003,1004,1007  Isr:
>     Topic: mytopic  Partition: 4  Leader: -1  Replicas: 1004,1007,1006  Isr:
>     Topic: mytopic  Partition: 5  Leader: -1  Replicas: 1007,1006,1001  Isr:
>
> You can see that the deletion mark has meant that the Leader is -1.
> Also I read somewhere that retention needs to be set to something low 
> to trigger the deletion, hence the config of retention.ms=0
>
> Consumers (or streams in my case) no longer see the topic:
> org.apache.kafka.streams.errors.TopologyBuilderException: Invalid 
> topology building: stream-thread [StreamThread-1] Topic not found: 
> mytopic
>
> And I can't create a new topic in its place:
> [2017-04-06 18:26:00,702] ERROR 
> org.apache.kafka.common.errors.TopicExistsException:
> Topic 'mytopic' already exists. (kafka.admin.TopicCommand$)
>
> I am a little lost as to where to go next, could someone explain how 
> topic deletion is actually applied when a topic is 'marked' for 
> deletion as that may help trigger it.
>
> Thanks!
> Adrian
>
>


Replica selection at the time of topic creation

2017-04-06 Thread Ramanan, Buvana (Nokia - US/Murray Hill)
Hello,

I am trying to understand the algorithm used to choose the leader of a topic 
partition at creation time, in rack-unaware mode.

I went through TopicCommand.scala and AdminUtils.scala, especially the 
assignReplicasToBrokersRackUnaware() function (pasted below for convenience), 
and figured that the leader for a partition is chosen randomly, the first 
follower is chosen based on an initial random shift from the leader, and the 
other followers are chosen from the first follower by incrementing the first 
follower's id.

Can someone please confirm that my understanding is correct?

I note that when creating a topic with N partitions, the assignment algorithm 
arrives at a balanced distribution of partitions across the brokers, whereas 
creating N topics with 1 partition each may NOT result in a balanced 
distribution of topics across the brokers.

The next part of my question is: are there plans to include an algorithm that 
does the leader and follower assignment based on the prevailing load of the 
brokers? Suppose I add a new broker to the cluster and start creating topics; 
I would expect the new broker to be the leader for the newly created topics. 
However, in my experiments this does not turn out to be the case; there is no 
bias towards the new broker as the leader.

Are there any ways to introduce a bias towards choosing the new broker as the 
leader for the next several topics created?

I understand that one can (a) specify the replica list for each topic during 
topic creation, (b) use kafka-reassign-partitions.sh to reassign partitions, 
and (c) set auto.leader.rebalance.enable=true.
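As a sketch of option (a) (topic name, host, and broker ids below are hypothetical), kafka-topics.sh accepts an explicit assignment in which the first replica listed for each partition becomes the preferred leader, so listing the new broker first biases leadership toward it:

```shell
# Hypothetical: make new broker 3 the preferred leader of all 3 partitions
bin/kafka-topics.sh --create --zookeeper zoo:2181 --topic biased-topic \
  --replica-assignment 3:0:1,3:1:2,3:2:0
```

Each comma-separated group is one partition's replica list, colon-separated, in preference order.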

Thanks,
Buvana

  private def assignReplicasToBrokersRackUnaware(nPartitions: Int,
                                                 replicationFactor: Int,
                                                 brokerList: Seq[Int],
                                                 fixedStartIndex: Int,
                                                 startPartitionId: Int): Map[Int, Seq[Int]] = {
    val ret = mutable.Map[Int, Seq[Int]]()
    val brokerArray = brokerList.toArray
    val startIndex = if (fixedStartIndex >= 0) fixedStartIndex else rand.nextInt(brokerArray.length)
    var currentPartitionId = math.max(0, startPartitionId)
    var nextReplicaShift = if (fixedStartIndex >= 0) fixedStartIndex else rand.nextInt(brokerArray.length)
    for (_ <- 0 until nPartitions) {
      if (currentPartitionId > 0 && (currentPartitionId % brokerArray.length == 0))
        nextReplicaShift += 1
      val firstReplicaIndex = (currentPartitionId + startIndex) % brokerArray.length
      val replicaBuffer = mutable.ArrayBuffer(brokerArray(firstReplicaIndex))
      for (j <- 0 until replicationFactor - 1)
        replicaBuffer += brokerArray(replicaIndex(firstReplicaIndex, nextReplicaShift, j, brokerArray.length))
      ret.put(currentPartitionId, replicaBuffer)
      currentPartitionId += 1
    }
    ret
  }




Re: Topic deletion

2017-04-06 Thread Sachin Mittal
Do you have delete.topic.enable=true uncommented or present in
server.properties?

On Thu, Apr 6, 2017 at 11:03 PM, Adrian McCague 
wrote:

> Hi all,
>
> I am trying to understand topic deletion in kafka, there appears to be
> very little documentation or answers on how this works. Typically they just
> say to turn on the feature on the broker (in my case it is).
>
> I executed:
> Kafka-topics.bat -delete -zookeeper keeperhere -topic mytopic
>
> Running this again yields:
> Topic mytopic is already marked for deletion.
>
> --describe yields:
> Topic: mytopic  PartitionCount:6  ReplicationFactor:3  Configs: retention.ms=0
>     Topic: mytopic  Partition: 0  Leader: -1  Replicas: 1006,1001,1005  Isr:
>     Topic: mytopic  Partition: 1  Leader: -1  Replicas: 1001,1005,1003  Isr:
>     Topic: mytopic  Partition: 2  Leader: -1  Replicas: 1005,1003,1004  Isr:
>     Topic: mytopic  Partition: 3  Leader: -1  Replicas: 1003,1004,1007  Isr:
>     Topic: mytopic  Partition: 4  Leader: -1  Replicas: 1004,1007,1006  Isr:
>     Topic: mytopic  Partition: 5  Leader: -1  Replicas: 1007,1006,1001  Isr:
>
> You can see that the deletion mark has meant that the Leader is -1.
> Also I read somewhere that retention needs to be set to something low to
> trigger the deletion, hence the config of retention.ms=0
>
> Consumers (or streams in my case) no longer see the topic:
> org.apache.kafka.streams.errors.TopologyBuilderException: Invalid
> topology building: stream-thread [StreamThread-1] Topic not found: mytopic
>
> And I can't create a new topic in its place:
> [2017-04-06 18:26:00,702] ERROR 
> org.apache.kafka.common.errors.TopicExistsException:
> Topic 'mytopic' already exists. (kafka.admin.TopicCommand$)
>
> I am a little lost as to where to go next, could someone explain how topic
> deletion is actually applied when a topic is 'marked' for deletion as that
> may help trigger it.
>
> Thanks!
> Adrian
>
>


NotLeaderForPartitionException in 0.8.x

2017-04-06 Thread Moiz Raja
Hi All,

I understand that we get a NotLeaderForPartitionException when the client tries 
to publish a message to a server that is not the leader for a given partition. 
I also understand that once leadership changes, the client will learn that and 
start publishing messages to the right server. I am, however, seeing that 
sometimes, even after leadership has changed, the client continues to send 
messages to the wrong server, resulting in it continuously getting 
NotLeaderForPartitionException.

I am not sure how to reproduce this problem because we have only seen it 
sporadically in certain production deployments. When we get into this 
situation, one step that usually works is to restart ZooKeeper, Kafka and the 
client. Obviously this is not acceptable. Is there a workaround I could 
implement that would help remediate this situation? For example, would 
re-creating the KafkaProducer help in this scenario?
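Not an authoritative fix, but one mitigation to try before recreating the producer (the config names are from the 0.8.2+ Java producer; verify them against your client version) is to let the client retry and refresh metadata more aggressively:

```java
import java.util.Properties;

public class ProducerRetryConfig {
    // Sketch (assumed config names; check your 0.8.x client docs):
    static Properties retryFriendlyConfig() {
        Properties props = new Properties();
        props.put("retries", "5");                 // retry sends that fail with NotLeaderForPartitionException
        props.put("retry.backoff.ms", "500");      // back off before retrying, letting metadata refresh
        props.put("metadata.max.age.ms", "60000"); // refresh metadata periodically even without errors
        return props;
    }

    public static void main(String[] args) {
        System.out.println(retryFriendlyConfig());
    }
}
```

If the client still sticks to a stale leader, constructing a new KafkaProducer does force a fresh metadata bootstrap, so it can work as a last-resort remediation.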

Thanks,
-Moiz

Configuring consumer client to throw an exception when Kafka fails

2017-04-06 Thread Todd Nine
Hi all,
  We're trying to make our Kafka consumers fail fast rather than run
forever. We're using Kubernetes to run our containers, taking a fail-fast
approach and letting K8s manage restarts.

What I want to happen

1) Start my consumer. This connects to Kafka
2) Kill my Kafka cluster to cause a connection failure
3) Kafka consumer fails on the call to .poll()

What actually happens

1) Start my consumer. This connects to Kafka
2) Kill my Kafka cluster to cause a connection failure
3) Kafka consumer just prints log messages, and no exception is ever thrown.

I can't seem to find how to modify this behavior in the consumer properties
documentation.  Is it possible, and if so, what settings do I tweak?
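As far as I know the 0.10-era consumer has no "throw after N failures" switch; a common pattern instead is to fail fast from outside the poll loop. A hypothetical watchdog sketch (class name and thresholds are made up):

```java
import java.util.concurrent.atomic.AtomicLong;

// Hypothetical fail-fast watchdog: the poll loop "heartbeats" after every
// successful iteration; a liveness check (e.g. a Kubernetes probe hitting a
// health endpoint) treats a stale heartbeat as failure and restarts the pod.
public class PollWatchdog {
    private final AtomicLong lastPollMs = new AtomicLong(System.currentTimeMillis());
    private final long maxSilenceMs;

    public PollWatchdog(long maxSilenceMs) {
        this.maxSilenceMs = maxSilenceMs;
    }

    // Call at the end of each poll-loop iteration.
    public void heartbeat() {
        lastPollMs.set(System.currentTimeMillis());
    }

    // Consulted by the liveness probe.
    public boolean healthy() {
        return System.currentTimeMillis() - lastPollMs.get() < maxSilenceMs;
    }
}
```

The consumer thread calls heartbeat() each loop; when the broker is gone and poll() only logs, the heartbeat goes stale, healthy() flips to false, and K8s restarts the container instead of the client hanging forever.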

Thanks,
Todd


Topic deletion

2017-04-06 Thread Adrian McCague
Hi all,

I am trying to understand topic deletion in Kafka; there appears to be very 
little documentation on how this works. Typically the answers just say to turn 
on the feature on the broker (in my case it is on).

I executed:
Kafka-topics.bat -delete -zookeeper keeperhere -topic mytopic

Running this again yields:
Topic mytopic is already marked for deletion.

--describe yields:
Topic: mytopic  PartitionCount:6  ReplicationFactor:3  Configs: retention.ms=0
    Topic: mytopic  Partition: 0  Leader: -1  Replicas: 1006,1001,1005  Isr:
    Topic: mytopic  Partition: 1  Leader: -1  Replicas: 1001,1005,1003  Isr:
    Topic: mytopic  Partition: 2  Leader: -1  Replicas: 1005,1003,1004  Isr:
    Topic: mytopic  Partition: 3  Leader: -1  Replicas: 1003,1004,1007  Isr:
    Topic: mytopic  Partition: 4  Leader: -1  Replicas: 1004,1007,1006  Isr:
    Topic: mytopic  Partition: 5  Leader: -1  Replicas: 1007,1006,1001  Isr:

You can see that the deletion mark has meant that the Leader is -1.
Also I read somewhere that retention needs to be set to something low to 
trigger the deletion, hence the config of retention.ms=0

Consumers (or streams in my case) no longer see the topic:
org.apache.kafka.streams.errors.TopologyBuilderException: Invalid topology 
building: stream-thread [StreamThread-1] Topic not found: mytopic

And I can't create a new topic in its place:
[2017-04-06 18:26:00,702] ERROR 
org.apache.kafka.common.errors.TopicExistsException: Topic 'mytopic' already 
exists. (kafka.admin.TopicCommand$)

I am a little lost as to where to go next. Could someone explain how topic 
deletion is actually applied once a topic is 'marked' for deletion, as that 
may help me trigger it?
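Two checks that are sometimes suggested in this situation (hosts and paths below are illustrative; behavior varies by version): confirm the deletion is actually queued in ZooKeeper, and confirm every broker hosting a replica is up and started with the delete switch, since deletion only completes once all replica brokers respond:

```shell
# Topics awaiting deletion are recorded under this znode:
bin/zookeeper-shell.sh keeperhere:2181 ls /admin/delete_topics

# Broker-side prerequisite, in server.properties on every broker:
# delete.topic.enable=true
```

The Leader: -1 and empty Isr columns above would be consistent with replica brokers being unreachable, which stalls the deletion.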

Thanks!
Adrian



issues with leader election

2017-04-06 Thread Scoville, Tyler
HI,

I ran into an issue with Kafka where the leader was set to -1:

Topic:AP-100  PartitionCount:5  ReplicationFactor:3  Configs:retention.ms=1579929985
    Topic: AP-100  Partition: 0  Leader: 26  Replicas: 26,24,25  Isr: 24,26,25
    Topic: AP-100  Partition: 1  Leader: 22  Replicas: 22,25,26  Isr: 22,26,25
    Topic: AP-100  Partition: 2  Leader: 23  Replicas: 23,26,22  Isr: 26,22,23
    Topic: AP-100  Partition: 3  Leader: 24  Replicas: 24,22,23  Isr: 24,22,23
    Topic: AP-100  Partition: 4  Leader: -1  Replicas: 25,23,24  Isr: 24

What is the best way to fix this?
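A guess, not a definitive diagnosis: partition 4's preferred leader (25) and broker 23 have fallen out of the ISR, and the sole ISR member (24) was apparently unavailable when the election ran. The usual first steps, with illustrative commands:

```shell
# 1. Restore the unhealthy brokers (25, 23), then retry preferred election:
bin/kafka-preferred-replica-election.sh --zookeeper zoo:2181

# 2. Last resort, risks data loss: allow an out-of-sync replica to take
#    leadership by setting this on the broker:
#    unclean.leader.election.enable=true
```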

Thank you,

Tyler Scoville


Re: best practices to handle large messages

2017-04-06 Thread Vincent Dautremont
Hi,
you might be interested in this presentation:
https://www.slideshare.net/JiangjieQin/handle-large-messages-in-apache-kafka-58692297
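Beyond the deck above, the knobs usually raised together for ~2.5 MB messages are the following (values are illustrative; check the exact names for your broker and client version):

```properties
# Broker (server.properties)
message.max.bytes=5242880          # largest message the broker accepts
replica.fetch.max.bytes=5242880    # must be >= message.max.bytes so followers can replicate

# Producer
max.request.size=5242880

# Consumer (new consumer)
max.partition.fetch.bytes=5242880
```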

On Wed, Apr 5, 2017 at 1:27 AM, Mohammad Kargar  wrote:

> What are best practices to handle large messages (2.5 MB) in Kafka?
>
> Thanks,
> Mohammad
>
