The producer refreshes its metadata every metadata.max.age.ms (default
5 min) to discover new partitions.
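For example, you can lower that refresh interval so new partitions get
picked up sooner. A minimal sketch (not from this thread; the broker
address and serializers are placeholder assumptions):

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.common.serialization.StringSerializer;

Properties props = new Properties();
props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder
props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
// Refresh metadata every 60s instead of the 5-minute default so newly
// added partitions are discovered sooner.
props.put(ProducerConfig.METADATA_MAX_AGE_CONFIG, "60000");
KafkaProducer<String, String> producer = new KafkaProducer<>(props);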
On Sat, Mar 25, 2017 at 2:22 AM, Robert Quinlivan wrote:
> Hello,
>
> I have added partitions to a topic. The new partitions appear in the
> consumer assignments and in
Hello,
I have added partitions to a topic. The new partitions appear in the
consumer assignments, and in the topic listing they have the correct number
of ISRs. However, the producer still does not write to the new partitions.
My producer writes in a round-robin fashion, using the Cluster's
Hi Jon,
This is expected, see this:
https://groups.google.com/forum/?pli=1#!searchin/confluent-platform/migrated$20to$20another$20instance%7Csort:relevance/confluent-platform/LglWC_dZDKw/qsPuCRT_DQAJ
I've set up a KTable as follows:

KTable outTable = sourceStream.groupByKey()
    .reduce(rowReducer,
            TimeWindows.of(5 * 60 * 1000L).advanceBy(1 * 60 * 1000L).until(10 * 60 * 1000L),
            "AggStore");
I can confirm its presence via 'streams.allMetadata()'
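Once the store is up you can query it by name. A sketch, assuming the
key/value types (String keys and a Row value type are illustrative, not
from the original code):

import org.apache.kafka.streams.KeyValue;
import org.apache.kafka.streams.state.QueryableStoreTypes;
import org.apache.kafka.streams.state.ReadOnlyWindowStore;
import org.apache.kafka.streams.state.WindowStoreIterator;

ReadOnlyWindowStore<String, Row> store =
    streams.store("AggStore", QueryableStoreTypes.windowStore());
// Fetch every window for a key whose start time falls in [from, to].
WindowStoreIterator<Row> iter = store.fetch("someKey", from, to);
while (iter.hasNext()) {
    KeyValue<Long, Row> entry = iter.next(); // entry.key is the window start timestamp
    // ... use entry.value
}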
lol...well, I take it all back. Now I can't get it to work at all :(
Here's what I have:
*consumer.properties*
zookeeper.connect=[server_list]
# I was changing the group.id each time in case that was causing some issues
group.id=MirrorMakerTest7
client.id=MirrorMakerConsumer
Hi,
Kafka has partitions and Akka can do parallel processing.
I have one perfect use case where I have to read data in parallel.
But it seems like a partition does not give me any extra info other than
the partition number, so how do I make sure that data_x always goes to
the x-partition next time and
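One way to pin keys like data_x to partition x is a custom partitioner on
the producer side. A hypothetical sketch (the class name and key format
are assumptions, not from this thread):

import java.util.Map;
import org.apache.kafka.clients.producer.Partitioner;
import org.apache.kafka.common.Cluster;

public class SuffixPartitioner implements Partitioner {
    // Routes a key of the form "data_<n>" to partition n (mod partition count).
    @Override
    public int partition(String topic, Object key, byte[] keyBytes,
                         Object value, byte[] valueBytes, Cluster cluster) {
        int numPartitions = cluster.partitionsForTopic(topic).size();
        int n = Integer.parseInt(((String) key).substring("data_".length()));
        return n % numPartitions;
    }

    @Override public void close() {}
    @Override public void configure(Map<String, ?> configs) {}
}

Register it with props.put("partitioner.class", SuffixPartitioner.class.getName());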
Hi Karthik,
I think in the current trunk we already do effectively load balance across
processes (they are referred to as "clients" in the partition assignor).
More specifically:
1. Consumer clients embed a "client UUID" in their subscription so that
the leader can group them into a single client,
You make some great cases for your architecture. To be clear - I've been
proselytizing for Kafka since I joined this company last year. I think my
largest issue is rethinking some preexisting notions about streaming to
make them work in the KStream universe.
On Fri, Mar 24, 2017 at 6:07 AM,
Again, thank you for the feedback. That link was very helpful!
I adjusted my consumer/producer configs to be:
consumer:
zookeeper.connect=[server_list_here]
group.id=MirrorMaker
exclude.internal.topics=true
client.id=MirrorMakerConsumer
producer:
metadata.broker.list=[server_list_here]
The producer distributes non-keyed messages to the available target
partitions in a round-robin fashion.
You don't need to set the num.consumer.fetchers or
partition.assignment.strategy props.
Use the --num.streams option to specify the number of consumer threads to
create.
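Putting that together, the invocation would look something like this (file
names and stream count are placeholders):

bin/kafka-mirror-maker.sh --consumer.config consumer.properties \
    --producer.config producer.properties \
    --whitelist=".*" --num.streams 4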
Thanks very much for the reply, Manikumar!
I found that there were a few topics on the source cluster that had more
than two partitions, but all topics on the target cluster had two
partitions. I did a test with one topic that had two partitions on each
cluster, and I did get messages in both partitions as expected.
Cool - thanks for clarifying this!
On Thu, Mar 23, 2017 at 10:54 AM, Ismael Juma wrote:
> Hi Kostas,
>
> Yes, equal is fine. The code prints an error if replication fails due
> to this:
>
> error(s"Replication is failing due to a message that is greater than
>
> If I understand this correctly: assuming I have a simple aggregator
> distributed across n Docker instances, each instance will _also_ need to
> support some sort of communications process for allowing access to its
> statestore (the last param of KStream.groupBy().aggregate()).
Yes.
See
Hi,
I am passing a key in the producer, but still there is no change in partition.
I can see the key value in the producer response, but no change in partition.
*This is how my props look:*
props.put("bootstrap.servers", "localhost:9092");
props.put("group.id", "test");
If I understand this correctly: assuming I have a simple aggregator
distributed across n Docker instances, each instance will _also_ need to
support some sort of communications process for allowing access to its
statestore (the last param of KStream.groupBy().aggregate()).
How would one go about
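One common pattern, sketched with the interactive-queries API (the store
name, types, and RPC layer are assumptions): set application.server to
this instance's host:port, use metadataForKey to find which instance owns
a key, query locally if it's us, and otherwise forward over your own
HTTP/RPC endpoint.

import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.state.HostInfo;
import org.apache.kafka.streams.state.QueryableStoreTypes;
import org.apache.kafka.streams.state.ReadOnlyKeyValueStore;
import org.apache.kafka.streams.state.StreamsMetadata;

HostInfo thisHost = new HostInfo("localhost", 8080); // our application.server value (placeholder)
StreamsMetadata meta =
    streams.metadataForKey("agg-store", key, Serdes.String().serializer());
if (thisHost.equals(meta.hostInfo())) {
    // The key lives in our local store; query it directly.
    ReadOnlyKeyValueStore<String, Long> store =
        streams.store("agg-store", QueryableStoreTypes.keyValueStore());
    Long value = store.get(key);
} else {
    // Forward the query to meta.hostInfo() over your own endpoint.
}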
If I run 3 brokers in a cluster on localhost, the CPU usage is virtually
zero. Not sure why in other environments the minimum usage of each broker
is at least 13% (with zero producers/consumers); that doesn't sound normal.
On Thu, Mar 23, 2017 at 4:48 PM, Paul van der Linden
>> It wants to extract Avro schema from ORC record.
Should say: It wants to extract the Connect schema from the ORC record.
On Thu, Mar 23, 2017 at 11:14 PM, Manoj Murumkar wrote:
> Hi,
>
> I am developing a connector to support the ORC data type in the HDFS connector.
> Everything
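For what it's worth, a tiny sketch of building a Connect schema by hand
(the field names and types are hypothetical, just to show the
SchemaBuilder shape an ORC mapping would feed into):

import org.apache.kafka.connect.data.Schema;
import org.apache.kafka.connect.data.SchemaBuilder;
import org.apache.kafka.connect.data.Struct;

Schema schema = SchemaBuilder.struct().name("OrcRecord")
    .field("name", Schema.STRING_SCHEMA)
    .field("count", Schema.INT32_SCHEMA)
    .build();
Struct value = new Struct(schema)
    .put("name", "a")
    .put("count", 1);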