That sounds like a good suggestion. I'm actually looking at the code and I
will start another thread for questions about that.
On Tue, Nov 17, 2015 at 5:42 PM, Jason Gustafson wrote:
> Thanks for the explanation. Certainly you'd use fewer connections with this
> approach, but
Hey guys,
I saw that PartitionAssignor is not in the public API docs and its package
name is internals.
Does that mean this API is not stable and could change even in a minor
release?
Also, in the assign method signature, the key of the "subscription" map is
memberId. What is memberId, and can I manually
Hi,
I'm tracking the 0.9.0.0 Git tag and have a Java consumer using the new
API, but I'm seeing some strange issues. I run ZooKeeper and Kafka on my
own machine using the settings files in config/ and no authentication.
Build is done using Oracle JDK 8. I have 13 topics, each created with a
This was a brand new cluster, so 0 topics. Every broker had the same issue,
and it was all communication with itself. In any case, I deployed a later
cut and it started working.
Cheers,
Damian
On 18 November 2015 at 02:15, Jun Rao wrote:
> There is inter-broker
Hi Martin,
Thanks for reporting this problem. I think maybe we're just not doing a
very good job of handling auto-commit errors internally and they end up
spilling into user logs. I added a JIRA to address this issue:
https://issues.apache.org/jira/browse/KAFKA-2860.
-Jason
On Wed, Nov 18, 2015
Hello Martin,
Could you paste the consumer config values in this thread as well? And is
the consumer co-located with the broker?
Guozhang
On Wed, Nov 18, 2015 at 7:40 AM, Martin Skøtt <
martin.sko...@falconsocial.com> wrote:
> Hi,
>
> I'm tracking the 0.9.0.0 Git tag and have a Java consumer
Currently the whole KafkaConsumer interface is tagged as
"@InterfaceStability.Unstable", meaning that the API may change in the
future. We have been very careful to avoid making any dramatic public API
changes, but we still cannot guarantee this will not happen.
Member-Id is assigned by the server-side
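Since the member ID is generated server-side, an assignor cannot hardcode it; it just treats the keys of the subscription map as opaque strings. Here is a toy, self-contained sketch of round-robin assignment over those keys; plain strings stand in for the real Subscription/Assignment types from the internals package, so this is only an illustration of the shape of assign(), not the actual interface:

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.Map;
import java.util.TreeMap;

// Toy sketch: distribute partitions round-robin over sorted member IDs.
// "t0-0" etc. stand in for topic-partitions; members are opaque IDs.
public class RoundRobinSketch {
    static Map<String, List<String>> assign(List<String> memberIds,
                                            List<String> partitions) {
        Map<String, List<String>> assignment = new TreeMap<>();
        for (String m : memberIds) {
            assignment.put(m, new ArrayList<>());
        }
        // Sort member IDs so every member computes the same assignment.
        List<String> sorted = new ArrayList<>(memberIds);
        Collections.sort(sorted);
        for (int i = 0; i < partitions.size(); i++) {
            assignment.get(sorted.get(i % sorted.size())).add(partitions.get(i));
        }
        return assignment;
    }

    public static void main(String[] args) {
        Map<String, List<String>> a = assign(
                List.of("member-a", "member-b"),
                List.of("t0-0", "t0-1", "t0-2"));
        // member-a gets t0-0 and t0-2; member-b gets t0-1
        System.out.println(a);
    }
}
```

Because the IDs are assigned per-session by the coordinator, they can change across rebalances, which is why the assignor should only depend on their relative order, not their values.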
It turns out that "auto.create.topics.enable=true" was actually getting
overridden to false somewhere else, and ended up causing this issue.
On Tue, Nov 3, 2015 at 8:19 PM, Artem Ervits wrote:
> change the order of your commands
>
> *bin/kafka-console-producer.sh
Hi folks,
I've been chasing an issue for a bit now without much luck. We're seeing
occasional (1-2 times a day) pause times of 10+ seconds in a 0.8.2.0 broker
only handling ~3k messages/s. We're only seeing it on one node at a time in
a three node cluster, though which node is affected can change
Hi, all:
I'd like to share some updated information about Avro messages on Kafka.
An Avro message can include a schema ID instead of the full schema in each
message:
http://stackoverflow.com/questions/31204201/apache-kafka-with-avro-and-schema-repo-where-in-the-message-does-the-schema-id
sincerely,
Selina
On Wed, Nov
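The approach in the Stack Overflow link above prefixes each message with a small header so consumers can look the schema up by ID. A minimal self-contained sketch of that framing, assuming the common convention of one magic byte followed by a 4-byte big-endian schema ID (verify the exact layout against whatever registry you use):

```java
import java.nio.ByteBuffer;

// Sketch of "schema ID + payload" framing for Avro on Kafka:
// 1 magic byte (0), a 4-byte big-endian schema ID, then the Avro bytes.
public class SchemaIdFraming {
    static byte[] frame(int schemaId, byte[] avroPayload) {
        ByteBuffer buf = ByteBuffer.allocate(5 + avroPayload.length);
        buf.put((byte) 0);      // magic byte
        buf.putInt(schemaId);   // schema ID, big-endian
        buf.put(avroPayload);   // Avro-encoded record bytes
        return buf.array();
    }

    static int schemaIdOf(byte[] framed) {
        ByteBuffer buf = ByteBuffer.wrap(framed);
        buf.get();              // skip magic byte
        return buf.getInt();    // read schema ID back out
    }

    public static void main(String[] args) {
        byte[] msg = frame(42, new byte[] {1, 2, 3});
        System.out.println(schemaIdOf(msg)); // prints 42
    }
}
```

The payload itself would be the record serialized with Avro's binary encoder; only the 5-byte header is sketched here.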
In the new API, the explicit commit offset method call only works for a
subscribe consumer, not an assign consumer, correct?
Best,
Siyuan
It is used to carry any metadata that the leader wants to propagate to other
members while doing the rebalance. For example, in Kafka Streams the
userData contains the mapping of stream tasks to partition groups; in Kafka
Connect, different connectors can also use this field to fill in
app-specific
I want to change the partition assignment to spread the partitions across
two machines, since machine #1 is getting full on disk space.
I have Kafka Manager to make this easy. Is there any downtime when
re-assigning partitions? I assume Kafka builds up the new partitions and
then does a hit-less
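For reference, the stock bin/kafka-reassign-partitions.sh tool accomplishes the same thing with a JSON plan; a minimal sketch (the topic name and broker IDs here are made up):

```json
{
  "version": 1,
  "partitions": [
    {"topic": "my-topic", "partition": 0, "replicas": [1, 2]},
    {"topic": "my-topic", "partition": 1, "replicas": [2, 1]}
  ]
}
```

You would run it with --zookeeper, --reassignment-json-file, and --execute, then check progress with --verify; the broker copies the data to the new replicas in the background while the old ones keep serving.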
Thanks Guozhang, what is userData for in the Subscription?
On Wed, Nov 18, 2015 at 12:05 PM, Guozhang Wang wrote:
> Currently the whole KafkaConsumer interface is tagged as "
> @InterfaceStability.Unstable", meaning that the API may change in the
> future. We have been very
Hi Guozhang,
The consumer, broker, and ZooKeeper are all on the same machine - just
testing out Kafka at the moment :)
Here are the configuration values I set:
properties.setProperty("bootstrap.servers", "localhost:9092");
properties.setProperty("group.id", "xxx-yyy-reader");
Dear All:
I need to generate some data with Samza to Kafka and then write it to
Parquet format files. I was asked why I chose Avro as my Samza output to
Kafka instead of Protocol Buffers, since currently our data on Kafka is all
Protocol Buffer messages.
I explained that Avro
Hi,
How about this feature? Thanks.
"We do plan to allow the high level consumer to specify a starting offset
in the future when we revisit the consumer design. Some of the details are
described in
https://cwiki.apache.org/confluence/display/KAFKA/Consumer+Client+Re-Design"