I checked my target topic and I see fewer messages than in the source topic.
(If the source topic has 5 messages, I see 2 messages in my target topic.)
What settings do I need to change?
Also, when I try to consume a message from the target topic, I get a
ClassCastException.
java.lang.ClassCastException:
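A ClassCastException on consume usually means the configured deserializer does not match the type the producer wrote. As a hedged, broker-free illustration (plain Java serialization standing in for a Kafka SerDe; `MyEvent` is a hypothetical POJO, not a class from the thread):

```java
import java.io.*;

// Hypothetical POJO standing in for the custom message type.
class MyEvent implements Serializable {
    final String id;
    MyEvent(String id) { this.id = id; }
}

public class SerdeMismatchDemo {
    // Round-trip helpers; checked exceptions wrapped for brevity.
    static byte[] serialize(Object o) {
        try {
            ByteArrayOutputStream bos = new ByteArrayOutputStream();
            ObjectOutputStream out = new ObjectOutputStream(bos);
            out.writeObject(o);
            out.close();
            return bos.toByteArray();
        } catch (IOException e) { throw new RuntimeException(e); }
    }

    static Object deserialize(byte[] bytes) {
        try (ObjectInputStream in = new ObjectInputStream(new ByteArrayInputStream(bytes))) {
            return in.readObject();
        } catch (IOException | ClassNotFoundException e) { throw new RuntimeException(e); }
    }

    public static void main(String[] args) {
        Object value = deserialize(serialize(new MyEvent("e-1")));
        try {
            // Analogous to consuming a custom-typed topic with a String
            // deserializer configured: the cast to the wrong type fails.
            String s = (String) value;
            System.out.println(s);
        } catch (ClassCastException e) {
            System.out.println("mismatched deserializer -> ClassCastException");
        }
    }
}
```

In the real consumer, the fix is to set value.deserializer to the deserializer that matches the producer's serializer.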
Hi Andrew,
Thank you for your reply. I did some experiments with the setting acks ==
-1 and had a few seconds of downtime, but that's far better than losing
messages, so I will go with that. Thanks again for your help.
-- Justin
On Mon, Oct 10, 2016 at 6:49 AM, Andrew Grasso
Hi all;
I have a custom datatype defined (a POJO class).
I copy messages from one topic to another topic.
I do not see any messages in my target topic.
This works for string messages, but not for my custom message.
What might be the cause?
I followed this sample [1]
[1]
I checked my targetTopic for available messages, but it says '0'. What might
be causing the issue here when merging two topics with custom-type messages?
On 11 October 2016 at 14:44, Ratha v wrote:
> Thanks, this demo works perfectly for string messages.
>
> I have custom
Thanks, this demo works perfectly for string messages.
I have a custom message type defined (a Java POJO class), and I have a SerDe
implemented for it.
Now, after merging sourceTopic --> targetTopic,
I cannot consume the messages; the consumer does not return any
messages.
What might be the
The documentation is mostly fixed now: http://kafka.apache.org/
0101/documentation.html. Thanks to Derrick Or for all the help. Let me know
if anyone notices any additional problems.
-Jason
On Mon, Oct 10, 2016 at 1:10 PM, Jason Gustafson wrote:
> Hello Kafka users,
With that link I came across the producer-perf-test tool, which is quite
useful as it removes the Go (Sarama) variable and makes it quick to tweak
settings.
As you suggested Eno, I attempted to copy the LinkedIn settings. With 100
byte records, I get up to about 600,000
Hello Kafka users, developers and client-developers,
This is the second candidate for release of Apache Kafka 0.10.1.0. This is
a minor release that includes great new features including throttled
replication, secure quotas, time-based log searching, and queryable state
for Kafka Streams. A full
Hi,
I have a custom Kafka consumer which reads messages from a topic, hands over
the processing of the messages to a different thread, and while the messages
are being processed, it pauses the topic and keeps polling the Kafka topic (to
maintain heartbeats) and also commits offsets using
Sure, good ideas. I'll try multiple producers, localhost and LAN, to see if
there's any difference.
Yep, Gwen, the Sarama client. Anything to worry about there outside of
setting the producer configs (which would you set?) and the number of
buffered channels? (Currently, buffered channels up to 10k.)
Thanks!
Just note that, in general, doing what Todd advises is pretty risky.
We've seen controllers get into all kinds of weird situations when
topics were deleted from ZK directly (including getting stuck in an
infinite loop, deleting unrelated topics and all kinds of strangeness)
- we have no tests for
Out of curiosity - what is "Golang's Kafka interface"? Are you
referring to the Sarama client?
On Sun, Oct 9, 2016 at 9:28 AM, Christopher Stelly wrote:
> Hello,
>
> The last thread available regarding 10GBe is about 2 years old, with no
> obvious recommendations on tuning.
>
>
Hello David,
Your observation is correct, the stream time reasoning is dependent on the
buffered records from each of the input topic-partitions, and hence is
"data-driven".
Currently, to get around this, I'd recommend having the producer send
certain "marker" messages periodically to ensure
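To make the "data-driven" stream time concrete, here is a hedged, plain-Java sketch (not Kafka Streams internals): stream time is taken as the minimum of the newest timestamps buffered per input partition, so one idle partition stalls it until a periodic marker record on that partition lets it advance.

```java
import java.util.*;

// Toy model: track the latest timestamp observed per partition; "stream time"
// is the minimum across partitions, so an idle partition holds it back.
public class StreamTimeSketch {
    final Map<String, Long> latestTs = new HashMap<>();

    void observe(String partition, long timestamp) {
        latestTs.merge(partition, timestamp, Math::max);
    }

    long streamTime() {
        return latestTs.values().stream().mapToLong(Long::longValue).min().orElse(Long.MIN_VALUE);
    }

    public static void main(String[] args) {
        StreamTimeSketch s = new StreamTimeSketch();
        s.observe("p0", 100L);
        s.observe("p1", 100L);
        s.observe("p0", 500L);           // p1 is idle: stream time stuck at 100
        System.out.println(s.streamTime()); // prints 100
        s.observe("p1", 480L);           // periodic marker on p1 lets time advance
        System.out.println(s.streamTime()); // prints 480
    }
}
```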
Hi Chris,
I think the first step would be to set up the system so we can easily identify
the bottlenecks. With your setup I'm currently worried about 2 things:
1. the system is not being driven hard enough. In particular 1 producer might
not be enough. I'd recommend running 3 producer
> On Oct 6, 2016, at 09:59, Ismael Juma wrote:
>
> On Thu, Oct 6, 2016 at 2:51 PM, Avi Flax wrote:
>>
>> Does this mean that the next release (after 0.10.1.0, maybe ~Feb?) might
>> remove altogether the requirement that Streams apps be able to
Publisher/Subscriber systems can be divided into two categories.
1) Topic-based model
2) Content-based model - provides more precise results than the topic-based
model, since subscribers are interested in the content of a message rather
than in subscribing to a topic and getting all of its messages.
Kafka
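A consumer-side sketch of the content-based idea (hedged: plain Java, no Kafka APIs, hypothetical messages): every subscriber sees the whole topic but keeps only messages whose content matches its predicate.

```java
import java.util.*;
import java.util.function.Predicate;

// Sketch of content-based filtering layered on a topic-based feed: the
// subscriber receives every message and keeps only those matching a predicate.
public class ContentFilter {
    static List<String> filter(List<String> messages, Predicate<String> interest) {
        List<String> matched = new ArrayList<>();
        for (String m : messages) {
            if (interest.test(m)) matched.add(m);
        }
        return matched;
    }

    public static void main(String[] args) {
        List<String> topic = Arrays.asList("ERROR disk full", "INFO started", "ERROR timeout");
        // Subscriber interested only in error-level content.
        System.out.println(filter(topic, m -> m.startsWith("ERROR")).size()); // prints 2
    }
}
```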
Hi ,
I am using Kafka 0.9.0.x, and in a multithreaded system I have created only
one instance of the Kafka producer.
When I call producer.close(), it only closes communication, and the producer
can no longer send messages to a topic.
But why is its object still valid if we cannot send requests through it?
Moreover, I
Thanks for the responses. I was able to identify a bug in how the times are
obtained (offsets resolved as unknown cause the issue):
“Actually, I think the bug is more subtle. What happens when a consumed topic
stops receiving messages? The smallest timestamp will always be the static
timestamp
Hi,
We are doing some testing and need to frequently wipe out the kafka logs
and delete the topic completely. There is no problem starting/stopping
zookeeper and server multiple times.
So what is the best way of purging a topic and removing its reference from
ZooKeeper entirely?
I can physically
Hi Justin,
Setting the required acks to -1 does not require that all assigned brokers
are available, only that all members of the ISR are available. If a broker
goes down, the producer is able to commit messages once the faulty broker
is evicted from the ISR. This can continue even if only one
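As a hedged sketch of the producer settings this advice implies (the broker addresses are placeholders, and this is plain Java Properties rather than a full producer): acks=all (equivalent to -1) together with the broker-side min.insync.replicas bounds the trade-off described above.

```java
import java.util.Properties;

// Durability-leaning producer configuration; the keys are standard Kafka
// producer config names, the broker addresses are placeholders.
public class DurableProducerConfig {
    static Properties build() {
        Properties props = new Properties();
        props.put("bootstrap.servers", "broker1:9092,broker2:9092"); // placeholder
        props.put("acks", "all");  // same as -1: wait for every replica in the ISR
        props.put("retries", "3"); // retry transient failures instead of dropping
        // Note: min.insync.replicas is a broker/topic setting, not a producer
        // one; with acks=all it sets how far the ISR may shrink before sends fail.
        return props;
    }

    public static void main(String[] args) {
        System.out.println(build().getProperty("acks")); // prints all
    }
}
```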
Hi Sachin,
Yes, it will be called each time a key is modified; it will do this
continuously until you stop the app.
Eno
> On 9 Oct 2016, at 16:50, Sachin Mittal wrote:
>
> Hi,
> It is actually a KTable-KTable join.
>
> I have a stream (K1, A) which is aggregated as (Key,
Hi Syed,
Apache Kafka runs on a JVM. I think the question you should ask is -- which
JVM does Apache Kafka require in production*? It doesn't really depend on a
specific Linux distribution.
* ...and I don't have that answer ;-)
Cheers,
Jens
On Wednesday, October 5, 2016, Syed
Hi Guozhang,
First of all, thank you for answering my question and giving me some
suggestions. But I'm sure I have read some introductions to Kafka.
In my producer, my code is (C code):
res = rd_kafka_conf_set(kafka_conf, "request.required.acks", "-1", NULL, 0);
res = rd_kafka_topic_conf_set(topic_conf,
Aris,
I am not aware of an out-of-the-box tool for Pcap->Kafka ingestion (in my
case, back then, we wrote our own). Maybe others know.
On Monday, October 10, 2016, Aris Risdianto wrote:
> Thank you for your answer, Michael.
>
> Actually, I have made a simple producer from Pcap to
Aris,
even today you can already use Kafka to deliver Netflow/Pcap/etc. messages,
and people are already using it for that (I did that in previous projects
of mine, too).
Simply encode your Pcap/... messages appropriately (I'd recommend taking a
look at Avro, which allows you to structure your
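Following the Avro suggestion, a hedged sketch of what a packet-record schema could look like (every field name here is hypothetical, not taken from any existing tool):

```json
{
  "type": "record",
  "name": "PacketCapture",
  "namespace": "example.pcap",
  "fields": [
    {"name": "timestamp_micros", "type": "long"},
    {"name": "src_ip", "type": "string"},
    {"name": "dst_ip", "type": "string"},
    {"name": "src_port", "type": "int"},
    {"name": "dst_port", "type": "int"},
    {"name": "protocol", "type": "string"},
    {"name": "payload", "type": "bytes"}
  ]
}
```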
Hello,
Is there any plan or implementation to use Kafka for delivering
sFlow/NetFlow/Pcap messages?
Best Regards,
Aris.
Elias,
yes, that is correct.
I also want to explain why:
One can always convert a KTable to a KStream (note: this is the opposite
direction of what you want to do) because one only needs to iterate through
the table to generate the stream.
To convert a KStream into a KTable (what you want to
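A hedged, broker-free sketch of the stream-to-table direction (plain Java, not the Streams API): a table is just the latest value per key, so materializing a stream means replaying its records and keeping the last update for each key, with null values acting as deletes.

```java
import java.util.*;

// Toy model of KStream -> KTable materialization: replay (key, value) records
// in order; the last write per key wins, and a null value is a tombstone.
public class TableFromStream {
    static Map<String, String> materialize(List<String[]> changelog) {
        Map<String, String> table = new LinkedHashMap<>();
        for (String[] record : changelog) {
            if (record[1] == null) table.remove(record[0]); // tombstone deletes the key
            else table.put(record[0], record[1]);
        }
        return table;
    }

    public static void main(String[] args) {
        List<String[]> stream = Arrays.asList(
            new String[]{"alice", "1"},
            new String[]{"bob", "2"},
            new String[]{"alice", "3"});  // later update for "alice" wins
        System.out.println(materialize(stream)); // prints {alice=3, bob=2}
    }
}
```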
Depends on which partitioner you are using, see [1] and [2]. From what I
understand the `NewHashPartitioner` comes closest to the behavior of Kafka
Java producer, but instead of going round-robin for null-keyed messages it
picks a partition at random.
[1]
It seems to be using a Hash Partitioner here:
https://github.com/Shopify/sarama/blob/master/config.go#L262
and HashPartitioner is documented as:
> If the message's key is nil then a random partition is chosen
https://godoc.org/github.com/Shopify/sarama#example-Partitioner--Random
So.. it
> We have run the application (and have confirmed data is being received)
for over 30 mins…with a 60-second timer.
Ok, so your app does receive data but punctuate() still isn't being called.
:-(
> So, do we need to just rebuild our cluster with bigger machines?
That's worth trying out. See
Check this example
https://github.com/apache/kafka/blob/trunk/streams/examples/src/main/java/org/apache/kafka/streams/examples/pipe/PipeDemo.java
On Mon, Oct 10, 2016 at 11:34 AM, Ratha v wrote:
> Hi Sachin;
> I went through the KStream/KTable Documentation. My scenario
FYI: For Kafka's new Java producer (which ships with Kafka), the behavior is
as follows: If no partition is explicitly specified (to send the message to)
AND the key is null, then the DefaultPartitioner [1] will assign messages
to topic partitions in a round-robin fashion. See the javadoc and also
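A hedged sketch of that rule in plain Java (the real DefaultPartitioner hashes keys with murmur2; this toy version uses String.hashCode and only illustrates the null-key round-robin versus keyed-hash split):

```java
import java.util.concurrent.atomic.AtomicInteger;

// Toy partitioner: null key -> round-robin across partitions,
// non-null key -> deterministic hash so the same key always lands together.
public class PartitionChooser {
    final int numPartitions;
    final AtomicInteger counter = new AtomicInteger(0);

    PartitionChooser(int numPartitions) { this.numPartitions = numPartitions; }

    int partition(String key) {
        if (key == null) {
            return Math.abs(counter.getAndIncrement() % numPartitions); // round-robin
        }
        return Math.abs(key.hashCode() % numPartitions); // sticky per key
    }

    public static void main(String[] args) {
        PartitionChooser p = new PartitionChooser(3);
        System.out.println(p.partition(null)); // prints 0
        System.out.println(p.partition(null)); // prints 1
        System.out.println(p.partition(null)); // prints 2
        System.out.println(p.partition("order-42") == p.partition("order-42")); // prints true
    }
}
```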
Hi Sachin;
I went through the KStream/KTable documentation. My scenario is very
simple: I want to merge two topics (i.e., send the messages available in
topic A --> topic B; in my case I'll have only a single message in
topicA).
Do I need stateful processing (KStream)?
Thanks.
On 10