Hello,
We have the following piece of code where we read lines from a file and push
them to a Kafka topic:
Properties properties = new Properties();
properties.put("bootstrap.servers", );
properties.put("key.serializer",
StringSerializer.class.getCanonicalName());
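For reference, a minimal self-contained sketch of such a producer is below; the broker address, topic name, and input file are placeholders, not values from the original message:

import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.Producer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class FileToTopic {
    public static void main(String[] args) throws Exception {
        Properties properties = new Properties();
        // Placeholders: substitute your own broker list, topic, and input file.
        properties.put("bootstrap.servers", "localhost:9092");
        properties.put("key.serializer", StringSerializer.class.getCanonicalName());
        properties.put("value.serializer", StringSerializer.class.getCanonicalName());

        try (Producer<String, String> producer = new KafkaProducer<>(properties)) {
            for (String line : Files.readAllLines(Paths.get("input.txt"))) {
                producer.send(new ProducerRecord<>("lines-topic", line));
            }
            producer.flush();
        }
    }
}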
Hi,
What I find is that when I use a sorted set for aggregation, it fails to
aggregate values whose compareTo returns 0.
My class is like this:
public class Message implements Comparable {
    public long ts;
    public String message;
    public Message() {}
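One likely cause is a compareTo that orders only by timestamp: a SortedSet treats compareTo == 0 as "duplicate" and keeps just one element. A minimal sketch illustrating this (the constructor and compareTo body here are assumptions, not copied from the original post):

import java.util.TreeSet;

public class Message implements Comparable<Message> {
    public long ts;
    public String message;

    public Message(long ts, String message) {
        this.ts = ts;
        this.message = message;
    }

    // If the ordering looks only at ts, two different messages with the same
    // timestamp compare as equal, and a SortedSet keeps only one of them.
    @Override
    public int compareTo(Message other) {
        return Long.compare(this.ts, other.ts);
    }

    public static void main(String[] args) {
        TreeSet<Message> agg = new TreeSet<>();
        agg.add(new Message(1L, "first"));
        agg.add(new Message(1L, "second")); // compareTo returns 0 -> dropped as a duplicate
        System.out.println(agg.size());     // prints 1, not 2
    }
}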
> On Nov 21, 2016, at 6:41 PM, Janagan Sivagnanasundaram
> wrote:
>
> Hi,
>
> Are there any libraries or any other way to implement a top-k quality
> extraction algorithm (top-k info per window/batch) on top of the existing Kafka,
> which enables the consumer to get the top-k
Hi all. I am using Kafka version 0.8.2.1. When a partition is being
transferred, the KafkaProducer needs about 5 minutes to recover. Do you know
how to fix this?
Thanks!
Hi, Mark,
So you did the manual leader election after the cluster was stabilized (i.e.,
all replicas were in sync)? Then it may not be related to the issue that I
described.
If there is just a single leadership change, what you described shouldn't
happen by design. I modified your steps to the
Jun,
Yeah, I probably have an off-by-one issue in the HW description. I
think you could pick any number here and the problem remains -- could
you read through the steps I posted and see if they logically make
sense, numbers aside?
We definitely lost data in 4 partitions of the 8,000 that were
Hi, Mark,
Hmm, the committing of a message at offset X is the equivalent of saying
that the HW is at offset X + 1. So, in your example, if the producer
publishes a new message at offset 37, this message won't be committed
(i.e., HW moves to offset 38) until the leader sees the follower fetch from
Jun,
Thanks for the reply!
I am aware the HW won't advance until the in-sync replicas have
_requested_ the messages. However, I believe the issue is that the
leader has no guarantee the replicas have _received_ the fetch response.
There is no second-phase to the commit.
So, in the particular
Hi Brian,
It sounds like you might want to do something like:
KTable inputOne = builder.table("input-one");
KTable inputTwo = builder.table("input-two");
KTable inputThree = builder.table("input-three");
ValueJoiner joiner1 = //...
ValueJoiner joiner2 = //...
inputOne.join(inputTwo, joiner1)
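The snippet above is cut off; below is a sketch of how the chained join might be completed. The key/value types, joiner bodies, and output topic are assumptions (the original message omits them), the surrounding StreamsConfig boilerplate is left out, and the exact builder.table() signatures vary slightly across 0.10.x releases (some require a state store name argument):

import org.apache.kafka.streams.kstream.KStreamBuilder;
import org.apache.kafka.streams.kstream.KTable;
import org.apache.kafka.streams.kstream.ValueJoiner;

KStreamBuilder builder = new KStreamBuilder();

KTable<String, String> inputOne = builder.table("input-one");
KTable<String, String> inputTwo = builder.table("input-two");
KTable<String, String> inputThree = builder.table("input-three");

ValueJoiner<String, String, String> joiner1 = (v1, v2) -> v1 + "," + v2;
ValueJoiner<String, String, String> joiner2 = (v12, v3) -> v12 + "," + v3;

inputOne.join(inputTwo, joiner1)      // join the first two tables on their shared key
        .join(inputThree, joiner2)    // then join the result with the third table
        .to("joined-output");         // assumed output topic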
Hey guys. I've been banging my head for about 3 days now trying to get a
streams application working with no luck. I've read through all of the
docs and examples I can find, and just am not getting it. I'm using
0.10.1 and have worked quite a bit with the high-level consumer and
publisher.
Hi,
We recently upgraded to 0.9.0.1, and pretty soon thereafter crashed
headlong into https://issues.apache.org/jira/browse/KAFKA-3594, which
involves crashing and possible data loss/duplication for us. Any
chance of getting a 0.9.0.2 happening soon?
Thanks!
ben osheroff
zendesk.com
One more thing to add:
You could build your own tool to set specific offsets for each partition.
The mentioned reset tool does something similar, setting the offset for
all partitions to zero. But you could just copy the code and enhance it
to set arbitrary offsets for each partition.
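For example, a sketch of what such a tool could look like with the plain Java consumer; the broker list, group id, topic, partition, and target offset below are placeholders, and the application must be stopped while its offsets are rewritten:

import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.consumer.OffsetAndMetadata;
import org.apache.kafka.common.TopicPartition;

public class SetOffsetsTool {
    public static void main(String[] args) {
        Properties props = new Properties();
        // group.id must match the consumer group (for Streams, the application.id).
        props.put("bootstrap.servers", "localhost:9092");
        props.put("group.id", "my-streams-app");
        props.put("enable.auto.commit", "false");
        props.put("key.deserializer", "org.apache.kafka.common.serialization.ByteArrayDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.ByteArrayDeserializer");

        try (KafkaConsumer<byte[], byte[]> consumer = new KafkaConsumer<>(props)) {
            TopicPartition tp = new TopicPartition("input-topic", 0);
            consumer.assign(Collections.singletonList(tp));
            // Commit an arbitrary offset for this partition; repeat per partition as needed.
            consumer.commitSync(Collections.singletonMap(tp, new OffsetAndMetadata(42L)));
        }
    }
}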
However, for
It's self serve: http://kafka.apache.org/contact
On Mon, Nov 21, 2016 at 1:20 AM, marcel bichon wrote:
> request of addition to the mailing list
>
Ignore the comment about lookups. Your client is finding mybalancer01 since
it was working earlier, and Kafka doesn't need to look up mybalancer01. It
would be good to check the jaas config and then run with debug logging.
On Mon, Nov 21, 2016 at 5:16 PM, Rajini Sivaram <
Not sure if this is the exact jaas.conf that you have, because this has
mismatched passwords:
KafkaServer {
    org.apache.kafka.common.security.plain.PlainLoginModule required
    username="someuser"
    user_kafka="somePassword"
    password="kafka-password";
};
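For contrast, a sketch of a KafkaServer entry where the broker's own credentials (username/password) match its user_<name> entry; all names and passwords here are placeholders:

KafkaServer {
    org.apache.kafka.common.security.plain.PlainLoginModule required
    username="kafka"
    password="kafka-password"
    user_kafka="kafka-password"
    user_someuser="somePassword";
};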
To use username "some user",
Thanks again. So this might be very telling of the underlying problem:
I did what you suggested:
1) I disabled (actually deleted) the first rule; then
2) I changed the load balancer's second (which is now its only) rule to accept
TCP:9093 and to translate that to TCP:9093, making the
Rule #1 and Rule #2 cannot co-exist. You are basically configuring your LB
to point to a Kafka broker and you are pointing each Kafka broker to point
to a LB. So you need a pair of ports with a security protocol for the
connection to work. With two rules, Kafka picks up the wrong LB port for
one
In the last email I should have mentioned: don't pay too much attention to the
code snippet; after reviewing it, I can see it is actually incomplete (I
forgot to include the section where I configure the topics and broker configs
to talk to Kafka!).
What I'm really concerned about is that
request of addition to the mailing list
Thanks Rajini,
So I have implemented your solution (to the best of my knowledge) 100% as you
have specified.
On the load balancer (AWS ELB) I have two load balancer rules:
Rule #1:
Incoming: SSL:9093
Outgoing (to Kafka): TCP:9093
Rule #2:
Incoming: TCP:9095
Outgoing (to Kafka): TCP:
A load balancer that balances the load across the brokers wouldn't work,
but here the LB is being used as a terminating SSL proxy. That should work
if each Kafka broker is configured with its own proxy.
On Mon, Nov 21, 2016 at 2:57 PM, tao xiao wrote:
> I doubt the LB solution
Zac,
Yes, that is correct.
With the configuration:
listeners=PLAINTEXT://:9093,SASL_PLAINTEXT://:9092
advertised.listeners=PLAINTEXT://mybalancer01.example.com:9093,SASL_PLAINTEXT://mykafka01.example.com:9092
- Clients talk to port 9093 on load balancer using SSL.
- Load balancer
Also: Since your testing is purely local, feel free to share the code you
have been using so that we can try to reproduce what you're observing.
-Michael
On Mon, Nov 21, 2016 at 4:04 PM, Michael Noll wrote:
> Please don't take this comment the wrong way, but have you
I doubt the LB solution will work for Kafka. A client needs to connect to the
leader of a partition to produce/consume messages. If we put an LB in front
of all brokers, so that all brokers share the same LB, how does the LB
figure out the leader?
On Mon, Nov 21, 2016 at 10:26 PM Martin Gainty
Please don't take this comment the wrong way, but have you double-checked
whether your counting code is working correctly? (I'm not implying this
could be the only reason for what you're observing.)
-Michael
On Fri, Nov 18, 2016 at 4:52 PM, Eno Thereska
wrote:
> Hi
On Mon, Nov 21, 2016 at 1:06 PM, Sachin Mittal wrote:
> I am using kafka_2.10-0.10.0.1.
> Say I am having a window of 60 minutes advanced by 15 minutes.
> If the stream app using timestamp extractor puts the message in one or more
> bucket(s), it will get aggregated in those
From: Zac Harvey
Sent: Monday, November 21, 2016 8:59 AM
To: users@kafka.apache.org
Subject: Re: Can Kafka/SSL be terminated at a load balancer?
Thanks again Rajini,
Using these configs, would clients connect to the load balancer
Thanks again Rajini,
Using these configs, would clients connect to the load balancer over SSL/9093?
And then would I configure the load balancer to forward traffic from SSL/9093
to plaintext/9093?
Thanks again, just still a little uncertain about the traffic/ports coming into
the load
Zac,
Yes, that is correct. Ruby clients will not be authenticated by Kafka. They
talk SSL to the load balancer and the load balancer uses PLAINTEXT without
authentication to talk to Kafka.
On Mon, Nov 21, 2016 at 1:29 PM, Zac Harvey wrote:
> *Awesome* explanation Rajini
*Awesome* explanation Rajini - thank you!
Just to confirm: the SASL/PLAIN configs would only be for the interbroker
communication, correct? Meaning, beyond your recommended changes to
server.properties, and the addition of the new jaas.conf file, the producers
(Ruby clients) wouldn't need to
Hi,
I just want to raise a flag concerning an error in the documentation.
It says:
> *fetch.max.wait.ms*
> The maximum amount of time the server will block before answering the
> fetch request if there isn't sufficient data to immediately satisfy the
> requirement given by fetch.min.bytes.
So in my Java code, how can I check
when there is no initial offset in Kafka or if the current offset does not
exist any more on the server (e.g. because that data has been deleted)?
So in this case, as you have said, I can set the offset as
props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
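If you want to detect the missing-offset case in your own code rather than rely on the global setting, one approach is to set auto.offset.reset to "none" and handle NoOffsetForPartitionException yourself. A sketch with placeholder broker, group, and topic names:

import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.consumer.NoOffsetForPartitionException;

public class OffsetResetCheck {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "my-group");                // placeholder
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG,
                "org.apache.kafka.common.serialization.StringDeserializer");
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG,
                "org.apache.kafka.common.serialization.StringDeserializer");
        // "none" makes the no-committed-offset case an explicit error instead of a silent reset.
        props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "none");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("my-topic")); // placeholder
            try {
                consumer.poll(1000);
            } catch (NoOffsetForPartitionException e) {
                // No committed offset (or it has been deleted): decide here,
                // e.g. start from the beginning instead of relying on the global config.
                consumer.seekToBeginning(consumer.assignment());
            }
        }
    }
}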
I am using kafka_2.10-0.10.0.1.
Say I have a window of 60 minutes advanced by 15 minutes.
If the stream app, using a timestamp extractor, puts the message in one or more
bucket(s), it will get aggregated in those buckets.
I assume this statement is correct.
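For reference, a 60-minute window advanced by 15 minutes is a hopping window: each record falls into up to 60/15 = 4 overlapping windows and is aggregated into each of them. A small sketch of such a window definition, shown as a bare snippet (method signatures differ slightly between 0.10.0 and 0.10.1):

import java.util.concurrent.TimeUnit;
import org.apache.kafka.streams.kstream.TimeWindows;

// 60-minute hopping window advancing every 15 minutes.
TimeWindows windows = TimeWindows
        .of(TimeUnit.MINUTES.toMillis(60))
        .advanceBy(TimeUnit.MINUTES.toMillis(15));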
Also say when I restart the streams
Hello!
I have a three-broker cluster (K1, K2, K3) using Kafka 0.8.2 and a
ZooKeeper cluster (co-located with the Kafka brokers).
I have a topic with one partition and a replication factor of 3.
I have a producer publishing messages to the topic every minute (1+ messages).
I have a consumer group
Hi Sachin,
Currently you can only change the following global configuration by using
"earliest" or "latest", as shown in the code snippet below. As the Javadoc
mentions: "What to do when there is no initial offset in Kafka or if the
current offset does not exist any more on the server (e.g.
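The code snippet referenced above is cut off in this excerpt; a sketch of the kind of configuration being described (application id and broker address are placeholders) would look like:

import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.streams.StreamsConfig;

Properties props = new Properties();
props.put(StreamsConfig.APPLICATION_ID_CONFIG, "my-streams-app");    // placeholder
props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder
// Global setting passed through to the underlying consumers: "earliest" or "latest".
props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");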
Hi
I am running a streaming application with
streamsProps.put(StreamsConfig.APPLICATION_ID_CONFIG, "test");
How do I find out the offsets for each of the source, intermediate, and
internal topics associated with this application?
And how can I reset them to some specific value via shell of
Hello!
I am contacting you because I have a problem with my architecture using Kafka.
I'm using Confluent 1.0 with Kafka 0.8.2.
I have the following architecture:
- A job that calls the Kafka REST API (behind a load balancer) every hour.
- The Kafka REST API servers point to a Kafka cluster of three
Zac,
*advertised.listeners* is used to make client connections from
producers/consumers as well as for client-side connections for inter-broker
communication. In your scenario, setting it to *PLAINTEXT://mykafka01*
would work for inter-broker, bypassing the load balancer, but clients would
also