Hi,
I have written a streams application to talk to a topic, with 10 partitions,
on a cluster of 5 brokers. I have tried multiple combinations here, like 10
application instances (on 10 different machines) with 1 stream thread each,
or 5 instances with 2 threads each. But for some reason, when I check
Neat, this would be super helpful! I submitted this ages ago:
https://issues.apache.org/jira/browse/KAFKA-
On Fri, Aug 31, 2018 at 5:04 AM, Satish Duggana wrote:
> +including both dev and user mailing lists.
>
> Hi,
> Thanks for the KIP.
>
> "* For us, the message keys represent some
There is no push mechanism available for consumers in Kafka. What is the
current timeout passed to KafkaConsumer#poll(timeout)? You can increase
that timeout to avoid calling poll frequently.
Thanks,
Satish.
On Fri, Aug 31, 2018 at 12:27 AM, Pulkit Manchanda wrote:
> Hi All,
> I have a
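To make the long-poll suggestion above concrete, here is a minimal sketch of a consumer loop; the bootstrap address, group id, and topic name are placeholders, not from this thread. poll() blocks for up to the given timeout when no records are available, so a larger timeout avoids busy-spinning:

```java
import java.time.Duration;
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class LongPollConsumer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // placeholder
        props.put("group.id", "example-group");           // placeholder
        props.put("key.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("my-topic")); // placeholder
            while (true) {
                // Blocks for up to 5 seconds when the topic is idle,
                // instead of spinning in a tight loop.
                ConsumerRecords<String, String> records =
                        consumer.poll(Duration.ofSeconds(5));
                for (ConsumerRecord<String, String> record : records) {
                    System.out.println(record.key() + " -> " + record.value());
                }
            }
        }
    }
}
```

Note that poll(Duration) is the 2.0 API; older clients take a long timeout in milliseconds.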
+including both dev and user mailing lists.
Hi,
Thanks for the KIP.
"* For us, the message keys represent some metadata which we use to either
ignore messages (if a loop-back to the sender), or log some information.*"
The above statement in the KIP describes how the key value is used. I
guess the topic is not configured to be compacted
Hi All,
I have a consumer application continuously polling for records in a
while loop, wasting CPU cycles.
Is there any alternative, like getting a callback/event from Kafka as soon
as the producer publishes a record to the topic?
Thanks
Pulkit
Hi Harsha,
thanks for reading the KIP.
The intent is to use the DefaultPartitioner logic for round-robin selection
of partition regardless of key type.
Implementing the Partitioner interface isn't the issue here; you would have
to do that anyway if you were implementing your own. But we also want
Hey Bill,
Thanks for reading the KIP, much appreciated.
The reason we want it to be a separate Partitioner is:
a) We don't want to specify the partition anywhere.
b) We want to be able to reuse what's been done for the NULL key in
DefaultPartitioner.
Using the constructor means we need to
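As a rough illustration (not the actual DefaultPartitioner code), the null-key round-robin selection being reused here boils down to an atomic counter taken modulo the partition count; the class and method names below are hypothetical:

```java
import java.util.concurrent.atomic.AtomicInteger;

// Hypothetical sketch of null-key round-robin partition selection.
class RoundRobinSketch {
    private final int numPartitions;
    private final AtomicInteger counter = new AtomicInteger(0);

    RoundRobinSketch(int numPartitions) {
        this.numPartitions = numPartitions;
    }

    // Mask off the sign bit so a wrapped counter still maps to a valid partition.
    private static int toPositive(int n) {
        return n & 0x7fffffff;
    }

    int nextPartition() {
        return toPositive(counter.getAndIncrement()) % numPartitions;
    }
}
```

The counter ignores the key entirely, which is the behavior being asked for regardless of key type.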
I included the following line before the other dependencies lines and it
worked:
libraryDependencies += "javax.ws.rs" % "javax.ws.rs-api" % "2.1" artifacts(
Artifact("javax.ws.rs-api", "jar", "jar"))
Thanks Guozhang
On Thu, Aug 30, 2018 at 12:01 PM Guozhang Wang wrote:
> I saw the library in
Hi,
I am using SSL for broker communications.
Now I want to connect to port 9093 with GetOffsetShell.
How can I have the class use the SSL certificates?
Are there any environment variables?
Can I use a properties file?
Regards Hans
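For reference, a client-side SSL properties file typically looks like the following (paths and passwords are placeholders). Recent Kafka releases let you pass such a file to GetOffsetShell via its --command-config option; if your version predates that flag, the tool may not support SSL at all:

```properties
security.protocol=SSL
ssl.truststore.location=/path/to/client.truststore.jks
ssl.truststore.password=changeit
# Only needed if the broker enforces client authentication:
ssl.keystore.location=/path/to/client.keystore.jks
ssl.keystore.password=changeit
ssl.key.password=changeit
```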
Hi,
NOTE: I sent this earlier, but that message just went to the dev list. I'm
including both users and dev now.
Thanks for the KIP.
Have you considered using the overloaded ProducerRecord constructor where
you can specify the partition? I mention this as an option as I
encountered the same
I saw the library in maven central:
https://mvnrepository.com/artifact/javax.ws.rs/javax.ws.rs-api/2.1
Your Maven repo seems to also have this:
https://repo1.maven.org/maven2/javax/ws/rs/javax.ws.rs-api/2.1/
Maybe your {packaging.type} variable is not defined?
Guozhang
On Thu, Aug 30,
Hi,
I get the following message when I try to use kafka-streams 2.0.0 (failing
to download javax.ws.rs-api) in sbt.
There is no problem when using kafka-streams 1.1.1.
I tried Scala versions 2.11 and 2.12 but got the same results.
I am using Windows 10.
SBT:
name := "KafkaProj1"
version
Hi,
Thanks for the KIP. I am trying to understand the intent of the KIP. Can
the use case you specified be achieved by implementing the Partitioner
interface here?
https://github.com/apache/kafka/blob/trunk/clients/src/main/java/org/apache/kafka/clients/producer/Partitioner.java#L28
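For anyone following along, a skeletal custom Partitioner looks roughly like this; the round-robin body and class name are illustrative assumptions, not the KIP's proposal:

```java
import java.util.Map;
import java.util.concurrent.atomic.AtomicInteger;
import org.apache.kafka.clients.producer.Partitioner;
import org.apache.kafka.common.Cluster;

// Illustrative only: ignores the key entirely and rotates over partitions.
class KeyIgnoringRoundRobinPartitioner implements Partitioner {
    private final AtomicInteger counter = new AtomicInteger(0);

    @Override
    public int partition(String topic, Object key, byte[] keyBytes,
                         Object value, byte[] valueBytes, Cluster cluster) {
        int numPartitions = cluster.partitionsForTopic(topic).size();
        // Mask the sign bit so a wrapped counter still yields a valid index.
        return (counter.getAndIncrement() & 0x7fffffff) % numPartitions;
    }

    @Override
    public void close() {}

    @Override
    public void configure(Map<String, ?> configs) {}
}
```

A custom implementation like this is enabled by setting the producer's partitioner.class config to the class name.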
What is your poll timeout for the consumer's poll() method?
On Thu, 30 Aug 2018, 16:23 Cristian Petroaca, wrote:
> Ok, so I mixed things up a little.
> I started with the kafka Server being configured to auto create topics.
> That gave the error.
> But turning the auto create off and creating the
Ok, so I mixed things up a little.
I started with the Kafka server configured to auto-create topics. That
gave the error.
But turning auto-create off and creating the topic with AdminUtils does not
show the error, and the consumer actually polls for messages.
I did not modify the
The trouble is that the producer and consumer clients need to discover the
broker hostnames and address the individual brokers directly. There is an
advertised.listeners setting that will allow you to tell clients to connect
to external proxy hostnames instead of your internal ones, but those
Usually for such a use case you'd have a physical load balancer box (F5,
etc.) in front of Kafka that would handle the SSL termination, but it
should be possible with NGINX as well:
https://docs.nginx.com/nginx/admin-guide/security-controls/terminating-ssl-tcp/
On Fri, 24 Aug 2018 at 18:35, Jack
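For what it's worth, a minimal nginx stream block for TCP-level SSL termination in front of a single broker might look like this (hostnames, ports, and cert paths are placeholders). As noted elsewhere in the thread, clients address brokers individually, so in practice you would need one proxied listener per broker, plus matching advertised.listeners entries:

```nginx
stream {
    upstream kafka_broker1 {
        server broker1.internal:9092;  # plaintext listener behind the proxy
    }
    server {
        listen 9093 ssl;
        ssl_certificate     /etc/nginx/certs/kafka.crt;
        ssl_certificate_key /etc/nginx/certs/kafka.key;
        proxy_pass kafka_broker1;
    }
}
```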
Hello,
I opened a very simple KIP, and there is an existing JIRA for it.
I would be grateful for any comments.
Regards,
Hi,
I'm not sure if your broker and consumer run on different servers.
Maybe you can try changing the broker's host.name and the consumer's
bootstrap.servers to the broker's real IP address instead of "127.0.0.1"?
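As a sketch of that suggestion (the IP address is a placeholder; note that on newer brokers host.name is deprecated in favor of listeners):

```properties
# broker: server.properties
listeners=PLAINTEXT://192.168.1.10:9092

# consumer: client config
bootstrap.servers=192.168.1.10:9092
```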
>-Original Message-
>From: Cristian Petroaca
Yes.
In my programmatic env I first create it with:
AdminUtils.createTopic(zkUtils, topic, 1, 1, new Properties(),
RackAwareMode.Enforced$.MODULE$);
So partitions = 1 and replication = 1.
The same for the remote broker; I created the topic with --partitions 1
--replication-factor 1
Are there any
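As an aside, on newer clients the same one-partition, replication-factor-one topic can be created with the AdminClient API instead of AdminUtils (the broker address and topic name are placeholders):

```java
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.NewTopic;

class CreateTopicExample {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // placeholder
        try (AdminClient admin = AdminClient.create(props)) {
            // partitions = 1, replication factor = 1, matching the thread
            NewTopic topic = new NewTopic("my-topic", 1, (short) 1);
            admin.createTopics(Collections.singleton(topic)).all().get();
        }
    }
}
```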
Hi, I would like to try a setup of SASL/OAUTHBEARER where:
1. a kafka client obtains an OAUTHBEARER token from an authorization server
2. the kafka client sends the OAUTHBEARER token to kafka
3. kafka validates the OAUTHBEARER token and authenticates the user
Is anybody aware if
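The client side of steps 1 and 2 is mostly configuration; a typical (hypothetical) client setup looks like the following, where the login callback handler is your own class that obtains the token from the authorization server:

```properties
security.protocol=SASL_SSL
sasl.mechanism=OAUTHBEARER
sasl.jaas.config=org.apache.kafka.common.security.oauthbearer.OAuthBearerLoginModule required;
# Your implementation that fetches the token (step 1); class name is a placeholder:
sasl.login.callback.handler.class=com.example.MyOAuthBearerLoginCallbackHandler
```

Broker-side validation (step 3) is configured separately with a server callback handler on the SASL listener.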