Hi Guozhang,
I am using 10.2.1.
-Sameer.
On Sat, Jul 29, 2017 at 12:05 AM, Guozhang Wang wrote:
> Sameer,
>
> This bug should already be fixed in trunk.
>
> Which version of Kafka Streams are you running with? We can consider
> backporting it and doing a bug-fix release if it turns out to be a common
> issue.
Environment:
CDH 5.7
Kafka 0.9 (Cloudera)
Our broker (Cloudera Manager) is warning us about open file descriptors on
the cluster. It has around 17K file descriptors open. There is a
configuration in Cloudera Manager to change the warning and critical
thresholds for the number of open file descriptors.
Damien,
Here is a public gist:
https://gist.github.com/ctippur/9f0900b1719793d0c67f5bb143d16ec8
- Shekar
On Fri, Jul 28, 2017 at 11:45 AM, Damian Guy wrote:
> It might be easier if you make a github gist with your code. It is quite
> difficult to see what is happening in an email.
>
> Cheers,
It might be easier if you make a github gist with your code. It is quite
difficult to see what is happening in an email.
Cheers,
Damian
On Fri, 28 Jul 2017 at 19:22, Shekar Tippur wrote:
> Thanks a lot Damien.
> I am able to see whether the join worked (using foreach). I tried to add
> the logic to query the store after starting the streams:
Sameer,
This bug should already be fixed in trunk.
Which version of Kafka Streams are you running with? We can consider
backporting it and doing a bug-fix release if it turns out to be a common issue.
Guozhang
On Fri, Jul 28, 2017 at 4:57 AM, Damian Guy wrote:
> It is due to a bug. You should set
> StreamsConfig.STATE_DIR_CLEANUP_DELAY_MS_CONFIG to Long.MAX_VALUE - i.e.,
> disabling it.
Thanks a lot Damien.
I am able to see whether the join worked (using foreach). I tried to add
the logic to query the store after starting the streams, but it looks like
the code is not getting there. Here is the modified code:
KafkaStreams streams = new KafkaStreams(builder, props);
streams.start();
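For reference, a minimal sketch of querying a local store once the instance is running. The store name "myStore" and the String types are assumptions, and the enclosing method is assumed to declare `throws InterruptedException`; on 0.10.2 the store is often not available immediately after start(), so retrying on InvalidStateStoreException is the usual pattern:

```java
import org.apache.kafka.streams.errors.InvalidStateStoreException;
import org.apache.kafka.streams.state.QueryableStoreTypes;
import org.apache.kafka.streams.state.ReadOnlyKeyValueStore;

streams.start();

// The store is only queryable once the instance is RUNNING; until then
// store() throws InvalidStateStoreException, so retry with a short sleep.
ReadOnlyKeyValueStore<String, String> store = null;
while (store == null) {
    try {
        store = streams.store("myStore", QueryableStoreTypes.keyValueStore());
    } catch (InvalidStateStoreException e) {
        Thread.sleep(100); // store still initializing or migrating
    }
}
System.out.println(store.get("some-key"));
```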
Hi Gabriel,
I have yet to experiment with enabling SSL for Kafka.
However, there are some good documents out there that seem to cover it.
Examples:
*
https://www.confluent.io/blog/apache-kafka-security-authorization-authentication-encryption/
*
http://coheigea.blogspot.com/2016/09/securing-apac
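To give a flavor of what those guides cover, a broker-side SSL listener setup looks roughly like the following (a server.properties sketch; all paths and passwords are placeholders):

```
listeners=SSL://:9093
ssl.keystore.location=/var/private/ssl/kafka.server.keystore.jks
ssl.keystore.password=changeit
ssl.key.password=changeit
ssl.truststore.location=/var/private/ssl/kafka.server.truststore.jks
ssl.truststore.password=changeit
```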
Hey guys,
We're trying to use the Java Kafka client, but it turns out it's not
SOCKS-proxy aware - the connect path uses a SocketChannel that does not work
with proxies -
https://github.com/apache/kafka/blob/trunk/clients/src/main/java/org/apache/kafka/common/network/Selector.java
Any ideas how to use
Hi Vahid,
Do you know how to use the consumer-groups tool with SSL only (without SASL)?
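For what it's worth, the usual approach is to point the tool at an SSL listener and pass a client config file via --command-config; the file and all paths/passwords below are placeholders:

```
# client-ssl.properties, used as:
#   kafka-consumer-groups.sh --bootstrap-server broker:9093 \
#     --command-config client-ssl.properties --list
security.protocol=SSL
ssl.truststore.location=/var/private/ssl/client.truststore.jks
ssl.truststore.password=changeit
```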
Gabriel.
On Jul 24, 2017 at 11:15 PM, "Vahid S Hashemian"
wrote:
Hi Meghana,
I did some experiments with SASL_PLAINTEXT and documented the results
here:
https://developer.ibm.com/opentech/2017/05/31/kafka-ac
Hmmm, I'm not sure that is going to work, as both nodes will have the same
setting for StreamsConfig.APPLICATION_SERVER_CONFIG, i.e., 0.0.0.0:7070
On Fri, 28 Jul 2017 at 16:02 Debasish Ghosh
wrote:
> The log file is a huge one. I can send it to you though. Before that let
> me confirm one point ..
>
The log file is a huge one. I can send it to you though. Before that let me
confirm one point ..
I set the APPLICATION_SERVER_CONFIG to
s"${config.httpInterface}:${config.httpPort}". In my case the httpInterface
is "0.0.0.0" and the port is set to 7070. Since the two instances start on
different nodes ..
Do you have any logs that might help to work out what is going wrong?
On Fri, 28 Jul 2017 at 14:16 Damian Guy wrote:
> The config looks ok to me
>
> On Fri, 28 Jul 2017 at 13:24 Debasish Ghosh
> wrote:
>
>> I am setting APPLICATION_SERVER_CONFIG, which is possibly what u r
>> referring to. Just now I noticed that I may also need to set
>> REPLICATION_FACTOR_CONFIG, which needs to be set to 2 (default is 1).
Thanks, Vahid! Nice documentation. All the tools were working fine except
for kafka-consumer-groups --list, which is what I was struggling to get
working. I realized I had missed the cluster permissions for the user. It
looks good now.
Thanks,
Meghana
On Mon, Jul 24, 2017 at 5:14 PM, Vahid S Ha
The config looks ok to me
On Fri, 28 Jul 2017 at 13:24 Debasish Ghosh
wrote:
> I am setting APPLICATION_SERVER_CONFIG, which is possibly what u r
> referring to. Just now I noticed that I may also need to set
> REPLICATION_FACTOR_CONFIG, which needs to be set to 2 (default is 1).
> Anything else that I may be missing?
Hello All,
Is there a way to generate a sink file with a header using Kafka Connect?
Thanks and Regards
MR
I am setting APPLICATION_SERVER_CONFIG, which is possibly what u r
referring to. Just now I noticed that I may also need to set
REPLICATION_FACTOR_CONFIG, which needs to be set to 2 (default is 1).
Anything else that I may be missing?
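As a sketch of how those two settings fit together (all hosts, ports, and values here are illustrative, and literal config keys are used so the snippet compiles without the kafka-streams jar; with it on the classpath you would use StreamsConfig.APPLICATION_SERVER_CONFIG and StreamsConfig.REPLICATION_FACTOR_CONFIG instead):

```java
import java.util.Properties;

public class StreamsSettings {
    // Per-instance settings: the host must be an address the *other*
    // instances can reach, not a wildcard like 0.0.0.0.
    static Properties forInstance(String host, int port) {
        Properties settings = new Properties();
        settings.put("application.server", host + ":" + port);
        settings.put("replication.factor", "2"); // internal topics survive one broker loss
        return settings;
    }

    public static void main(String[] args) {
        System.out.println(forInstance("10.0.0.1", 7070));
    }
}
```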
regards.
On Fri, Jul 28, 2017 at 5:46 PM, Debasish Ghosh
wrote:
Hi Damien -
I am not sure I understand what u mean .. I have the following set in the
application .. Do I need to set anything else at the host level ?
Environment variable ?
val streamingConfig = {
val settings = new Properties
settings.put(StreamsConfig.APPLICATION_ID_CONFIG,
"k
Hi,
Do you have the application.server property set appropriately for both
hosts?
The second stack trace is this bug:
https://issues.apache.org/jira/browse/KAFKA-5556
On Fri, 28 Jul 2017 at 12:55 Debasish Ghosh
wrote:
> Hi -
>
> In my Kafka Streams application, I have a state store resulting f
It is due to a bug. You should set
StreamsConfig.STATE_DIR_CLEANUP_DELAY_MS_CONFIG to Long.MAX_VALUE - i.e.,
disabling it.
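Concretely, that looks something like the following (the literal key "state.cleanup.delay.ms" is what StreamsConfig exposes as STATE_CLEANUP_DELAY_MS_CONFIG in released versions; double-check the constant name against the StreamsConfig of your Kafka version):

```java
import java.util.Properties;

public class CleanupDelayConfig {
    // Push the state-directory cleanup delay out to Long.MAX_VALUE, which
    // effectively disables the background cleanup.
    static Properties disableStateCleanup(Properties settings) {
        settings.put("state.cleanup.delay.ms", Long.toString(Long.MAX_VALUE));
        return settings;
    }

    public static void main(String[] args) {
        System.out.println(disableStateCleanup(new Properties()));
    }
}
```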
On Fri, 28 Jul 2017 at 10:38 Sameer Kumar wrote:
> Hi,
>
> I am facing this error, no clue why this occurred. No other exception in
> stacktrace was found.
>
> Only thing different I did was I ran the kafka streams jar on machine2 a
> couple of mins after I ran it on machine1.
Hi -
In my Kafka Streams application, I have a state store resulting from a
stateful streaming topology. The environment is
- Kafka 0.10.2.1
- It runs on a DC/OS cluster
- I am running Confluent-Kafka 3.2.2 on the cluster
- Each topic that I have has 2 partitions with a replication factor
Hello,
we would like to use Kafka as a way to inform users about events on certain
topics. For this purpose, we want to develop Windows and Mac clients that
users would install on their desktop PCs.
We have a large number of users, so it's likely that there will be >10,000
clients running i
Hi,
I am facing this error, no clue why it occurred. No other exception was
found in the stack trace.
Only thing different I did was I ran the kafka streams jar on machine2 a
couple of mins after I ran it on machine1.
Please search for this string in the log below:
org.apache.kafka.streams.processor.i
Hello Apache Kafka community,
In the Consumer, Producer, AdminClient and Broker configuration documentation
there's a common config property, request.timeout.ms, with a common
description part being:
"The configuration controls the maximum amount of time the client will wait
for the response of a request."
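For context, the property is set like any other client config; the 30000 ms value below is illustrative, not a recommendation:

```java
import java.util.Properties;

public class RequestTimeoutConfig {
    // request.timeout.ms bounds how long the client waits for a broker
    // response before it retries or fails the request.
    static Properties withRequestTimeout(Properties props, int timeoutMs) {
        props.put("request.timeout.ms", Integer.toString(timeoutMs));
        return props;
    }

    public static void main(String[] args) {
        System.out.println(withRequestTimeout(new Properties(), 30000));
    }
}
```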
Hi,
The store won't be queryable until after you have called streams.start().
No stores have been created until the application is up and running and
they are dependent on the underlying partitions.
To check that a stateful operation has produced a result you would normally
add another operation after it.
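For example (a sketch against the 0.10.x API; the store name "counts-store" and the input stream are assumptions):

```java
// Make the result of a stateful operation observable by attaching a
// downstream operation to it.
KTable<String, Long> counts = input
        .groupByKey()
        .count("counts-store");

// Print every update the count produces.
counts.toStream()
        .foreach((key, value) -> System.out.println(key + " -> " + value));
```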
You can do both in a single application via
KStream input = builder.stream("topic");
input.to("output-1");
input.to("output-2");
In general, if you reuse a KStream or KTable and apply multiple
operators (in the example above, two `to()` operators), the input will
be duplicated and sent to each operator.
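Put together as a compilable sketch (using the 0.10.x-era KStreamBuilder API to match the snippets earlier in the thread; topic names and `props` are assumptions):

```java
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.KStreamBuilder;

// One source fanned out to two sinks: every record consumed from "topic"
// is written to both output topics.
KStreamBuilder builder = new KStreamBuilder();
KStream<String, String> input = builder.stream("topic");
input.to("output-1");
input.to("output-2");

KafkaStreams streams = new KafkaStreams(builder, props);
streams.start();
```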