Hi Gwen,
this is not a hint as in "make it smarter"; this is a hint as in "make it
work", which should not require hinting.
Best Jan
On 27.01.2017 22:35, Gwen Shapira wrote:
Another vote in favor of overloading. I think the streams API actually
trains users quite well in realizing the implic
Hi,
Exactly. I know it works from the Processor API, but my suggestion would
prevent DSL users from dealing with store names whatsoever.
In general I am in favor of switching between the DSL and the Processor API
easily. (In my Streams applications I do this a lot with reflection and by
instantiating KTableImpl.) Conc
hello all,
I just wanted to point out a potential issue in kafka-clients 0.10.1.1.
I was using spark-sql-kafka-0-10, which is the Spark Structured Streaming
integration for Kafka. It depends on kafka-clients 0.10.0.1, but since my
Kafka servers are 0.10.1.1 I decided to upgrade kafka-clients to 0.10.1.
Another vote in favor of overloading. I think the streams API actually
trains users quite well in realizing the implications of adding a
state-store - we need to figure out the correct Serde every single
time :)
Another option: "materialize" behaves almost as a SQL hint - i.e.
allows a user to con
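(For context, a rough sketch of the overload style being discussed, written
against the 0.10.1-era Streams DSL; topic and store names here are made up:)

import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.KStreamBuilder;
import org.apache.kafka.streams.kstream.KTable;

KStreamBuilder builder = new KStreamBuilder();
KStream<String, String> text =
        builder.stream(Serdes.String(), Serdes.String(), "text-topic");
KTable<String, Long> counts = text
        .groupByKey(Serdes.String(), Serdes.String())
        .count("counts-store");   // explicit Serdes plus a store name on the stateful call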
Jan,
the IQ feature is not limited to the Streams DSL but can also be used for
stores used in the PAPI. Thus, we need a mechanism that works for both the
PAPI and the DSL.
Nevertheless I see your point and I think we could provide a better API
for KTable stores including the discovery of remote shards of the same
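(A quick sketch, assuming the 0.10.1 Interactive Queries API and a made-up
store name, of how both local lookups and remote-shard discovery hang off the
store name today:)

import java.util.Collection;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.state.QueryableStoreTypes;
import org.apache.kafka.streams.state.ReadOnlyKeyValueStore;
import org.apache.kafka.streams.state.StreamsMetadata;

// 'streams' is an already-running KafkaStreams instance (assumed)
ReadOnlyKeyValueStore<String, Long> store =
        streams.store("counts-store", QueryableStoreTypes.<String, Long>keyValueStore());
Long value = store.get("some-key");                    // local lookup by store name
Collection<StreamsMetadata> shards =
        streams.allMetadataForStore("counts-store");   // discover the other shards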
Yeah,
Maybe it is my bad that I refuse to look into IQ, as I don't find them
anywhere close to being interesting. The problem IMO is that people need to
know the store name, so we are working on different levels to achieve a
single goal.
What is your opinion on having a method on KTable th
Thanks, Todd! Deleting the /controller znode worked.
On Fri, Jan 27, 2017 at 10:20 AM, Todd Palino wrote:
> Did you move the controller (by deleting the /controller znode) after
> removing the reassign_partitions znode? If not, the controller is probably
> still trying to do that move, and is n
I am looking to productionize and deploy my Kafka Connect application.
However, I have two questions about the tasks.max setting, which is required
and of high importance, but the details on what to actually set this value to
are vague.
My simplest question then is as follows: If I have a
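(Not an answer to the sizing question, but for context on what tasks.max
actually controls: the framework passes it into the connector's taskConfigs()
method as an upper bound, and the connector decides how many task configs to
actually return. A rough sketch with a hypothetical connector class and config
key:)

import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.Map;
import org.apache.kafka.connect.source.SourceConnector;

public class MySourceConnector extends SourceConnector {       // hypothetical
    @Override
    public List<Map<String, String>> taskConfigs(int maxTasks) {
        // maxTasks is the tasks.max value; never return more configs than that.
        // Pretend there are 10 independent units of work to divide among tasks.
        int numTasks = Math.min(maxTasks, 10);
        List<Map<String, String>> configs = new ArrayList<>(numTasks);
        for (int i = 0; i < numTasks; i++) {
            configs.add(Collections.singletonMap("work.slice", Integer.toString(i)));
        }
        return configs;
    }
    // version(), start(), stop(), taskClass(), config() omitted for brevity
}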
Did you move the controller (by deleting the /controller znode) after
removing the reassign_partitions znode? If not, the controller is probably
still trying to do that move, and is not going to accept a new move request.
On Fri, Jan 27, 2017 at 10:16 AM, Tom Raney
wrote:
> After adding a new Ka
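(For reference, the same two steps done programmatically against ZooKeeper; a
sketch only, with a made-up connect string and exception handling omitted.
Deleting /controller forces a controller re-election, so use with care:)

import org.apache.zookeeper.ZooKeeper;

ZooKeeper zk = new ZooKeeper("zk-host:2181", 30000, event -> { });
zk.delete("/admin/reassign_partitions", -1);   // drop the stuck reassignment (-1 = any version)
zk.delete("/controller", -1);                  // then move the controller
zk.close();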
I am using kafka_2.10-0.8.1 and trying to fetch messages using the
SimpleConsumer API. I notice that the byte array for the retrieved message has
26 junk bytes prepended to the original message sent by the producer. Any
idea what's going on here? This works fine with the high-level consumer.
This is how
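(Those 26 bytes are most likely not junk but the on-the-wire message framing:
8-byte offset + 4-byte message size + 4-byte CRC + 1-byte magic + 1-byte
attributes + 4-byte key length + 4-byte value length = 26. A sketch, assuming
the 0.8.x SimpleConsumer API, of reading only the payload:)

import java.nio.ByteBuffer;
import kafka.javaapi.FetchResponse;
import kafka.message.MessageAndOffset;

// 'response' is the FetchResponse returned by SimpleConsumer.fetch(...) (assumed)
for (MessageAndOffset messageAndOffset : response.messageSet("my-topic", 0)) {
    ByteBuffer payload = messageAndOffset.message().payload();
    byte[] value = new byte[payload.remaining()];
    payload.get(value);
    // 'value' now holds exactly the bytes the producer sent, without the header
}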
After adding a new Kafka node, I ran the kafka-reassign-partitions.sh tool
to redistribute topics onto the new machine and it seemed like some of the
migrations were stuck processing for over 24 hours, so I cancelled the
reassignment by deleting the zk node (/admin/reassign_partitions) and used
the
I think Jan is saying that they don't always need to be materialized, i.e.,
filter just needs to apply the ValueGetter; it doesn't need yet another
physical state store.
On Fri, 27 Jan 2017 at 15:49 Michael Noll wrote:
> Like Damian, and for the same reasons, I am more in favor of overloading
>
Like Damian, and for the same reasons, I am more in favor of overloading
methods rather than introducing `materialize()`.
FWIW, we already have a similar API setup for e.g.
`KTable#through(topicName, stateStoreName)`.
A related but slightly different question is what e.g. Jan Filipiak
mentioned ea
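(That existing overload, sketched against the 0.10.1 API with made-up topic
and store names; 'builder' is a KStreamBuilder as in the earlier sketch:)

KTable<String, Long> counts =
        builder.table(Serdes.String(), Serdes.Long(), "counts-topic", "counts-source-store");
KTable<String, Long> republished =
        counts.through("counts-changelog", "counts-store");   // topicName, stateStoreName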
Hi,
Yeah, it's confusing. Why shouldn't it be queryable by IQ? If you use the
ValueGetter of Filter, it will apply the filter and should be completely
transparent as to whether another processor or IQ is accessing it. How can
this new method help?
I cannot see the reason for the additional materializ
If you are using jmxterm then you are going to connect to a running jvm and
you don't need to set StreamsConfig.METRICS_REPORTER_CLASSES_CONFIG. You
need to connect jmxterm to the MBean server that will be running in the jvm
of your streams app. You'll need to provide an appropriate jmx port for it
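(In other words, the built-in JmxReporter is always registered, so nothing has
to be set in StreamsConfig for jmxterm. A sketch with made-up application id
and broker address; the JMX port itself comes from standard JVM flags, not
from Streams config:)

import java.util.Properties;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.kstream.KStreamBuilder;

Properties props = new Properties();
props.put(StreamsConfig.APPLICATION_ID_CONFIG, "my-streams-app");       // made up
props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");    // made up
// No METRICS_REPORTER_CLASSES_CONFIG needed; JMX metrics are exposed regardless.
// To reach them from a remote jmxterm, start the JVM with e.g.
//   -Dcom.sun.management.jmxremote.port=9999
//   -Dcom.sun.management.jmxremote.authenticate=false
//   -Dcom.sun.management.jmxremote.ssl=false
KStreamBuilder builder = new KStreamBuilder();
// ... build the topology ...
KafkaStreams streams = new KafkaStreams(builder, props);
streams.start();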
Found a somewhat related ticket: https://issues.apache.org/jira/browse/KAFKA-4385
On Fri, Jan 27, 2017 at 1:09 PM, Stevo Slavić wrote:
> Hello Apache Kafka community,
>
> When using new async KafkaProducer, one can register callback with send
> calls.
>
> With auto.create.topics.enable set to fals
Hello Apache Kafka community,
When using the new async KafkaProducer, one can register a callback with send
calls.
With auto.create.topics.enable set to false, when I try to publish to a
non-existent topic, I expect the callback to complete with
UnknownTopicOrPartitionException. Instead, I get back
"org.apac
Hi,
I understood what I need to do.
What is not clear to me, though, is
StreamsConfig.METRICS_REPORTER_CLASSES_CONFIG.
Say I decide to use jmxterm, which is a CLI-based client that I can easily
use where my streams app is running.
With respect to that, what value should I assign to the
METRICS_RE
Hi Sachin,
You can configure an implementation of org.apache.kafka.common.metrics.MetricsReporter.
This is done via StreamsConfig.METRICS_REPORTER_CLASSES_CONFIG.
There is a list of JMX reporters here:
https://cwiki.apache.org/confluence/display/KAFKA/JMX+Reporters
I'm sure there are plenty more available on gith
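(A bare-bones sketch of such a reporter, with a made-up class name, just to
show the interface shape:)

import java.util.List;
import java.util.Map;
import org.apache.kafka.common.metrics.KafkaMetric;
import org.apache.kafka.common.metrics.MetricsReporter;

public class LoggingMetricsReporter implements MetricsReporter {   // hypothetical
    @Override public void configure(Map<String, ?> configs) { }
    @Override public void init(List<KafkaMetric> metrics) { }
    @Override public void metricChange(KafkaMetric metric) {
        // called when a metric is added or updated
        System.out.println(metric.metricName() + " = " + metric.value());
    }
    @Override public void metricRemoval(KafkaMetric metric) { }
    @Override public void close() { }
}
// Enabled via:
//   props.put(StreamsConfig.METRICS_REPORTER_CLASSES_CONFIG, LoggingMetricsReporter.class.getName());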