But my stream definition does not have a state store at all, RocksDB or
in-memory... That's the most concerning part...
On Wed, Jun 7, 2017 at 9:48 PM Sachin Mittal wrote:
> One instance with 10 threads may cause RocksDB issues.
> How much RAM do you have?
>
> Also check CPU wait time. Many RocksDB
One instance with 10 threads may cause RocksDB issues.
How much RAM do you have?
Also check CPU wait time. Many RocksDB instances on one machine (depending
on the number of partitions) may cause a lot of disk I/O, increasing wait times
and hence slowing down message processing, causing frequ
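One way to act on that warning, sketched roughly (not from the thread; property names are from StreamsConfig, and the application id and broker address are placeholders), is to run fewer stream threads per instance and scale out with more instances instead:

Properties streamsProps = new Properties();
streamsProps.put(StreamsConfig.APPLICATION_ID_CONFIG, "my-streams-app");    // placeholder
streamsProps.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder
// fewer threads per instance means fewer RocksDB stores competing for disk I/O
// on one machine; add instances to regain parallelism
streamsProps.put(StreamsConfig.NUM_STREAM_THREADS_CONFIG, 2);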
There is one instance with 10 threads.
On Wed, Jun 7, 2017 at 9:07 PM Guozhang Wang wrote:
> João,
>
> Do you also have multiple running instances in parallel, and how many
> threads are you running within each instance?
>
> Guozhang
>
>
> On Wed, Jun 7, 2017 at 3:18 PM, João Peixoto
> wrote:
João,
Do you also have multiple running instances in parallel, and how many
threads are you running within each instance?
Guozhang
On Wed, Jun 7, 2017 at 3:18 PM, João Peixoto
wrote:
> Eno before I do so I just want to be sure this would not be a duplicate. I
> just found the following issue
I have two app instances; the input topic has 2 partitions, and each instance is
configured with one thread and one replica.
Also, instance1's state store is /tmp/kafka-streams, and instance2's
state store is /tmp/kafka-streams2.
Now I do this experiment to study checkpointing in Kafka Streams (0.10.0.0).
1. start instance1,
If you are setting acks=0 then you don't care about losing data even when the
cluster is up. The only way to get at-least-once is acks=all.
> On Jun 7, 2017, at 1:12 PM, Ankit Jain wrote:
>
> Thanks Hans.
>
> It would work but the producer will start losing data even when the cluster is
> availabl
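For reference, a minimal hedged sketch of the durability side of this (not from the thread; the broker address and retry count are illustrative, and the usual Properties/ProducerConfig imports are assumed):

Properties props = new Properties();
props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder
props.put(ProducerConfig.ACKS_CONFIG, "all"); // wait for all in-sync replicas before a send counts as successful
props.put(ProducerConfig.RETRIES_CONFIG, 3);  // retry transient broker errors instead of dropping records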
Eno, before I do so I just want to be sure this would not be a duplicate. I
just found the following issues:
* https://issues.apache.org/jira/browse/KAFKA-5167. Marked as being fixed
in 0.11.0.0/0.10.2.2 (neither released yet, afaik)
* https://issues.apache.org/jira/browse/KAFKA-5070. Currently in prog
Indeed, all good points. Thanks all for the continuing valuable feedback!
> On Jun 7, 2017, at 3:07 PM, Matthias J. Sax wrote:
>
> If you write to a remote DB, keep in mind that this will impact your
> Streams app, as you lose data locality.
>
> Thus, populating a DB from the changelog might be
If you write to a remote DB, keep in mind that this will impact your
Streams app, as you lose data locality.
Thus, populating a DB from the changelog might be better. It also
decouples both systems, which gives you the advantage that your Streams app
can still run if the DB has an issue. If you write dire
It depends; embedded Postgres puts you into the same spot.
But if you use your state store changelog to materialize into a
Postgres, that might work out decently.
The current JDBC sink doesn't support deletes, which is an issue, but writing a
custom sink is not too hard.
Best Jan
On 07.06.2017 23:47, St
I was actually considering writing my own KeyValueStore backed
by e.g. Postgres or the like.
Is there some feature Connect gains me that would make it better
than such an approach?
thanks
> On Jun 7, 2017, at 2:20 PM, Jan Filipiak wrote:
>
> Hi,
>
> have you thought about using connect to p
ConsoleConsumer by default uses the String deserializer, but the value in the
changelog is of type long. For the output topic, the type is converted from
long to string though -- thus you can read the output topic without
problems.
For reading the changelog topic, you need to specify the option
--property
value
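A minimal standalone Java sketch of the same idea, for reading the changelog outside the console consumer (the broker address, group id, and changelog topic name are placeholders; it assumes string keys and long values):

import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.*;

public class ChangelogDump {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "changelog-inspector");     // placeholder
        props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG,
                  "org.apache.kafka.common.serialization.StringDeserializer");
        // changelog values are longs, so use the Long deserializer instead of the String default
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG,
                  "org.apache.kafka.common.serialization.LongDeserializer");

        try (KafkaConsumer<String, Long> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("wordcount-Counts-changelog")); // placeholder topic
            ConsumerRecords<String, Long> records = consumer.poll(10000);
            for (ConsumerRecord<String, Long> record : records) {
                System.out.println(record.key() + " -> " + record.value());
            }
        }
    }
}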
Hi,
have you thought about using Connect to put the data into a store that is
more reasonable for your kind of query requirements?
Best Jan
On 07.06.2017 00:29, Steven Schlansker wrote:
On Jun 6, 2017, at 2:52 PM, Damian Guy wrote:
Steven,
In practice, data shouldn't be migrating that often.
Hi there,
This might be a bug; would you mind opening a JIRA? Copy-pasting the text
below is sufficient.
Thanks
Eno
> On 7 Jun 2017, at 21:38, João Peixoto wrote:
>
> I'm using Kafka Streams 0.10.2.1 and I still see this error
>
> 2017-06-07 20:28:37.211 WARN 73 --- [ StreamThread-1]
> o.a.k.s.p.int
Hi Eno,
On 07.06.2017 22:49, Eno Thereska wrote:
Comments inline:
On 5 Jun 2017, at 18:19, Jan Filipiak wrote:
Hi
just my few thoughts
On 05.06.2017 11:44, Eno Thereska wrote:
Hi there,
Sorry for the late reply, I was out this past week. Looks like good progress
was made with the discus
Hi Steven,
You are right in principle. The thing is that what we shipped in Kafka is just
the low level bare bones that in a sense belong to Kafka. A middle layer that
keeps track of the data is absolutely needed, and it should hopefully hide the
distributed system challenges from the end user.
Comments inline:
> On 5 Jun 2017, at 18:19, Jan Filipiak wrote:
>
> Hi
>
> just my few thoughts
>
> On 05.06.2017 11:44, Eno Thereska wrote:
>> Hi there,
>>
>> Sorry for the late reply, I was out this past week. Looks like good progress
>> was made with the discussions either way. Let me rec
I'm using Kafka Streams 0.10.2.1 and I still see this error
2017-06-07 20:28:37.211 WARN 73 --- [ StreamThread-1]
o.a.k.s.p.internals.StreamThread : Could not create task 0_31. Will
retry.
org.apache.kafka.streams.errors.LockException: task [0_31] Failed to lock
the state directory for t
Thanks Hans.
It would work but the producer will start losing data even when the cluster is
available.
Thanks
Ankit Jain
On Wed, Jun 7, 2017 at 12:56 PM, Hans Jespersen wrote:
> Try adding props.put("max.block.ms", "0");
>
> -hans
>
>
>
> > On Jun 7, 2017, at 12:24 PM, Ankit Jain wrote:
> >
> > H
Try adding props.put("max.block.ms", "0");
-hans
> On Jun 7, 2017, at 12:24 PM, Ankit Jain wrote:
>
> Hi,
>
> We want to use a non-blocking Kafka producer. The producer thread should
> not block if the Kafka cluster is down or not reachable.
>
> Currently, we are setting the following prop
Can someone please help me with this error? It is happening after an upgrade
from 0.8.2 to 0.10.2.1.
It seems to be an issue with my consumers but I cannot determine what is happening.
INFO Kafka commitId : e89bffd6b2eff799
(org.apache.kafka.common.utils.AppInfoParser)
[2017-06-07 12:24:45,497] INFO [mirro
Hi,
We want to use a non-blocking Kafka producer. The producer thread should
not block if the Kafka cluster is down or not reachable.
Currently, we are setting the following properties but the producer thread
still blocks if the Kafka cluster goes down or is unreachable.
* props.put("block.on
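For reference, a minimal standalone sketch of Hans's max.block.ms suggestion (the broker address, topic, and serializers are placeholders); with max.block.ms=0 the send() call fails fast, either throwing or completing the callback with an error, instead of blocking the calling thread:

import java.util.Properties;
import org.apache.kafka.clients.producer.*;

public class NonBlockingProducerExample {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder
        props.put(ProducerConfig.MAX_BLOCK_MS_CONFIG, "0"); // do not block waiting for metadata or buffer space
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG,
                  "org.apache.kafka.common.serialization.StringSerializer");
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG,
                  "org.apache.kafka.common.serialization.StringSerializer");

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            producer.send(new ProducerRecord<>("test-topic", "key", "value"), // placeholder topic
                (metadata, exception) -> {
                    if (exception != null) {
                        // cluster unreachable: we find out immediately instead of blocking
                        System.err.println("send failed: " + exception);
                    }
                });
        }
    }
}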
Hi all,
Thanks a lot for your help.
A bug has been logged for the said issue and can be found at:
https://issues.apache.org/jira/browse/KAFKA-5401
Thanks again.
On Sun, Jun 4, 2017 at 6:38 PM, Martin Gainty wrote:
>
>
> From: IT Consultant <0binarybudd...@gm
Thank you for the idea, I'll keep that in mind if I run into limitations of
my current approach.
> On Jun 6, 2017, at 5:50 PM, Guozhang Wang wrote:
>
> Thanks Steven, interesting use case.
>
> The current streams state store metadata discovery is assuming the
> `DefaultStreamPartitioner` is use
The right way to see the changelog persisted by RocksDB is to use the
ByteArrayDeserializer and then decode the bytes from hex to a string:
props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG,
    "org.apache.kafka.common.serialization.ByteArrayDeserializer");
for (ConsumerRecord<byte[], byte[]> record : consumerRecords) {
    System.out.println(bytesToHexString(record.value()));
}
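The bytesToHexString helper isn't defined in the snippet; a plausible minimal version (purely illustrative) would be:

// hypothetical helper matching the call above: renders raw bytes as a hex string
static String bytesToHexString(byte[] bytes) {
    StringBuilder sb = new StringBuilder();
    for (byte b : bytes) {
        sb.append(String.format("%02x", b));
    }
    return sb.toString();
}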
Hi,
I have followed the instructions you detailed and I could create topics,
which were getting a leader and were properly replicated.
I think the problem I experienced was due to some old temporary
communication problems between Kafka and ZooKeeper. But that's only a guess.
Thanks a lot Mohammed f
Mahendra,
Did increasing those two properties do the trick? I am running into this
exact issue testing Streams out on a single Kafka instance. Yet I can
manually start a consumer and read the topics fine while it's busy doing
this dead stuff.
On Tue, May 23, 2017 at 12:30 AM, Mahendra Kariya <
I added some logging to StoreChangeLog:
for (K k : this.dirty) {
    V v = getter.get(k);
    log.info("logChange key:{}, value:{}", k, v);
    collector.send(new ProducerRecord<>(this.topic, this.partition, k, v),
        keySerializer, valueSerializer);
}
and found the printed result is normal, just some bytes:
I'm running WordCountProcessorDemo with the Processor API and changed a couple
of things:
1. configured 1 stream thread and 1 replica
2. changed inMemory() to persistent() (see the sketch below)
My Kafka version is 0.10.0.0. After running the streaming application, I check
the message output with the console consumer:
➜ kafka_2.10-0.10.0.0 bin/kafka-c
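For reference, a hedged sketch of the change in step 2 using the 0.10.0.0 Stores builder (the store name and value type are assumed from the demo, and the usual Stores/StateStoreSupplier imports are implied):

// original (assumed): in-memory store
StateStoreSupplier inMemoryCounts = Stores.create("Counts")
        .withStringKeys()
        .withIntegerValues()
        .inMemory()
        .build();

// changed to a persistent, RocksDB-backed store
StateStoreSupplier persistentCounts = Stores.create("Counts")
        .withStringKeys()
        .withIntegerValues()
        .persistent()
        .build();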
I tried to use a TimestampExtractor that uses our timestamps from the
messages, and a 'map' operation on the KTable to set it to the current time, to
have a precise point where I discard our original timestamps. That does not
work (I verified by writing a separate Java Kafka consumer and spitting out
the t
Hi,
Is it possible to give ACLs for a regular expression of group names?
For example, I want to give Read access to all group names with the prefix
DNS*.
Thanks