+1
On Wed, 1 Mar 2017 at 07:15, Guozhang Wang wrote:
> Hey Steven,
>
> That is a good question, and I think your proposal makes sense. Could you
> file a JIRA for this change to keep track of it?
>
> Guozhang
>
> On Tue, Feb 28, 2017 at 1:35 PM, Steven Schlansker <
> sschlans...@opentable.com> wrote:
Contact the author via GitHub if the README is not clear.
-hans
Sent from my iPhone
> On Mar 1, 2017, at 7:01 AM, VIVEK KUMAR MISHRA 13BIT0066
> wrote:
>
> Hi All,
>
> I want to use kafka-connect-salesforce but I am not able to use it.
> Can anyone provide steps for how to use it?
>
> Thank you.
Hey Steven,
That is a good question, and I think your proposal makes sense. Could you
file a JIRA for this change to keep track of it?
Guozhang
On Tue, Feb 28, 2017 at 1:35 PM, Steven Schlansker <
sschlans...@opentable.com> wrote:
> Hi everyone, running with Kafka Streams 0.10.2.0, I see this e
You are right. If all replicas happen to fail at the same time, then even
with acks=all your acknowledged messages may still be lost. As the Kafka
replication documentation states, with N replicas Kafka can tolerate N-1
concurrent failures; hence if you are really unlucky and get N concurrent
failures then
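The durability trade-off discussed above is controlled by two real Kafka configs: the producer's `acks` setting and the topic/broker-level `min.insync.replicas`. A minimal sketch using plain `java.util.Properties` (the values shown are only illustrative, not recommendations):

```java
import java.util.Properties;

public class ProducerDurabilityConfig {
    // Producer-side setting: wait for acknowledgement from all in-sync replicas.
    static Properties producerProps() {
        Properties props = new Properties();
        props.put("acks", "all");
        return props;
    }

    // Topic/broker-side setting (normally applied via topic configuration,
    // shown here only for illustration): with min.insync.replicas=2, an
    // acks=all write is rejected once the in-sync replica set shrinks below
    // 2, trading availability for durability.
    static Properties topicConfig() {
        Properties props = new Properties();
        props.put("min.insync.replicas", "2");
        return props;
    }

    public static void main(String[] args) {
        System.out.println(producerProps().getProperty("acks")); // all
    }
}
```

Even with both set, the N-concurrent-failures scenario above remains possible; the configs only bound how unlucky you have to be.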
Hi All,
I want to use kafka-connect-salesforce but I am not able to use it.
Can anyone provide steps for how to use it?
Thank you.
Hi,
Maybe you are looking for something like https://github.com/uber/chaperone ?
See also:
* https://issues.apache.org/jira/browse/KAFKA-260
*
https://sematext.com/blog/2016/06/07/kafka-consumer-lag-offsets-monitoring/
Otis
--
Monitoring - Log Management - Alerting - Anomaly Detection
Solr & Ela
Hi Guozhang,
Thanks for your reply. I’m still confused. I checked the source code: Kafka
just uses FileChannel.write(buffer) to write the data on the broker, which
puts the data in memory (the OS page cache) instead of on disk. Only if you
call FileChannel.force(true) will it flush the data to the disk. I
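The write-versus-force distinction can be demonstrated with plain NIO; this is a standalone sketch, not Kafka's code:

```java
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

public class FlushDemo {
    // Write bytes and force them to stable storage; returns the file size
    // after the fsync.
    static long writeDurably(Path path, byte[] data) throws IOException {
        try (FileChannel ch = FileChannel.open(path, StandardOpenOption.WRITE)) {
            ch.write(ByteBuffer.wrap(data)); // lands in the OS page cache; not yet durable
            ch.force(true);                  // fsync: blocks until data and metadata reach disk
        }
        return Files.size(path);
    }

    public static void main(String[] args) throws IOException {
        Path tmp = Files.createTempFile("flush-demo", ".log");
        System.out.println("bytes on disk: " + writeDurably(tmp, "record".getBytes()));
        Files.delete(tmp);
    }
}
```

Kafka deliberately relies on the page cache and replication for durability rather than forcing every write; the broker's flush behavior is tunable but not per-write.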
Adding partitions:
You should not add partitions at runtime -- it might break the semantics
of your application because it might "mess up" your hash partitioning.
Cf.
https://cwiki.apache.org/confluence/display/KAFKA/FAQ#FAQ-HowtoscaleaStreamsapp,i.e.,increasenumberofinputpartitions?
If you are s
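To see why adding partitions "messes up" hash partitioning: the default strategy maps a key to `hash(key) mod numPartitions`, so changing the partition count can move existing keys to different partitions. A simplified sketch (Kafka's real default partitioner uses murmur2 over the serialized key bytes, not `String.hashCode`):

```java
public class PartitionDemo {
    // Simplified stand-in for Kafka's default partitioner; the
    // mod-by-partition-count step is the part that matters here.
    static int partitionFor(String key, int numPartitions) {
        return Math.floorMod(key.hashCode(), numPartitions);
    }

    public static void main(String[] args) {
        // The same key can land on a different partition once the count
        // changes, which silently breaks the co-partitioning assumptions
        // of a Streams application.
        for (String key : new String[]{"user-a", "user-b", "user-c"}) {
            System.out.println(key + ": 10 partitions -> " + partitionFor(key, 10)
                    + ", 12 partitions -> " + partitionFor(key, 12));
        }
    }
}
```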
org.apache.kafka.common.errors.TimeoutException: Expiring 1 record(s) for
positionevent-6 due to 30003 ms has passed since batch creation plus linger
time
Sorry, I misunderstood your question.
For a non-logged store, in case of failure, we wipe out the entire state
(IIRC) -- thus, you will start with an empty state after recovery.
-Matthias
On 2/28/17 1:36 PM, Steven Schlansker wrote:
> Thanks Matthias for this information. But it seems you are
Thanks Matthias for this information. But it seems you are talking about a
logged store, since you mention the changelog topic and replaying it and
whatnot.
But my question specifically was about *unlogged* state stores, where there is
no
such changelog topic available. Sorry if that wasn't cl
Hi everyone, running with Kafka Streams 0.10.2.0, I see this every commit
interval:
2017-02-28T21:27:16.659Z INFO <> [StreamThread-1]
o.a.k.s.p.internals.StreamThread - stream-thread [StreamThread-1] Committing
task StreamTask 1_31
2017-02-28T21:27:16.659Z INFO <> [StreamThread-1]
o.a.k.s.p.in
Hello,
I have a few questions that I couldn't find answers to in the documentation:
* Can added partitions be auto-discovered by kafka-streams? In my informal
tests I have had to restart the stream nodes.
* Is it possible to rewind the consumer for a particular topic-partition?
e.g.
We already did some improvements for 0.10.2, which was released last week.
We plan more improvements for 0.10.3/0.11.0 (whatever the next release
number will be).
Feedback like this is very valuable to us! We are aware of some issues,
but rely on people to report problems and ideas on how to improve
Thanks Matthias. I read a few weeks ago that there were some enhancements
planned on the rebalancing feature for the next release. It would be a great
addition!
-Nicolas
2017-02-28 18:15 GMT+01:00 Matthias J. Sax :
> Hi Nicolas,
>
> an optimization like this would make a lot of sense. We did have
Tainji,
Streams provides at-least-once processing guarantees. Thus, all
flush/commits must be aligned -- otherwise, this guarantee might break.
-Matthias
On 2/28/17 6:40 AM, Damian Guy wrote:
> Hi Tainji,
>
> The changelogs are flushed on the commit interval. It isn't currently
> possible to
Hi Nicolas,
an optimization like this would make a lot of sense. We did have some
discussions around this already. However, it's more tricky to do than it
seems at first glance. We hope to introduce something like this for the
next release.
-Matthias
On 2/28/17 9:10 AM, Nicolas Fouché wrote:
If a store is backed by a changelog topic, the changelog topic is
responsible for holding the latest state of the store. Thus, the topic must
store the latest value per key. For this, we use a compacted topic.
In case of restore, the local RocksDB store is cleared so it is empty,
and we read the compl
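The restore step can be sketched as a simple replay where the last write per key wins — the same invariant a compacted topic preserves. (A simplified illustration; the real code streams records from the changelog topic into RocksDB.)

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class ChangelogRestoreDemo {
    // Replay changelog records (oldest first) into a fresh store: the last
    // write per key wins, exactly the state a compacted topic retains.
    static Map<String, String> restore(List<Map.Entry<String, String>> changelog) {
        Map<String, String> store = new HashMap<>();
        for (Map.Entry<String, String> rec : changelog) {
            store.put(rec.getKey(), rec.getValue());
        }
        return store;
    }

    public static void main(String[] args) {
        Map<String, String> store = restore(List.of(
                Map.entry("k1", "v1"),
                Map.entry("k2", "v2"),
                Map.entry("k1", "v3"))); // later write for k1 supersedes "v1"
        System.out.println(store.get("k1")); // v3 -- only the latest value survives
    }
}
```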
Hello,
We are running kafka 0.9.0.1 and using the simple consumer to consume
from a topic with 8 partitions. The consumer JVM infrequently runs into
large gc pauses (60s to 90s, stop the world gc). These gcs are unrelated
to kafka. We are usually consuming at 5k messages/sec on the topic.
Rig
Hi,
I have 10 Kafka Streams processes which consume a topic with 10 partitions,
with a few changelog topics.
Let's say these processes are all stopped, and I start them nearly at the
same time (in a matter of seconds).
The first process seems to start initializing all state stores, which takes
1
Thanks Ismael,
I've created https://issues.apache.org/jira/browse/KAFKA-4814
Kind regards,
Stevo Slavic.
On Tue, Feb 28, 2017 at 5:26 PM, Ismael Juma wrote:
> Hi Stevo,
>
> That looks like a bug, can you please file a JIRA?
>
> Ismael
>
> On Mon, Feb 27, 2017 at 3:03 PM, Stevo Slavić wrote:
>
Hi Stevo,
That looks like a bug, can you please file a JIRA?
Ismael
On Mon, Feb 27, 2017 at 3:03 PM, Stevo Slavić wrote:
> Hello Apache Kafka community,
>
> There's nice documentation on enabling ZooKeeper security on an existing
> Apache Kafka cluster at
> https://kafka.apache.org/documentat
> On Feb 28, 2017, at 12:17 AM, Michael Noll wrote:
>
> Sachin,
>
> disabling (change)logging for state stores disables the fault-tolerance of
> the state store -- i.e. changes to the state store will not be backed up to
> Kafka, regardless of whether the store uses a RocksDB store, an in-memor
Hi Tainji,
The changelogs are flushed on the commit interval. It isn't currently
possible to change this.
Thanks,
Damian
On Tue, 28 Feb 2017 at 14:00 Tianji Li wrote:
> Hi Guys,
>
> Thanks very much for your help.
>
> A final question, is it possible to use different commit intervals for
> st
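For reference, the single knob involved here is the real Kafka Streams property `commit.interval.ms`; because changelog and sink topics are flushed on the same commit cycle, they cannot be given separate intervals. A minimal sketch (the value is only illustrative):

```java
import java.util.Properties;

public class CommitIntervalConfig {
    // commit.interval.ms governs the one shared commit cycle on which both
    // state-store changelogs and sink topics are flushed, so the two cannot
    // be configured independently.
    static Properties streamsProps() {
        Properties props = new Properties();
        props.put("commit.interval.ms", "30000"); // 30s, an illustrative value
        return props;
    }

    public static void main(String[] args) {
        System.out.println(streamsProps().getProperty("commit.interval.ms")); // 30000
    }
}
```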
Hi Vishnu,
I'd suggest you take a look at the broker configuration value
"offsets.retention.minutes".
The consumer offsets are stored in the __consumer_offsets topic.
__consumer_offsets is a compacted topic (cleanup.policy=compact), where
the key is the combination of the consumer group, the top
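The broker setting mentioned above lives in `server.properties`; the value below is only an example, not a recommendation:

```properties
# How long the broker retains committed offsets for a consumer group
# (example value: 7 days, expressed in minutes).
offsets.retention.minutes=10080
```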
Hi Guys,
Thanks very much for your help.
A final question, is it possible to use different commit intervals for
state-store change-logs topics and for sink topics?
Thanks
Tianji
Actually my data sources are Salesforce and Mailchimp. I have developed an
API that will fetch data from them, but now I want any changes in the data
of the Salesforce and Mailchimp sources to be reflected in my topic data.
On Tue, Feb 28, 2017 at 5:53 PM, Phili
Watch some videos from Ewan Cheslack-Postava.
https://www.google.ca/webhp?sourceid=chrome-instant&rlz=1C5CHFA_enCA523CA566&ion=1&espv=2&ie=UTF-8#q=youtube+ewan+Cheslack-Postava&*
On Tue, Feb 28, 2017 at 6:55 AM, VIVEK KUMAR MISHRA 13BIT0066 <
vivekkumar.mishra2...@vit.ac.in> wrote:
> Hi All,
>
>
Hi All,
What is the use of Kafka Connect?
Sachin,
disabling (change)logging for state stores disables the fault-tolerance of
the state store -- i.e. changes to the state store will not be backed up to
Kafka, regardless of whether the store uses a RocksDB store, an in-memory
store, or something else
> When disabling this in 0.10.2 what d