Hi Mathieu,
What version of Kafka are you using? There was recently a fix that went into
trunk; just checking whether you're on an older version.
(To make forward progress, you can turn the cache off, like this:
streamsConfiguration.put(StreamsConfig.CACHE_MAX_BYTES_BUFFERING_CONFIG, 0);
)
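For reference, a minimal self-contained sketch of that workaround, using plain `java.util.Properties` with the literal config name behind `StreamsConfig.CACHE_MAX_BYTES_BUFFERING_CONFIG` (the application id and broker address are placeholders, not from this thread):

```java
import java.util.Properties;

public class DisableCache {
    // Builds a Streams configuration with the record cache disabled.
    // "cache.max.bytes.buffering" is the string constant behind
    // StreamsConfig.CACHE_MAX_BYTES_BUFFERING_CONFIG.
    static Properties streamsConfig() {
        Properties props = new Properties();
        props.put("application.id", "my-streams-app");    // placeholder
        props.put("bootstrap.servers", "localhost:9092"); // placeholder
        props.put("cache.max.bytes.buffering", 0);        // turn the cache off
        return props;
    }

    public static void main(String[] args) {
        System.out.println(streamsConfig().get("cache.max.bytes.buffering"));
    }
}
```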
Thanks
E
Hey all,
I've just been running a quick test of my kafka-streams application on the
latest Kafka trunk (@e43bbce) and came across this error. I was wondering
if anyone has seen it before, has any thoughts on what might cause it, or
can suggest a direction for investigating it further.
Ful
Original message From: Guozhang Wang
Date: 12/2/16 5:13 PM (GMT-06:00) To: users@kafka.apache.org Subject: Re:
Initializing StateStores takes *really* long for large datasets
Before we have a single-k
Original message From: Guozhang Wang
Date: 12/2/16 5:48 PM (GMT-06:00) To: users@kafka.apache.org Subject: Re:
Kafka windowed table not aggregating correctly
Sachin,
One thing to note is that the retenti
Hi folks,
How can I collect Kafka Connect metrics from Confluent? Is there an API to
use?
In addition, if one file is very big, can multiple tasks work on the same
file simultaneously?
Thanks,
Will
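On the metrics question: Connect workers expose their metrics over JMX, so one option is to query the MBean server. A minimal sketch (the `kafka.*:*` domain pattern is an assumption; worker metrics typically show up under domains like `kafka.consumer` and `kafka.producer`, and this only works in-process or over a separately configured remote JMX connection):

```java
import java.lang.management.ManagementFactory;
import java.util.Set;
import javax.management.MBeanServer;
import javax.management.MalformedObjectNameException;
import javax.management.ObjectName;

public class ListMetrics {
    // Returns all MBean names matching the given pattern on the local
    // platform MBean server.
    static Set<ObjectName> listMetrics(String pattern) throws MalformedObjectNameException {
        MBeanServer server = ManagementFactory.getPlatformMBeanServer();
        return server.queryNames(new ObjectName(pattern), null);
    }

    public static void main(String[] args) throws Exception {
        // Assumed domain pattern for Kafka client metrics registered via JMX.
        for (ObjectName name : listMetrics("kafka.*:*")) {
            System.out.println(name);
        }
    }
}
```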
Vincenzo
Nota bene: you can *force* resolution of a good version (3.4.8) to be used
by all artifacts downstream from your parent pom.xml. The mechanism is
called dependency management:
https://maven.apache.org/guides/introduction/introduction-to-dependency-mechanism.html#Dependency_Management
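A sketch of what that looks like in the parent pom.xml, pinning ZooKeeper 3.4.8 for all child modules (standard Maven coordinates for ZooKeeper; adapt to your own parent POM):

```xml
<!-- In the parent pom.xml: every downstream module that declares a
     dependency on org.apache.zookeeper:zookeeper (directly or
     transitively) resolves to 3.4.8. -->
<dependencyManagement>
  <dependencies>
    <dependency>
      <groupId>org.apache.zookeeper</groupId>
      <artifactId>zookeeper</artifactId>
      <version>3.4.8</version>
    </dependency>
  </dependencies>
</dependencyManagement>
```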
Good luck
Mar
If the data volumes are low, or you just want a quick prototype as a proof of
concept, you could use existing tools like Node-RED to connect the various input
protocols, with Kafka as the output protocol. For example, install from
http://nodered.org, then install node-red-contrib-modbus, then install
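A sketch of those first two install steps using the standard npm commands from the Node-RED docs (`~/.node-red` is the conventional user directory for palette nodes; the message above is cut off before naming the remaining package):

```shell
# Install Node-RED globally (per the http://nodered.org docs).
npm install -g --unsafe-perm node-red

# Install the Modbus nodes into the Node-RED user directory.
cd ~/.node-red
npm install node-red-contrib-modbus
```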
I found what's wrong, well... finally! The consumer application should load
the received data into a Solr instance. Incidentally, my version of Solr is
SolrCloud, and the SolrJ client uses ZooKeeper, a different version of
ZooKeeper... Now I specified in my pom.xml the same version of ZooKeeper 3.4.8 I
Dear all gurus,
I'm new to Kafka, and I'm going to connect the real-time data streaming from
power system supervision and control devices to Kafka via different
communication protocols, for example Modbus, DNP, or IEC 61850, and then to a
Storm processing system.
I'm wondering how I can get these data
I suppose the topic won't be deleted, but this would be a rare enough
occurrence that there won't be too many dormant topics hanging around.
Alternatively perhaps I can store the undeleted topics somewhere, and
whenever a new node starts, it could check this list and delete them.
On Sat, Dec 3, 2
Not sure; would need to think about it more. However, the default commit
interval in Streams is 30 sec. You can configure it via StreamsConfig's
COMMIT_INTERVAL_MS_CONFIG. So using the additional thread and waiting for 5
minutes sounds OK. The question is: what would happen if the JVM goes down
before you delete the
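The "extra thread plus grace period" idea being discussed can be sketched with a plain `ScheduledExecutorService` (all names here are hypothetical; the actual deletion step is left abstract, since this thread predates a Java admin API for topic deletion, and in modern clients it could be `AdminClient.deleteTopics`):

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class DelayedDelete {
    // Runs deleteAction after delayMs milliseconds on a background thread,
    // giving the consumer time to commit its final offsets first.
    static ScheduledExecutorService scheduleDelete(Runnable deleteAction, long delayMs) {
        ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
        scheduler.schedule(deleteAction, delayMs, TimeUnit.MILLISECONDS);
        return scheduler;
    }

    public static void main(String[] args) throws InterruptedException {
        CountDownLatch done = new CountDownLatch(1);
        // In the thread's scenario the delay would be ~5 minutes; 100 ms here
        // just demonstrates the mechanics.
        ScheduledExecutorService s = scheduleDelete(() -> {
            System.out.println("deleting topic after grace period"); // placeholder for real deletion
            done.countDown();
        }, 100);
        done.await();
        s.shutdown();
    }
}
```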
Is there a way to make sure the offsets got committed? Perhaps, after the
last msg has been consumed, I can set up a task in another thread to run
after a safe time (say 5 mins?) which would delete the topic. What would
be a safe time to use?
On Sat, Dec 3, 2016 at 3:04 PM, Matthias J. Sax
wrote:
I guess yes. You might only want to make sure the topic offsets got
committed -- not sure if committing offsets of a deleted topic could
cause issues (i.e., crashing your Streams app).
-Matthias
On 12/2/16 11:04 PM, Ali Akhtar wrote:
> Thank you very much. Last q - Is it safe to do this from within a