Hi team,
We are currently using Kafka Connect and have run into a problem with
version 2.1.0: all connectors kept restarting when a new connector was
added to the cluster, and that connector then failed to start due to a
network problem (a firewall port was not open).
And the Connect daemon failed to s…
Sounds like a regression to me.
We did change some code to track partition time differently. Can you
open a Jira?
-Matthias
On 6/26/19 7:58 AM, Jonathan Santilli wrote:
> Sure Bill, it is the same code for which I reported the suppress issue
> some months ago:
> https://stackoverflow.com/
On 2019/06/27 15:39:29, Jan Kosecki wrote:
> Hi,
>
> I have a Hyperledger Fabric cluster that uses a cluster of 3 Kafka nodes +
> 3 ZooKeepers.
> Fabric doesn't support pruning yet, so any change to the recorded offsets is
> detected by it and it fails to reconnect to the Kafka cluster.
> Today mornin…
Hi,
I have a Hyperledger Fabric cluster that uses a cluster of 3 Kafka nodes +
3 ZooKeepers.
Fabric doesn't support pruning yet, so any change to the recorded offsets is
detected by it and it fails to reconnect to the Kafka cluster.
This morning all Kafka nodes went offline and then slowly restarted.
Hi Garvit,
Are the consumers Java? If so, can you take a thread dump every five
seconds, for 30 seconds total, from the affected consumer JVM?
Thanks,
Steve
On Thu, Jun 27, 2019 at 12:00 AM Garvit Sharma wrote:
> I don't think that is the case. The lag is huge ~10^5 records.
>
> On Thu, Jun 27, …
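For reference, the dumps Steve asks for can be captured with a small loop like the one below. This is only a sketch: the PID 12345 and the output file names are placeholder assumptions, and `jstack` is the standard JDK tool for printing a thread dump of a running JVM.

```shell
# Placeholder PID of the affected consumer JVM -- find the real one with `jps`.
PID=12345
# Take 6 dumps, one every 5 seconds (30 seconds total).
for i in 1 2 3 4 5 6; do
  jstack "$PID" > "threaddump-$i.txt"
  sleep 5
done
```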
Hi Kiran
Without much research, my guess would be "num_stream_threads *
(#global_state_stores + sum(#partitions_of_topic_per_local_state_store))".
So 10 stores (regardless of whether they are defined explicitly or implicitly
through some stateful operation) with 10 partitions each should result in 100
RocksDB instances…
On 2019/06/27 09:02:39, Patrik Kleindl wrote:
> Hello Kiran
>
> First, the value for maxOpenFiles is per RocksDB instance, and the number
> of those can get high if you have a lot of topic partitions etc.
> Check the directory (state dir) to see how many there are.
> Start with a low value (100) and see if that has some effect.
Please suggest what I should do to get logging started.
On Thu, Jun 27, 2019 at 4:14 PM Garvit Sharma wrote:
> Broker logs are not showing up in the IntelliJ console.
>
> [image: image.png]
>
> [image: image.png]
>
> Please help.
>
> Thanks,
>
> On Thu, Jun 27, 2019 at 12:04 PM Garvit Shar…
Broker logs are not showing up in the IntelliJ console.
[image: image.png]
[image: image.png]
Please help.
Thanks,
On Thu, Jun 27, 2019 at 12:04 PM Garvit Sharma wrote:
> Hello All,
>
> I am able to run Kafka in debug mode in IntelliJ CE, but the INFO logs are
> not showing up in the console…
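One likely cause (an assumption, not something confirmed in this thread) is that the broker's log4j configuration is not picked up when Kafka is launched from an IDE: the shell scripts pass it as a system property, so an IntelliJ run configuration may need the equivalent VM options. A sketch, assuming the run configuration's working directory is the root of the Kafka checkout:

```
# VM options for the IntelliJ run configuration (paths are assumptions):
-Dlog4j.configuration=file:config/log4j.properties
-Dkafka.logs.dir=logs
```

This mirrors what `kafka-run-class.sh` does when starting a broker from the command line.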
Hello Kiran
First, the value for maxOpenFiles is per RocksDB instance, and the number
of those can get high if you have a lot of topic partitions etc.
Check the directory (state dir) to see how many there are.
Start with a low value (100) and see if that has some effect.
Second, because I just fo…
On 2019/06/26 21:58:02, Patrik Kleindl wrote:
> Hi Kiran
> You can use the RocksDBConfigSetter and pass
>
> options.setMaxOpenFiles(100);
>
> to all RocksDBs for the Streams application which limits how many are
> kept open at the same time.
>
> best regards
>
> Patrik
>
>
> On Wed, 26 J…
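Patrik's suggestion can be sketched as a `RocksDBConfigSetter` implementation. This is a minimal sketch assuming a Kafka Streams 2.x dependency on the classpath; the class name `BoundedOpenFilesConfigSetter` is made up for illustration.

```java
import java.util.Map;

import org.apache.kafka.streams.state.RocksDBConfigSetter;
import org.rocksdb.Options;

// Hypothetical class name; caps open SST files per RocksDB instance.
public class BoundedOpenFilesConfigSetter implements RocksDBConfigSetter {

    @Override
    public void setConfig(final String storeName,
                          final Options options,
                          final Map<String, Object> configs) {
        // -1 (the default) means unlimited open files, which can exhaust
        // OS file handles when many state store partitions exist.
        options.setMaxOpenFiles(100);
    }
}
```

It would be registered with `props.put(StreamsConfig.ROCKSDB_CONFIG_SETTER_CLASS_CONFIG, BoundedOpenFilesConfigSetter.class);` and, as noted above, the limit applies per RocksDB instance, so the effective process-wide total is roughly the instance count times 100.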