This indicates that your Parser topology is not keeping up with the volume of telemetry it is consuming. You will need to do some performance tuning of the topology.

- How much telemetry are you trying to parse (in events per second)? A quick way to estimate this is sketched after the quoted thread below.
- If you send only a low volume of telemetry, does it work as-is?
- What are your current Parser settings?
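For context, the settings that usually matter for parser throughput are the spout/parser parallelism and the writer batch size in the sensor's parser config (stored in ZooKeeper and editable with the Management UI or zk_load_configs.sh). A minimal sketch, assuming a Bro sensor; the key names below are the standard Metron parser config keys if I remember them correctly, but the values are purely illustrative and not a recommendation for your cluster:

    {
      "parserClassName": "org.apache.metron.parsers.bro.BasicBroParser",
      "sensorTopic": "bro",
      "spoutParallelism": 3,
      "parserParallelism": 5,
      "errorWriterParallelism": 1,
      "parserConfig": {
        "batchSize": 200,
        "batchTimeout": 5
      }
    }

One thing to keep in mind: raising spoutParallelism only helps up to the number of partitions on the input Kafka topic, so make sure the topic has at least as many partitions as spout executors; parserParallelism can go higher than that.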
On Mon, Nov 4, 2019 at 1:05 AM updates on tube <[email protected]> wrote:

> still the same
>
> On 2019/11/01 16:52:08, "Yerex, Tom" <[email protected]> wrote:
> > I am working from memory so I am not entirely certain, but I think we
> > had a similar error that was resolved by increasing the JVM heap for
> > Elasticsearch from the default. In Ambari, under “Advanced
> > elastic-jvm-options”, the “heap_size” setting. In our environment it is
> > set to 2048m.
> >
> > From: updates on tube <[email protected]>
> > Reply-To: "[email protected]" <[email protected]>
> > Date: Friday, November 1, 2019 at 8:42 AM
> > To: "[email protected]" <[email protected]>
> > Subject: apache storm error
> >
> > worker1.sip.com 6700
> >
> > java.util.concurrent.ExecutionException: org.apache.kafka.common.errors.TimeoutException: Failed to update metadata after 60000 ms.
> >     at org.apache.kafka.clients.producer.KafkaProducer$FutureFailure.<init>(KafkaProducer.java:730)
> >     at org.apache.kafka.clients.producer.KafkaProducer.doSend(KafkaProducer.java:483)
> >     at org.apache.kafka.clients.producer.KafkaProducer.send(KafkaProducer.java:430)
> >     at org.apache.kafka.clients.producer.KafkaProducer.send(KafkaProducer.java:353)
> >     at org.apache.metron.writer.kafka.KafkaWriter.write(KafkaWriter.java:258)
> >     at org.apache.metron.writer.BulkWriterComponent.flush(BulkWriterComponent.java:123)
> >     at org.apache.metron.writer.BulkWriterComponent.applyShouldFlush(BulkWriterComponent.java:179)
> >     at org.apache.metron.writer.BulkWriterComponent.write(BulkWriterComponent.java:99)
> >     at org.apache.metron.parsers.bolt.WriterHandler.write(WriterHandler.java:90)
> >     at org.apache.metron.parsers.bolt.WriterBolt.execute(WriterBolt.java:90)
> >     at org.apache.storm.daemon.executor$fn__10195$tuple_action_fn__10197.invoke(executor.clj:735)
> >     at org.apache.storm.daemon.executor$mk_task_receiver$fn__10114.invoke(executor.clj:466)
> >     at org.apache.storm.disruptor$clojure_handler$reify__4137.onEvent(disruptor.clj:40)
> >     at org.apache.storm.utils.DisruptorQueue.consumeBatchToCursor(DisruptorQueue.java:472)
> >     at org.apache.storm.utils.DisruptorQueue.consumeBatchWhenAvailable(DisruptorQueue.java:451)
> >     at org.apache.storm.disruptor$consume_batch_when_available.invoke(disruptor.clj:73)
> >     at org.apache.storm.daemon.executor$fn__10195$fn__10208$fn__10263.invoke(executor.clj:855)
> >     at org.apache.storm.util$async_loop$fn__1221.invoke(util.clj:484)
> >     at clojure.lang.AFn.run(AFn.java:22)
> >     at java.lang.Thread.run(Thread.java:745)
> > Caused by: org.apache.kafka.common.errors.TimeoutException: Failed to update metadata after 60000 ms.
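P.S. To put a number on the events-per-second question: sample the latest offsets on the sensor's input topic twice and divide the difference by the interval. A rough sketch, assuming an HDP install with a Kafka broker at worker1.sip.com:6667 and a sensor topic named "bro" (substitute your own broker and topic):

    cd /usr/hdp/current/kafka-broker

    # Sum of the latest offsets across all partitions, sampled 60 seconds apart
    bin/kafka-run-class.sh kafka.tools.GetOffsetShell \
        --broker-list worker1.sip.com:6667 --topic bro --time -1
    sleep 60
    bin/kafka-run-class.sh kafka.tools.GetOffsetShell \
        --broker-list worker1.sip.com:6667 --topic bro --time -1

    # (second sum - first sum) / 60 ~= events per second arriving on the topic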
