So the option I was missing was `topology.max.spout.pending`.
It was unset (null), so the kafka-spout read messages from Kafka with no
throttling at all.
After setting it to a saner value of 1000, both the ClosedChannelException
and the OutOfMemoryError went away.
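A back-of-envelope calculation shows why this setting matters: `topology.max.spout.pending` caps the number of un-acked tuples in flight, which in turn bounds how much message data a spout can hold at once. This sketch just plugs in the numbers from this thread (1000 pending, ~500 KB worst-case message size); with the option unset, there is no such bound, which matches the OutOfMemory symptom.

```python
# Worst-case in-flight data for one spout task, using the values
# discussed in this thread (these are assumptions for illustration):
#   topology.max.spout.pending = 1000
#   max message size           = ~500 KB

MAX_SPOUT_PENDING = 1000
MAX_MESSAGE_BYTES = 500 * 1024

def max_inflight_bytes(pending, msg_bytes):
    """Upper bound on bytes held by un-acked tuples for one spout task."""
    return pending * msg_bytes

bound = max_inflight_bytes(MAX_SPOUT_PENDING, MAX_MESSAGE_BYTES)
print(f"worst-case in-flight data per spout: {bound / 2**20:.0f} MiB")
# With max.spout.pending unset, this bound does not exist and the
# spout can outrun the bolt until the worker runs out of heap.
```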
Are you using acking and/or do you have back-pressure enabled? Your worker
crashed because it exceeded the GC overhead limit, which by default in Java
means that you were spending more than 98% of the time doing GC and only 2% of
the time doing real work. I am rather surprised that the supervisor
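For context, HotSpot's check behind that error is governed by the flags `-XX:+UseGCOverheadLimit`, `-XX:GCTimeLimit` (default 98) and `-XX:GCHeapFreeLimit` (default 2). This is a simplified sketch of the condition, not the real JVM code:

```python
# Simplified model of the JVM's "GC overhead limit exceeded" check.
# Defaults in HotSpot: GCTimeLimit=98 (% of time in GC),
# GCHeapFreeLimit=2 (% of heap recovered by a collection).

def gc_overhead_limit_exceeded(gc_time_pct, heap_freed_pct,
                               gc_time_limit=98, heap_free_limit=2):
    """True when the JVM would throw OutOfMemoryError: GC overhead limit exceeded."""
    return gc_time_pct > gc_time_limit and heap_freed_pct < heap_free_limit

print(gc_overhead_limit_exceeded(99, 1))   # → True: the state the worker was in
print(gc_overhead_limit_exceeded(50, 40))  # → False: healthy GC behaviour
```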
Hi,
I am using Storm 1.0.2
My configuration is quite simple: `kafka-spout` feeding to `solr-bolt`
topology.workers = 2
spout.parallelism = 1
bolt.parallelism = 1
Our messages coming from Kafka are large: around 100 KB each, up to a
maximum of 500 KB per message.
But I see lots of errors: