Hi all,
I'm pretty new to Storm and Kafka/Zookeeper, and I hope that my question is
not too dumb. Here it goes:
I'm using latest stable storm and storm-kafka = 0.9.2-incubating
I've set up a test cluster using the wirbelsturm tool with an unchanged yaml (just
uncommented the kafka machine).
Here is a config snippet:
Try lowering setMaxSpoutPending to a much lower value (like 10). In
Trident, setMaxSpoutPending refers to the number of batches, not tuples
as in plain Storm. Values that are too high may cause blockages like the one
you describe.
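For reference, here is a minimal sketch of setting this on the topology config (assuming the `backtype.storm` package of Storm 0.9.x; the value is just an example starting point):

```java
import backtype.storm.Config;

// Sketch: cap the number of in-flight Trident batches. In Trident,
// maxSpoutPending counts *batches*, not tuples, so even small values
// throttle throughput substantially.
Config conf = new Config();
conf.setMaxSpoutPending(10);
```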
On Tuesday, July 8, 2014, Miloš Solujić wrote:
> Hi all,
>
> I'
Thanks Danijel for your quick proposition.
I tried lowering it and then removing all performance settings (those were
left over from load testing on one machine).
Still same result: no matter what, new messages are not taken from kafka
after topology is redeployed.
On Tue, Jul 8, 2014 at 6:15 PM, Danije
Are you sure you are producing new messages into the same Kafka topic? What
number did you set maxSpoutPending to?
On Tuesday, July 8, 2014, Miloš Solujić wrote:
> Thanks Danijel for your quick proposition.
>
> I tried lowering down and removing all performance settings (those were
> left from l
Yep, pretty much sure. Via the internal kafka-producer.sh.
The same method was used to produce the initial messages (before the first launch of
the topology), and those got consumed and processed just fine.
As for maxSpoutPending, first I tried 10, then removed it (left the default
value).
On Tue, Jul 8, 2014 at 6:31 PM, Da
I'd double-check the Kafka producer to make sure those messages are really
getting into the right Kafka topic. Also,
try enabling Config.setDebug(true) and monitoring the Kafka spout's
activity in the logs. setMaxSpoutPending should always be set explicitly, as by
default it is unset, so you risk internal que
Also, you should paste all your worker logs (worker-*.log files).
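A sketch of the debug setting mentioned above (again assuming Storm 0.9.x's `backtype.storm` package):

```java
import backtype.storm.Config;

Config conf = new Config();
// Log every emitted/acked tuple to the worker-*.log files, so you can
// see whether the Kafka spout emits anything at all after the redeploy.
conf.setDebug(true);
```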
On Tuesday, July 8, 2014, Danijel Schiavuzzi wrote:
> I'd double check the Kafka producer to make sure those messages are really
> getting into the right Kafka topic. Also,
> try enabling Config.setDebug(true) and monitoring the K
Yep, I did double-check.
Here is how it's done:
#create topic
/opt/kafka/bin/kafka-topics.sh --create --zookeeper zookeeper1:2181
--replication-factor 1 --partitions 1 --topic scores
#check what is created
/opt/kafka/bin/kafka-topics.sh --zookeeper zookeeper1:2181 --describe
--topic scores
#pr
Very strange. Could you try deleting Trident's data in Zookeeper:
$ sh zkCli.sh
rmr /transactional
and then resubmitting the topology and repeating your test scenario?
Maybe the spout's data in Zookeeper somehow got corrupted because you
are setting forceFromStart in the spout, and resubmitt
Thanks Danijel for taking an interest in my problem.
I had exactly the same feeling (that the Zookeeper data got corrupted), so I
purged the info about it via zkCli.sh.
Now I've got some lower level issues:
2014-07-10 11:00:13 b.s.d.worker [INFO] Worker
04a17a6b-5aea-47ce-808b-218c4bcc1d51 for storm
tridentOpa
Did you kill your topology before clearing the Zookeeper data?
On Jul 10, 2014 1:24 PM, "Miloš Solujić" wrote:
>
> Thanks Danijel for taking interest in my problem.
>
> Exactly same feeling I've got (that zookeeper data is corrupted) So I
purged info about it via zkCli.sh
>
> Now I've got some lo
Yes
On 10 Jul 2014 14:03, "Danijel Schiavuzzi" wrote:
> Did you kill your topology before clearing the Zookeeper data?
>
> On Jul 10, 2014 1:24 PM, "Miloš Solujić" wrote:
> >
> > Thanks Danijel for taking interest in my problem.
> >
> > Exactly same feeling I've got (that zookeeper data is corru
That's right, newStream() stream (spout) names in your topologies must be
unique cluster-wide (because they are included in the Zookeeper node path),
otherwise data corruption may occur as multiple Trident spouts access and
modify the same Zookeeper data.
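A sketch of what a cluster-unique stream name might look like (`kafkaSpout` and the name itself are illustrative; Trident classes as packaged in storm-core 0.9.x):

```java
import storm.trident.Stream;
import storm.trident.TridentTopology;

TridentTopology topology = new TridentTopology();
// The txId passed to newStream() becomes part of the Zookeeper path
// (/transactional/<txId>), so it must be unique across every topology
// running on the cluster -- not just within this one.
Stream scores = topology.newStream("scores-topologyA-spout", kafkaSpout);
```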
The ConnectionException seems to be thrown