Related to redelivery: it depends on the granularity of what you are
comfortable replaying.
If the whole process can easily be replayed, you can let the HTTP
timeout/failure fail the tuple, and track that tuple in your spout for
replaying later.
If you don’t want the whole process to
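For the replay-tracking approach described above, here is a minimal, framework-free sketch of the bookkeeping a spout could do. The class and method names are hypothetical illustrations of the pattern, not Storm API:

```java
import java.util.ArrayDeque;
import java.util.HashMap;
import java.util.Map;
import java.util.Queue;

// Sketch of spout-side replay bookkeeping: remember each emitted message
// by ID; on fail() (e.g. a downstream HTTP timeout), push the ID onto a
// replay queue so nextTuple() can re-emit it before reading new input.
class ReplayTracker {
    private final Map<String, String> pending = new HashMap<>();
    private final Queue<String> replayQueue = new ArrayDeque<>();

    // Record a message when it is first emitted with this ID.
    public void track(String msgId, String payload) {
        pending.put(msgId, payload);
    }

    // On ack, the message is fully processed; forget it.
    public void ack(String msgId) {
        pending.remove(msgId);
    }

    // On fail, queue the ID so the payload can be re-emitted later.
    public void fail(String msgId) {
        if (pending.containsKey(msgId)) {
            replayQueue.add(msgId);
        }
    }

    // Called from nextTuple(): return a payload to re-emit, or null.
    public String nextReplay() {
        String msgId = replayQueue.poll();
        return msgId == null ? null : pending.get(msgId);
    }

    public int pendingCount() {
        return pending.size();
    }
}
```

The key design point is that the payload stays in `pending` until acked, so a fail only needs to re-queue the ID rather than re-fetch the data.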
Hi -
We are seeing workers dying and restarting quite a bit, apparently from Netty
connection issues.
For example, the log below shows:
* Reconnect for worker at 121:6700
* connection established to 121:6700
* closing connection to 121:6700
* Reconnect started to 121:6700
all within 1 second.
, 2014 at 2:06 PM, Tyson Norris <tnor...@adobe.com> wrote:
Hi -
We are seeing workers dying and restarting quite a bit,
apparently from Netty connection issues.
For example, the log below shows:
* Reconnect for worker at 121:6700
Hi -
I am trying to determine if we can support the following use case for
dynamically adjusting our cluster at runtime:
* when a new node is added, existing tasks can be scheduled onto it
* when a node is removed, its tasks are rescheduled to the remaining nodes
This seems to work already, so if I want
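As a point of reference, redistributing an already-running topology after nodes join or leave is typically driven with the `storm rebalance` CLI. A sketch follows; the topology name, component name, and counts are examples, not taken from this thread:

```shell
# Pause the topology, wait 10 seconds for in-flight tuples to drain,
# then redistribute it over 4 workers, raising the "mybolt" component
# to 8 executors. "mytopology" and "mybolt" are example names.
storm rebalance mytopology -w 10 -n 4 -e mybolt=8
```

Here `-w` is the wait time in seconds, `-n` the new worker count, and `-e` sets the executor count for a named component.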
07eede9ec72329fe2cad893d087541b583e11148
-Cody
On Wed, May 28, 2014 at 10:39 AM, Tyson Norris <tnor...@adobe.com> wrote:
Thanks Cody -
I tried the BrightTag fork and still have problems with Storm 0.9.1-incubating
and Kafka 0.8.1; I get an error with my Trident topology (haven't tried
non-Trident
have told me to use it, but I can't find any
documentation on it or any resources on how to use it.
Thanks
On Thu, May 29, 2014 at 12:06 AM, Tyson Norris <tnor...@adobe.com> wrote:
Hi -
Thanks - it turns out that the JSON parsing is actually fine with HEAD,
although
what's available.
On Wed, May 28, 2014 at 3:55 AM, Tyson Norris <tnor...@adobe.com> wrote:
Do Trident variants of kafka spouts do something similar?
Thanks
Tyson
On May 27, 2014, at 3:19 PM, Harsha <st...@harsha.io> wrote:
Raphael,
The Kafka spout sends metrics for kafkaOffset and kafkaPartition; you can
look at those by using LoggingMetrics or by setting up Ganglia. Kafka uses
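For the LoggingMetrics route mentioned above, the built-in consumer is registered on the topology config. A minimal config sketch, assuming the Storm 0.9.x package layout (this fragment requires the Storm jars on the classpath):

```java
import backtype.storm.Config;
import backtype.storm.metric.LoggingMetricsConsumer;

// Register the built-in logging consumer so spout metrics such as
// kafkaOffset and kafkaPartition are written to the worker's metrics log.
public class MetricsConfig {
    public static Config build() {
        Config conf = new Config();
        // The second argument is the consumer's parallelism hint.
        conf.registerMetricsConsumer(LoggingMetricsConsumer.class, 1);
        return conf;
    }
}
```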
from a bolt
then disappearing at the boundary to another worker (when it should be getting
routed to another bolt)?
Thanks
Tyson
On Mar 28, 2014, at 2:15 PM, Tyson Norris <tnor...@adobe.com> wrote:
Hi -
I see the same problem when running a single node with nimbus