[jira] [Commented] (STORM-2359) Revising Message Timeouts

2018-12-21 JIRA


[ 
https://issues.apache.org/jira/browse/STORM-2359?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16726614#comment-16726614
 ] 

Stig Rohde Døssing commented on STORM-2359:
-------------------------------------------

{quote}Acker will be totally clueless if all acks from downstream bolts are 
also lost for the same tuple tree.{quote}

Yes. This is why the sync tuple is needed. If the spout executor doesn't time 
out the tree on its own, then we need a mechanism to deal with trees the acker 
isn't aware of because the init and ack/fail messages were lost. If we don't do 
a sync, the spout will never learn that the acker isn't aware of the tree, and 
the tree will never fail.
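
Roughly, the exchange I have in mind looks like this. All of the names below 
are invented for illustration; none of this is real Storm code:

{code:java}
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical sketch of the sync exchange; every name here is made up.
class SyncSketch {
    // Trees the spout executor still considers in flight, keyed by root id.
    final Map<Long, Object> spoutPending = new ConcurrentHashMap<>();
    // State the acker holds per tree; a missing entry means the init was lost.
    final Map<Long, Long> ackerPending = new ConcurrentHashMap<>();

    // Spout side: periodically ask the acker about every pending root.
    void spoutSync() {
        spoutPending.keySet().forEach(this::ackerOnSync);
    }

    // Acker side: if we have no state for this root, the init (and any
    // subsequent acks) were lost, so the tree can never complete.
    void ackerOnSync(long rootId) {
        if (!ackerPending.containsKey(rootId)) {
            spoutOnUnknownTree(rootId);
        }
    }

    // Spout side: an "unknown" answer means we should fail the tree now
    // instead of waiting for a completion that will never arrive.
    void spoutOnUnknownTree(long rootId) {
        if (spoutPending.remove(rootId) != null) {
            System.out.println("failing lost tree " + rootId);
        }
    }
}
{code}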

{quote} 1) Handle timeouts in SpoutExecutor. It sends a timeout msg to ACKer. 
*+Benefit+*: May not be any better than doing in ACKer. Do you have any 
thoughts about this?{quote}

I don't think there's any benefit to doing this over what we're doing now. 
Currently the spout executor times out the tuple, and the acker doesn't try to 
fail tuples on timeout. Instead it just quietly discards whatever information 
it has about a tree once the pending map rotates a few times. I'm not sure what 
we'd gain from the acker not rotating pending and relying on a timeout tuple 
instead. As I see it, the benefit of moving the timeouts to the acker would be 
the ability to reset timeouts more frequently (e.g. on every ack) without 
increasing load on the spout executor, which we can't do while the spout is 
still handling the timeout.
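
For reference, the rotation-based expiry works roughly like this. This is a 
simplified illustration in the spirit of Storm's RotatingMap, not the actual 
class:

{code:java}
import java.util.HashMap;
import java.util.LinkedList;

// Simplified rotation-based expiry; Storm's real RotatingMap differs in detail.
class Rotating<K, V> {
    private final LinkedList<HashMap<K, V>> buckets = new LinkedList<>();

    Rotating(int numBuckets) {
        for (int i = 0; i < numBuckets; i++) {
            buckets.add(new HashMap<>());
        }
    }

    // New entries always land in the freshest bucket.
    void put(K key, V value) {
        buckets.getFirst().put(key, value);
    }

    // Called on a timer tick. Everything in the oldest bucket is silently
    // dropped -- this is the "quiet discard" described above.
    HashMap<K, V> rotate() {
        HashMap<K, V> expired = buckets.removeLast();
        buckets.addFirst(new HashMap<>());
        return expired;
    }

    // A lookup scans all buckets, since the entry may have aged.
    V get(K key) {
        for (HashMap<K, V> bucket : buckets) {
            V v = bucket.get(key);
            if (v != null) {
                return v;
            }
        }
        return null;
    }
}
{code}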

{quote} 2) Eliminate timeout communication between Spout & ACKer. Let each do 
its own timeout.{quote}

This is how it works now. As you note, there are some benefits to doing it this 
way. An additional benefit is that we can easily reason about the max time to 
fail a tuple on timeout, since the tuple will fail as soon as the spout rotates 
it out of pending. The drawbacks are:
 * Any time the timeout needs to be reset, a message needs to go to both the 
acker and the spout (current path is bolt -> acker -> spout)
 * Since resetting timeouts is reasonably expensive, we don't do it as part of 
regular acks; the user has to call collector.resetTimeout() manually.

The benefit of moving timeouts entirely to the acker is that we can reset 
timeouts automatically on ack. This means that the tuple timeout becomes 
somewhat easier to work with, since tuples won't time out as long as an edge in 
the tree is getting acked occasionally.
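
A rough sketch of what acker-side timeouts with reset-on-ack could look like. 
The field and method names are invented for illustration; this is not the 
current acker:

{code:java}
import java.util.HashMap;
import java.util.Map;

// Hypothetical acker-side timeout tracking; names are made up.
class AckerSketch {
    static class TreeState {
        long ackVal;          // XOR of all anchor ids seen so far
        long lastProgressMs;  // refreshed on every ack
    }

    final Map<Long, TreeState> pending = new HashMap<>();
    final long timeoutMs = 30_000;

    // Every ack both updates the XOR and counts as progress, so a tree only
    // times out if none of its edges have been acked for timeoutMs.
    void onAck(long rootId, long ackVal) {
        TreeState state = pending.computeIfAbsent(rootId, k -> new TreeState());
        state.ackVal ^= ackVal;
        state.lastProgressMs = System.currentTimeMillis();
    }

    // A periodic sweep replaces the fixed bucket rotation.
    void expire(long nowMs) {
        pending.entrySet().removeIf(e ->
            nowMs - e.getValue().lastProgressMs > timeoutMs);
    }
}
{code}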

This is currently more of a loose idea, but in the long run I'd like to try 
adding a feature toggle for more aggressive automatic timeout resets. I'm not 
sure what the performance penalty would be, but it would be nice if the bolts 
could periodically reset the timeout for all queued/in-progress tuples (tuples 
received by the worker, but not yet acked/failed by a collector). Doing this 
would eliminate the degenerate case you mention in the design doc: a bolt takes 
slightly too long to process a tuple, the queued tuples time out, the spout 
reemits them, and the reemits land in the queue behind the already timed-out 
tuples. In some cases the topology then spends all its time processing 
timed-out tuples and makes no progress, even though each individual tuple 
could have been processed within the timeout.
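
A loose sketch of the bolt-side reset idea (again, invented names, not a real 
Storm API):

{code:java}
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical per-bolt reset bookkeeping; nothing here is real Storm code.
class BoltResetSketch {
    // Root ids of tuples this executor has received but not yet acked/failed.
    final Set<Long> inFlight = ConcurrentHashMap.newKeySet();

    void onReceive(long rootId) {
        inFlight.add(rootId);
    }

    void onAckOrFail(long rootId) {
        inFlight.remove(rootId);
    }

    // Driven by a timer tick: report every tree we're still holding as making
    // progress, so queued tuples don't time out behind a slow one.
    void onTimerTick() {
        for (long rootId : inFlight) {
            sendResetToAcker(rootId);
        }
    }

    void sendResetToAcker(long rootId) {
        // Placeholder for the actual message send to the acker.
        System.out.println("reset timeout for tree " + rootId);
    }
}
{code}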

If this idea turns out to be possible to implement without adding a large 
overhead, it would add an extra drawback to timeouts in the spout:
 * We can't do automatic timeout resetting, since flooding the spout with reset 
messages is a bad idea. In particular, we can't reset timeouts for messages that 
are sitting in bolt queues, and the user can't reset those timeouts manually 
either.

> Revising Message Timeouts
> -------------------------
>
> Key: STORM-2359
> URL: https://issues.apache.org/jira/browse/STORM-2359
> Project: Apache Storm
>  Issue Type: Sub-task
>  Components: storm-core
>Affects Versions: 2.0.0
>Reporter: Roshan Naik
>Assignee: Stig Rohde Døssing
>Priority: Major
> Attachments: STORM-2359.ods
>
>
> A revised strategy for message timeouts is proposed here.
> Design Doc:
>  
> https://docs.google.com/document/d/1am1kO7Wmf17U_Vz5_uyBB2OuSsc4TZQWRvbRhX52n5w/edit?usp=sharing



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (STORM-3276) Can't run Flux with Storm 2.0.0

2018-12-21 Robert Joseph Evans (JIRA)


 [ 
https://issues.apache.org/jira/browse/STORM-3276?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Joseph Evans resolved STORM-3276.

   Resolution: Fixed
Fix Version/s: 2.0.0

> Can't run Flux with Storm 2.0.0
> -------------------------------
>
> Key: STORM-3276
> URL: https://issues.apache.org/jira/browse/STORM-3276
> Project: Apache Storm
>  Issue Type: Bug
>  Components: Flux
>Affects Versions: 2.0.0
>Reporter: Julien Nioche
>Assignee: Robert Joseph Evans
>Priority: Blocker
>  Labels: pull-request-available
> Fix For: 2.0.0
>
>  Time Spent: 2h 10m
>  Remaining Estimate: 0h
>
> I am trying to run a Flux-based topology with Storm 2.0.0:
>  
> _apache-storm-2.0.0/bin/storm local target/2-1.0-SNAPSHOT.jar 
> org.apache.storm.flux.Flux crawler.flux --local-ttl 999_
>  
> I am getting:
>  
> _17:41:22.191 [main] ERROR o.a.s.f.Flux - To run in local mode run with 
> 'storm local' instead of 'storm jar'_
> _17:41:22.191 [main] INFO o.a.s.LocalCluster -_
> _RUNNING LOCAL CLUSTER for 999 seconds._
> and nothing happens after that. 
>  
> The documentation for Flux 
> [http://storm.apache.org/releases/2.0.0-SNAPSHOT/flux.html] still mentions 
> using 'storm jar' as well as --local and --sleep.
> My test topology can be found at [https://github.com/DigitalPebble/storm2] 
> and requires the 2.x branch of StormCrawler 
> [https://github.com/DigitalPebble/storm-crawler/tree/2.x] to be installed.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (STORM-2925) org.apache.storm.sql.TestStormSql consistently fails

2018-12-21 JIRA


[ 
https://issues.apache.org/jira/browse/STORM-2925?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16727006#comment-16727006
 ] 

Stig Rohde Døssing commented on STORM-2925:
-------------------------------------------

[~kabhwan] Is this still an issue? The tests look green.

> org.apache.storm.sql.TestStormSql consistently fails
> ----------------------------------------------------
>
> Key: STORM-2925
> URL: https://issues.apache.org/jira/browse/STORM-2925
> Project: Apache Storm
>  Issue Type: Sub-task
>  Components: storm-sql
>Affects Versions: 2.0.0
>Reporter: Jungtaek Lim
>Priority: Critical
>
> 1. java.lang.RuntimeException: java.lang.IllegalStateException: It took over 
> 6ms to shut down slot Thread[SLOT_1024,5,main]
> [https://travis-ci.org/apache/storm/jobs/335344937]
>  
> 2. [ERROR] Failed to execute goal 
> org.apache.maven.plugins:maven-surefire-plugin:2.19.1:test (default-test) on 
> project storm-sql-core: ExecutionException The forked VM terminated without 
> properly saying goodbye. VM crash or System.exit called?
> [https://travis-ci.org/apache/storm/jobs/334308676]
>  
> I've skimmed a bit, and it looks like the two are the same issue.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)