Github user revans2 commented on the pull request:

    https://github.com/apache/storm/pull/1443#issuecomment-221620918
  
    @abhishekagarwal87 and @victor-wong 
    
    I am on the fence on this.  Having a spout that is stuck forever is really 
bad, but having it crash, lose data, come back up, and repeat the process, 
possibly draining the other partitions, feels even worse.  I guess if you 
configured your spouts for at-most-once processing then you got what you asked 
for, even if it was shooting yourself in the foot, and Storm is fail-fast, so it 
fits with that philosophy.
    
    Please at least update the exception message to indicate what the sizes are 
actually set to.  I think this would make life simpler for a user in this 
situation, so that when they see the error message it says something like:
    
```
Found a message (10,485,760 bytes) that is larger than the maximum fetch
size (1,048,576 bytes) in topic myGreatTopic partition 5 at fetch offset
103404502. Increase the fetch size, or decrease the maximum message size
the broker will allow, and start after this offset.
```
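
    Roughly something like this would do (just a sketch, not the spout's actual 
code; the class, method, and variable names here are placeholders, and the 
number formatting is only illustrative):

```java
// Sketch only: placeholder names, not the spout's actual fields or methods.
public final class MessageTooLargeError {
    static RuntimeException tooLarge(long messageSize, int fetchSizeBytes,
                                     String topic, int partition, long offset) {
        return new RuntimeException(String.format(
            "Found a message (%,d bytes) that is larger than the maximum fetch size "
                + "(%,d bytes) in topic %s partition %d at fetch offset %d. "
                + "Increase the fetch size, or decrease the maximum message size "
                + "the broker will allow, and start after this offset.",
            messageSize, fetchSizeBytes, topic, partition, offset));
    }
}
```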
    
    Another alternative might be to give the user the option to skip messages 
that are too large, and provide a metric to indicate how many messages/bytes 
have been skipped because of this.
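
    For example, very roughly (again just a sketch, not the actual KafkaSpout 
code; the metric names, the helper class, and the 60 second bucket size are 
made up, and the package names assume Storm 1.x):

```java
import org.apache.storm.metric.api.CountMetric;
import org.apache.storm.task.TopologyContext;

// Rough sketch: register counters once in the spout's open() and bump them
// whenever an oversized message is skipped instead of throwing.
public class SkippedMessageMetricsSketch {
    private transient CountMetric skippedMessages;
    private transient CountMetric skippedBytes;

    // Would be called from open(); 60 is an assumed metrics bucket size in seconds.
    void registerMetrics(TopologyContext context) {
        skippedMessages = context.registerMetric("kafkaMessagesSkippedTooLarge",
                                                 new CountMetric(), 60);
        skippedBytes = context.registerMetric("kafkaBytesSkippedTooLarge",
                                              new CountMetric(), 60);
    }

    // Would be called when a message exceeds the fetch size and the user has
    // opted in to skipping, rather than failing the spout.
    void recordSkipped(long messageSize) {
        skippedMessages.incr();
        skippedBytes.incrBy(messageSize);
    }
}
```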

