[ https://issues.apache.org/jira/browse/THRIFT-1754?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14047883#comment-14047883 ]

Wade Simmons commented on THRIFT-1754:
--------------------------------------

This may be fixed by THRIFT-2591, which updates "residual" to point to the 
next frame before running the callback. As a result, if the callback throws 
an exception, the receiver will resume at the next frame the next time it 
runs rather than re-reading the current one.
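
For reference, a minimal sketch of the ordering described above (this is not 
the actual transport.js code; "makeReceiver" and "onFrame" are made-up names, 
and it uses the modern Node Buffer API):

{noformat}
// Sketch only -- illustrates advancing "residual" past the current frame
// *before* invoking the callback, not the real TFramedTransport.receiver.
function makeReceiver(onFrame) {
  var residual = Buffer.alloc(0); // leftover bytes from previous 'data' events

  return function (data) {
    residual = Buffer.concat([residual, data]);

    // Consume every complete frame currently buffered.
    while (residual.length >= 4) {
      var frameLen = residual.readUInt32BE(0); // 4-byte length prefix
      if (residual.length < 4 + frameLen) {
        break; // frame not fully received yet
      }
      var frame = residual.slice(4, 4 + frameLen);

      // THRIFT-2591: move "residual" to the next frame before the callback
      // runs, so an exception thrown inside it cannot leave the receiver
      // positioned in the middle of an already-consumed frame.
      residual = residual.slice(4 + frameLen);

      onFrame(frame);
    }
  };
}
{noformat}

If "residual" were only advanced after the callback returned, a throwing 
callback would leave it pointing at the consumed frame, and the next 'data' 
event could end up treating payload bytes as a length prefix.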

> RangeError in buffer handling
> -----------------------------
>
>                 Key: THRIFT-1754
>                 URL: https://issues.apache.org/jira/browse/THRIFT-1754
>             Project: Thrift
>          Issue Type: Bug
>          Components: Node.js - Library
>    Affects Versions: 0.9
>         Environment: Ubuntu 12.04, Node.js v0.8.8
>            Reporter: Vesa Poikajärvi
>            Priority: Minor
>
> I have a Node.js service that connects to multiple Thrift servers (using 
> TFramedTransport with C++ servers and TBufferedTransport with Python 
> servers). Every now and then, for reasons that are rather hard to track 
> down, the following happens:
> {noformat}
> buffer.js:242
>       this.parent = new SlowBuffer(this.length);
>                     ^
> RangeError: length > kMaxLength
>     at new Buffer (buffer.js:242:21)
>     at Socket.TFramedTransport.receiver (/home/me/my_service/node_modules/thrift/lib/thrift/transport.js:59:17)
>     at Socket.EventEmitter.emit (events.js:88:17)
>     at TCP.onread (net.js:395:14)
> {noformat}
> The Node module is extracted from the Thrift 0.9 tarball due to THRIFT-1637. 
> As mentioned, I cannot really tell what triggers the behavior. When running 
> in development mode I use [Forever|https://github.com/nodejitsu/forever] to 
> relaunch a crashed process; when it first crashes because of this, it keeps 
> crashing and relaunching a few more times, maybe ten or so (I connect to the 
> services upon startup), and then it starts working again.
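
For what it's worth, the oversized allocation in the stack trace is consistent 
with a frame-length prefix being read at the wrong offset: four arbitrary 
payload bytes interpreted as a big-endian length are easily enormous. A 
contrived illustration (not taken from the library):

{noformat}
// Contrived illustration: four ASCII payload bytes misread as a frame length.
var bytes = Buffer.from('garbage', 'utf8');
var bogusLen = bytes.readUInt32BE(0); // 0x67617262, roughly 1.7 billion
// Allocating a buffer of that size can exceed Node's kMaxLength and produce
// the "RangeError: length > kMaxLength" shown in the report above.
{noformat}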



--
This message was sent by Atlassian JIRA
(v6.2#6252)
