I'm using this for sure:
response.setEncoding('utf8')
but the problem is that a document can be split across more than one chunk, and with
UTF-8 strings there doesn't seem to be any character that indicates where the buffer
was split. I've read that JSON responses have the \n delimiter you can split on, but
I don't see anything like that in the XML responses I'm receiving.
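Just to show what I mean by a delimiter: with a newline-delimited stream you could
apparently frame the documents with something like this (a rough sketch, not my
actual code):
====================================
var leftover = ''
response.setEncoding('utf8')
response.on('data', function(data) {
  var pieces = (leftover + data).split('\n')   // complete documents end with \n
  leftover = pieces.pop()                      // the last piece may be a partial document
  pieces.forEach(function(doc) {
    // each doc here would be one complete document
  })
})
====================================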
If node.js doesn't put these back together for me, then I need to figure out what
characters at the end of the buffer indicate a split chunk so I can reassemble them
myself. Currently I'm checking for certain strings at the end of each chunk to decide
whether it's complete or not but, again, it's not working 100% of the time.
The current code below works 99.99% of the time, receiving millions of these during
the day. But occasionally DB2 won't accept the XML, and when it fails it's always
with this error message:
SQL16110N XML syntax error. Expected to find "Comment or PI".
SQLSTATE=2200M
Thanks for the help!!
====================================
request.on('response', function(response) {
  response.setEncoding('utf8')
  response.on('data', function(data) {
    resp_ts = microtime.now().toString()
    datal = data.length
    datae = data.slice(-8)                      // last 8 characters of this chunk
    // Treat the chunk as partial unless it ends with a known closing tag
    // ('/status>' is the 8-character tail of '</status>').
    if (datal < 8 || (datae != '</trade>' && datae != '</quote>' && datae != '/status>')) {
      mics = resp_ts % mill
      console.log(Date(resp_ts).split(" ")[4] + "." + ("000000" + mics).slice(-6), 'partial')
      save = (save + data).toString('utf8')     // hold the partial chunk for the next 'data' event
    } else {
      str = (save + data).toString('utf8')      // complete document: prepend anything saved earlier
      save = ''
      db.query("insert into p1s values (?,current_timestamp,?)", [resp_ts, str], function(err, rows) {
.............
====================================
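For what it's worth, here's the direction I'm thinking of going based on the
suggestion below: keep accumulating text and only cut documents off at a known
closing tag, so a chunk holding the tail of one document plus the start of the next
doesn't get inserted as-is. Just an untested sketch; it assumes every document ends
with </trade>, </quote> or </status>, and insertDoc() stands in for whatever does the
DB2 insert:
====================================
var pending = ''                                 // text carried over between 'data' events
var endTag = /<\/(trade|quote|status)>/g         // known root closing tags

response.setEncoding('utf8')                     // strings, with multi-byte characters kept intact
response.on('data', function(data) {
  pending += data
  var cut = 0, m
  while ((m = endTag.exec(pending)) !== null) {  // walk over each complete document
    insertDoc(pending.slice(cut, m.index + m[0].length))
    cut = m.index + m[0].length
  }
  pending = pending.slice(cut)                   // keep any partial document for next time
})
====================================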
On Friday, December 28, 2012 6:42:33 AM UTC-5, Ben Noordhuis wrote:
>
> On Thu, Dec 27, 2012 at 8:51 PM, am_p1 <[email protected]> wrote:
> > Just FYI, I never get an "end" since this is streaming... just "data" always and
> > always... and 99.99% of them are not split chunks, i.e. they are complete and
> > valid XML.
> >
> > So I've got the merging back together mostly working, but it's a pain and it's
> > still not 100%. I assume that's because I'm trying to merge them back together as
> > UTF-8 strings. Maybe I should be doing it with a buffer instead? Because I read
> > the chunks can be split in the middle of a UTF-8 sequence... yikes!!
> >
> > This cleared up most of it but, again, still not 100%:
> > str = (save + data).toString('utf8')
> >
> > And shouldn't node.js be handling this somehow? (he asked, knowing hardly
> > anything about node.js - this is my first post!!!)
> >
> > Thanks for any assistance!!
>
> stream.setEncoding('utf8') will take care of partial multi-byte
> sequences. Your data event listener will receive strings instead of
> buffers.
>
> Alternatively, you can concat the buffers together. The buffertools
> module has a helper for that but it's quite trivial to implement one
> yourself:
>
> // a and b are the input buffers
> var c = new Buffer(a.length + b.length);
> a.copy(c, 0);
> b.copy(c, a.length);
>