Just to follow up: we narrowed the corruption down to our BigQuery output plugin.

A pointer declared outside a loop, rather than inside it, was letting
values from the previous iteration survive into the next.

On Thu, Apr 9, 2015 at 6:22 PM, David Birdsong <[email protected]>
wrote:

> Sorry, I mis-described. The sandbox is a filter that emits new messages to
> the router. The output plugin matches on the new message's Type field:
>
> message_matcher = 'Type == "heka.sandbox.haproxy_to_bq"'
>
> On Thu, Apr 9, 2015 at 6:20 PM, David Birdsong <[email protected]>
> wrote:
>
>> I'm trying to root out some weird data corruption that our Google BigQuery
>> output plugin is introducing, which is remedied by a restart.
>>
>> Google BigQuery wants buffered rows of JSON, so using the sandbox to both
>> encode and buffer seemed like the shortest path.
>>
>> Can someone spot check my script with particular focus on line 20:
>>
>> https://gist.github.com/davidbirdsong/fb20631488f7136b6e18#file-lua_buffer_json-L20
>>
>> Is my use of read_next_field proper?
>>
>
>
_______________________________________________
Heka mailing list
[email protected]
https://mail.mozilla.org/listinfo/heka
