Answers in-line below, although I'm not sure you're going to like them...

On 08/04/2015 03:54 AM, Dieter Plaetinck wrote:
> Hello everyone,
>
> I'm interested in using Heka for our metrics pipeline.
> Basically, there would be 3 stages:
> 1. ingest: publicly exposed endpoints that authenticate sessions, take in
> metrics data, and pump it into RabbitMQ
> 2. pull data out of RabbitMQ as quickly as possible (I hear it doesn't do
> well with large buffers of data and "slow acks") and safely (disk-buffer
> backed) move the data into KairosDB. This is where Heka would fit in.
> 3. KairosDB (Cassandra based) is our time-series storage system.
>
> Long term, I think it'd be nice to take out as many moving parts as possible
> (our own ingest endpoints, RabbitMQ, KairosDB) and just put Heka straight
> between the user and Cassandra, but that would be a future project.
>
> I have started work on a Heka output and encoder plugin for KairosDB's REST
> endpoint.
Is a custom output really needed? Ideally you'd just use the standard 
HttpOutput.
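For reference, a minimal sketch of what that might look like. This is only an illustration: `KairosDbEncoder` stands in for the custom encoder you're writing, and the host, port, and message matcher are assumptions.

```toml
# Sketch only: KairosDbEncoder is the hypothetical name of your custom
# encoder plugin; host/port and matcher are illustrative.
[KairosDbEncoder]

[KairosDbOutput]
type = "HttpOutput"
message_matcher = "Type == 'metric'"
# KairosDB's REST ingest endpoint.
address = "http://kairosdb.example.com:8080/api/v1/datapoints"
encoder = "KairosDbEncoder"
```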
> I have a few questions though:

> 1. In case of any failures, we'd rather have duplicate deliveries than lost
> messages. How do we minimize lost messages?
Keep channel sizes small, use buffering where you can. But Heka does not 
currently offer any concrete guarantees that messages will not be lost.
>    Is it true that every piece of the pipeline (RabbitMQ input, router,
> KairosDB output) has a channel with a buffer of 50?
In general, decoders, the router, message matchers, filters, and outputs all 
have input channels. The size of these channels is configurable with a global 
`plugin_chansize` setting which defaults to 30.
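That setting lives in the global `[hekad]` section, e.g.:

```toml
[hekad]
# Size of the buffered input channels used by decoders, the router,
# matchers, filters, and outputs (default 30).
plugin_chansize = 10
```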

As of 0.9, Heka provides a `synchronous_decode` setting, available to all input 
plugins, which causes the decoder to be invoked synchronously from within the 
input's main execution goroutine. This gets rid of the decoder's input channel, 
usually in exchange for a bit of performance.
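On your AMQPInput it would look something like this; the connection details and decoder name are illustrative, not prescriptive:

```toml
[AMQPInput]
url = "amqp://guest:guest@rabbitmq.example.com:5672/"
exchange = "metrics"
exchange_type = "fanout"
decoder = "MetricsDecoder"   # hypothetical decoder name
# Run the decoder inside the input's goroutine, removing the decoder's
# buffered input channel (one less place for messages to sit in memory).
synchronous_decode = true
```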

As of 0.10, Heka supports disk buffering for any filter or output plugin, which 
replaces the plugin's input channel with a disk buffer.

Note that even if both of the above options are chosen, there are still 
buffered channels in use feeding the router and the message matchers.
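Enabling the 0.10 buffering on an output is done with a `.buffering` sub-section. The output name here is hypothetical, and the values are illustrative; check the 0.10 docs for the defaults:

```toml
[KairosDbOutput.buffering]
max_file_size = 134217728       # 128 MiB per buffer file
max_buffer_size = 2147483648    # 2 GiB on disk before full_action fires
full_action = "shutdown"        # or "drop" / "block"
cursor_update_count = 50        # update the on-disk cursor every 50 messages
```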
>    Or can we just ack to RabbitMQ once the message has been synced to disk?
> How long should this take, and how many messages could be in flight (out of
> RabbitMQ but not yet acked; I hear RabbitMQ doesn't deal with this too well)?
There's currently no support for delaying the RabbitMQ ack until the received 
message has been written to disk. All AMQP interactions happen in the 
AMQPInput. Disk buffering, when enabled, doesn't happen until between the 
message matcher and the filter or output plugin.
>    I couldn't find any config option on the disk buffer that controls how
> often, or after how much data, sync() is called.
There's no relationship between filter / output disk buffering and the details 
of any particular input plugin.
> 2. After a long KairosDB outage, with a full disk buffer, is it possible to
> prioritize delivery of real-time messages and backfill historical data at a
> rate that is either automatically managed based on KairosDB backpressure or
> operator-controlled? Ideally we'd like to ensure real-time messages always
> get delivered promptly and historical data gets backfilled using "spare
> capacity". From what I can see, everything goes through the disk buffer in
> FIFO style.
You're correct in your assessment that this isn't currently supported. Heka 
delivers messages in the order they were received.
> This is not a showstopper though. I think if this scenario manifests itself,
> we can just spin up new Hekas and send all new real-time data to them, then
> relaunch the old Hekas with full disk buffers in such a way that they receive
> no new data and send data at a limited rate, configured as part of the
> KairosDB output plugin or by timing how long the POSTs take.
Yes, this seems like it would work.
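A drain-only instance could be a copy of the live config with the AMQPInput section deleted, so nothing new arrives and the output simply works through the on-disk buffer from its saved cursor. The names below are hypothetical, and any rate limiting would have to live in your custom output plugin:

```toml
# Drain-only config: no input sections at all. On startup the output's
# buffering machinery resumes from the saved cursor and delivers the
# backlog; once the buffer is empty, this instance can be retired.
[KairosDbOutput]
type = "HttpOutput"
message_matcher = "Type == 'metric'"
address = "http://kairosdb.example.com:8080/api/v1/datapoints"
encoder = "KairosDbEncoder"
```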
> 3. If we have a Heka in "send-from-disk-buffer-only" mode, we should be able
> to safely kill and restart it, right? Assuming the KairosDB output is
> properly written to update the cursor when it gets the ack from KairosDB.
Yes, this seems correct.
> 4. Disk buffer full_action shutdown: the docs say "Heka will stop all
> processing and attempt a clean shutdown". Does this generally work fine, or
> are you aware of cases where this can cause trouble?
The generic disk buffering code is new, which is why 0.10.0 is still in a beta 
cycle. This has worked in my initial testing, but it hasn't seen lots of real 
world use yet, so YMMV. I'd love to have folks exercising the code and 
discovering / reporting issues.
> 5. Is it possible for the disk buffer to corrupt (bad hard drive / fs /
> cosmic rays aside)?
Um... yes? I know enough to not tempt the fates by answering "no" to that question. AFAIK 
we haven't seen any instances of the buffer getting corrupted thus far, and a lot of the buffering 
code is the same as what was used internally to the TcpOutput in 0.9 and earlier, so I'm hopeful 
that this won't be a significant issue, but "possible" is a pretty low bar.
> 6. Any other potential gotchas to consider?
Just that Heka is *not* claiming to offer full delivery guarantees at this 
time. We're trying to be explicit about the risks and trade-offs. There are 
some design decisions that have made improving the situation here unfortunately 
challenging. This is part of the reason why we've been exploring alternate 
designs in such projects as Hindsight (https://github.com/trink/hindsight), 
which uses disk buffers at every stage, so there's only ever one message in 
memory at every step of the pipeline.

Hope this helps, and sorry the news isn't all good.

-r

_______________________________________________
Heka mailing list
[email protected]
https://mail.mozilla.org/listinfo/heka
