> On Feb 19, 2015, at 8:01 PM, Fred Grim <fg...@vmware.com> wrote:
> 
> Given a specific data blob I want to move a time series into a search
> bucket.  So first I have to build out the time series and then move it
> over.  Maybe I should use the rabbitmq post commit hook to send the data
> somewhere else for the query to be run or something like that?

Given your scenario, it seems that a portion of these writes would trigger
MapReduce jobs that result in nothing happening; I assume you only
bucket the series every so many writes, or after a set time period, correct?

I’d highly recommend doing this externally, or identifying a method for
pre-bucketing the data, given the rate of ingestion.
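For the external route, the pre-bucketing step might look something like the
sketch below: an out-of-band worker groups incoming data points into fixed
time windows before they are written to the search bucket. This is only an
illustration of the idea, not Riak or RabbitMQ code; the bucket width,
function names, and the `(series_id, timestamp, value)` tuple shape are all
assumptions.

```python
from collections import defaultdict

BUCKET_SECONDS = 60  # assumed bucket width; tune to your ingestion rate

def bucket_key(series_id, timestamp):
    # Align the timestamp down to the start of its time window,
    # so every point in the same window shares one key.
    window = int(timestamp) - (int(timestamp) % BUCKET_SECONDS)
    return f"{series_id}:{window}"

def bucket_points(points):
    # Group (series_id, timestamp, value) tuples by time window;
    # each resulting group would become one write to the search bucket.
    buckets = defaultdict(list)
    for series_id, ts, value in points:
        buckets[bucket_key(series_id, ts)].append(value)
    return dict(buckets)
```

Run something shaped like this in the consumer that receives the
post-commit messages, so the per-write MapReduce jobs that mostly do
nothing go away entirely.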

- Chris

Christopher Meiklejohn
Senior Software Engineer
Basho Technologies, Inc.
cmeiklej...@basho.com
_______________________________________________
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
