Map/reduce aside, in the general case I do time series in Riak with 
deterministic materialized keys at specific time granularities, i.e.: 

/devices/deviceID_YYYYMMDDHHMM[SS]

So my device or app stack drops data into a one-second-resolution key (if 
second resolution is needed) in the Riak memory backend, using a materialized 
key (the combination of ID and timestamp). I then retrieve the data 
deterministically (so you skip the search) at one-minute resolution by issuing 
a 60-key multi-get, one key per second in that minute. A trailing process 
sweeps memory, does larger-granularity rollups/analytics/math/whatever, and 
writes the results to memory or disk. It all depends on frequency. 
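A minimal sketch of that scheme (all names here are hypothetical, and a plain dict stands in for the Riak memory backend; a real deployment would issue the gets against Riak, ideally concurrently):

```python
from datetime import datetime, timedelta

def second_key(device_id, ts):
    """Deterministic materialized key at one-second resolution."""
    return "/devices/%s_%s" % (device_id, ts.strftime("%Y%m%d%H%M%S"))

def minute_keys(device_id, minute_ts):
    """The 60 second-resolution keys covering one minute."""
    base = minute_ts.replace(second=0, microsecond=0)
    return [second_key(device_id, base + timedelta(seconds=s)) for s in range(60)]

# Stand-in for the Riak memory backend.
store = {}

def write_sample(device_id, ts, value):
    store[second_key(device_id, ts)] = value

def multi_get(keys):
    # Skip keys with no sample; no search needed, the keys are computed.
    return {k: store[k] for k in keys if k in store}

def rollup_minute(device_id, minute_ts):
    """Trailing sweep: roll one minute of seconds up into a minute-resolution key."""
    samples = multi_get(minute_keys(device_id, minute_ts))
    rollup_key = "/devices/%s_%s" % (device_id, minute_ts.strftime("%Y%m%d%H%M"))
    store[rollup_key] = sum(samples.values()) / len(samples)
    return rollup_key

# Write one second-resolution sample per second for a minute, then read it
# back with a 60-key multi-get and roll it up.
minute = datetime(2015, 2, 19, 19, 8)
for s in range(60):
    write_sample("dev42", minute + timedelta(seconds=s), s * 0.5)

samples = multi_get(minute_keys("dev42", minute))
rollup_minute("dev42", minute)
```

Since the keys are a pure function of device ID and timestamp, any reader can reconstruct them without an index or a search query.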

Best,
Alexander 

@siculars
http://siculars.posthaven.com

Sent from my iRotaryPhone

> On Feb 19, 2015, at 19:08, Christopher Meiklejohn <cmeiklej...@basho.com> 
> wrote:
> 
> 
>> On Feb 19, 2015, at 8:01 PM, Fred Grim <fg...@vmware.com> wrote:
>> 
>> Given a specific data blob I want to move a time series into a search
>> bucket.  So first I have to build out the time series and then move it
>> over.  Maybe I should use the rabbitmq post commit hook to send the data
>> somewhere else for the query to be run or something like that?
> 
> Given your scenario, it seems that a portion of these writes would have 
> MapReduce jobs that resulted in nothing happening — I assume you only
> bucket the series every so many writes or time period, correct?
> 
> I’d highly recommend doing this externally, or identifying a method for
> pre-bucket’ing the data given the rate of ingestion.
> 
> - Chris
> 
> Christopher Meiklejohn
> Senior Software Engineer
> Basho Technologies, Inc.
> cmeiklej...@basho.com
> _______________________________________________
> riak-users mailing list
> riak-users@lists.basho.com
> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
