I'm already using that approach.
However, the distribution of my metrics requires a more precise way of 
computing the median.
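
For reference, here is a rough sketch of that decayed-average update in plain 
Python (the class and names are just illustrative; on App Engine the three 
fields would live in a sharded ndb entity rather than an in-memory object):

import math
import time

class DecayingAverage(object):
    """Exponentially decaying weighted average, as described below."""

    def __init__(self, time_constant):
        self.time_constant = float(time_constant)  # decay time constant, in seconds
        self.total_amount = 0.0
        self.total_weight = 0.0
        self.timestamp = None  # time of the last update, seconds since epoch

    def update(self, amount, weight=1.0, now=None):
        now = time.time() if now is None else now
        if self.timestamp is None:
            decay_factor = 0.0  # first sample; the zero totals make this a no-op
        else:
            decay_factor = math.exp(-(now - self.timestamp) / self.time_constant)
        self.total_amount = self.total_amount * decay_factor + amount
        self.total_weight = self.total_weight * decay_factor + weight
        self.timestamp = now

    def average(self):
        return self.total_amount / self.total_weight if self.total_weight else 0.0

# Example: values decay with a one-hour time constant.
avg = DecayingAverage(time_constant=3600.0)
avg.update(42.0)
avg.update(50.0)
print(avg.average())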

On Thursday, November 21, 2013 12:50:41 AM UTC-5, Luca de Alfaro wrote:
>
> If you can weigh recent data more heavily than older data, you might 
> consider building, instead of a rolling average, an exponentially decaying 
> weighted average. 
>
> You can store, in ndb (sharded), total_amount, total_weight, and a 
> timestamp.  
> Then, when you get an update, you compute the decay_factor, which is equal 
> to exp(- time since update / time constant). 
> You then do: 
> total_amount = total_amount * decay_factor + amount_now
> total_weight = total_weight * decay_factor + weight_now
> timestamp = present time
> avg = total_amount / total_weight
>
>
> On Tuesday, November 12, 2013 12:07:34 PM UTC-8, Mathieu Simard wrote:
>>
>> Since there is no App Engine equivalent of something like Redis's atomic 
>> lists, I'm left wondering how to implement a cost-effective rolling median.
>> Has anyone come up with a solution that would be more convenient than 
>> running a Redis instance on Google Compute Engine?
>>
>
