Thanks for the quick turnaround.

On Dec 11, 2014, at 4:28 PM, Ori Livneh <o...@wikimedia.org> wrote:

> There's this graph: 
> https://graphite.wikimedia.org/render/?width=586&height=308&_salt=1418343627.977&from=-1weeks&target=movingMedian(diffSeries(eventlogging.overall.raw.rate%2Ceventlogging.overall.valid.rate)%2C20)
> 
> The key is 
> 'diffSeries(eventlogging.overall.raw.rate,eventlogging.overall.valid.rate)', 
> which gets you the rate of invalid events per second.
> 
> It is not broken down by schema, though.

This is great for monitoring, but for QA purposes we really need the raw data.
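(For reference, diffSeries is just a pointwise subtraction of the two rate series. If anyone wants the numbers rather than the picture, the render API also serves the same data with &format=json, and the diff is easy to redo locally. A rough sketch, with hypothetical datapoints; real Graphite JSON datapoints are [value, timestamp] pairs:)

```python
# Sketch: replicate Graphite's diffSeries locally on two series fetched
# with &format=json from the render API. The datapoints below are made up.

def diff_series(raw, valid):
    """Pointwise raw - valid, matched by timestamp; None counts as 0."""
    valid_by_ts = {ts: (v or 0) for v, ts in valid}
    return [[(v or 0) - valid_by_ts.get(ts, 0), ts] for v, ts in raw]

raw_rate   = [[120.0, 1418343600], [130.0, 1418343660]]  # hypothetical
valid_rate = [[118.5, 1418343600], [None,  1418343660]]  # hypothetical

invalid_rate = diff_series(raw_rate, valid_rate)
print(invalid_rate)  # [[1.5, 1418343600], [130.0, 1418343660]]
```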

> We can't write invalid events to a database -- at least not the same way we 
> write well-formed events. The table schema is derived from the event schema, 
> so an invalid event would violate the constraints of the table as well.

rrright

> It's possible (and easy) to set something up that watches invalid events in 
> real-time and does something with them. The question is: what? E-mail an 
> alert? Produce a daily report? Generate a graph?
> 
> If you describe how you’d like to consume the data, I can try to hash out an 
> implementation with Nuria and Christian.

A JSON log like all-events.log, but synced from vanadium more frequently, would 
do the job for me. It can also be truncated, since we probably only need a 
relatively short time window and the complete data is captured in all-events 
anyway.
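Concretely, I'm imagining something along these lines (the log path and sync destination are just guesses on my part, adjust to wherever the invalid-events log actually lives):

```shell
# Hypothetical crontab fragment: pull the invalid-events JSON log from
# vanadium every 5 minutes; rotation/truncation can be handled separately,
# e.g. via logrotate, since all-events.log keeps the complete record.
*/5 * * * * rsync -a vanadium.eqiad.wmnet:/var/log/eventlogging/invalid-events.log /srv/eventlogging/
```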

D
_______________________________________________
Analytics mailing list
Analytics@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/analytics
