vtlim commented on code in PR #12723:
URL: https://github.com/apache/druid/pull/12723#discussion_r914205845
##########
docs/tutorials/tutorial-sketches-theta.md:
##########
@@ -0,0 +1,302 @@
+---
+id: tutorial-sketches-theta
+title: Approximations with Theta sketches
+sidebar_label: Theta sketches
+---
+
+A common problem in clickstream analytics is counting unique things, like visitors or sessions. Generally this involves scanning through all detail data, because unique counts **do not add up** as you aggregate the numbers.
+
+For instance, we might be interested in the number of visitors that watched episodes of a TV show. Let's say we found that on a given day, 1000 unique visitors watched the first episode, and 800 visitors watched the second episode. We may want to explore further trends, for example:
+- How many visitors watched _both_ episodes?
+- How many visitors watched _at least one_ of the episodes?
+- How many visitors watched episode 1 _but not_ episode 2?
+
+There is no way to answer these questions by just looking at the aggregated numbers. We would have to go back to the detail data and scan every single row. If the data volume is high enough, this may take a long time, meaning that interactive data exploration is not possible.
+
+An additional nuisance is that unique counts don't work well with rollups. For the example above, it would be great if we could have just one row of data per 15-minute interval[^1], show, and episode. After all, we are not interested in the individual user IDs, just the unique counts.
+
+[^1]: Why 15 minutes and not just 1 hour? Intervals of 15 minutes work better with international timezones because those are not always aligned by hour. India, for instance, is 30 minutes off, and Nepal is even 45 minutes off. With 15-minute aggregates, you can get hourly sums for any of those timezones, too!
+
+Is there a way to avoid crunching the detail data every single time, and maybe even enable rollup?
+
+## Fast approximation with set operations: Theta sketches
+
+Theta sketches are a probabilistic data structure that enables fast approximate analysis of big data. Druid's implementation relies on the [Apache DataSketches](https://datasketches.apache.org/) library.
+
+Theta sketches have a few nice properties:

Review Comment:
```suggestion
Theta sketches have a few useful properties:
```

--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]
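The set-operation behavior the tutorial excerpt describes (counting visitors who watched _both_ or _at least one_ episode without rescanning detail data) can be illustrated with a toy KMV-style theta sketch. This is a simplified sketch of the core idea, not Druid's actual DataSketches implementation; all class and variable names below are invented for illustration:

```python
import hashlib

class ThetaSketch:
    """Toy KMV-style theta sketch: retain the k smallest normalized hashes.

    Every retained hash is < theta, so the distinct-count estimate is
    len(retained) / theta. Simplified illustration only.
    """

    def __init__(self, k=256):
        self.k = k
        self.theta = 1.0      # sampling threshold; shrinks as items arrive
        self.hashes = set()   # retained hash values, all < theta

    @staticmethod
    def _hash(item):
        # Map an item to a (near-)uniform value in [0, 1).
        digest = hashlib.sha256(str(item).encode("utf-8")).digest()
        return int.from_bytes(digest[:8], "big") / 2.0 ** 64

    def update(self, item):
        x = self._hash(item)
        if x < self.theta:
            self.hashes.add(x)
            if len(self.hashes) > self.k:
                # Evict the largest hash; it becomes the new threshold.
                evicted = max(self.hashes)
                self.hashes.remove(evicted)
                self.theta = evicted

    def estimate(self):
        return len(self.hashes) / self.theta

    def union(self, other):
        out = ThetaSketch(self.k)
        out.theta = min(self.theta, other.theta)
        merged = sorted(x for x in self.hashes | other.hashes if x < out.theta)
        if len(merged) > out.k:
            out.theta = merged[out.k]  # (k+1)-th smallest caps the rest
            merged = merged[:out.k]
        out.hashes = set(merged)
        return out

    def intersect(self, other):
        out = ThetaSketch(self.k)
        out.theta = min(self.theta, other.theta)
        out.hashes = {x for x in self.hashes & other.hashes if x < out.theta}
        return out

# 1000 viewers saw episode 1, 800 saw episode 2, 500 saw both.
ep1 = ThetaSketch()
ep2 = ThetaSketch()
for uid in range(1000):
    ep1.update(f"user{uid}")
for uid in range(500, 1300):
    ep2.update(f"user{uid}")

print(round(ep1.union(ep2).estimate()))      # roughly 1300 (at least one episode)
print(round(ep1.intersect(ep2).estimate()))  # roughly 500 (both episodes)
```

Because sketches merge via set operations, a rolled-up table can store one sketch per 15-minute interval, show, and episode, and still answer intersection and union questions later; this is the property the tutorial builds on.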
