On Sat, Jun 28, 2008 at 9:17 PM, David King <[EMAIL PROTECTED]> wrote:
> But in the example I gave (that I got from the CouchDB wiki), there's no way
> for the reduction to be accurate in the face of deletions and modifications
> without re-calculating it for every single item in the database. That is,
> given a database of 5 million rows, every time one is modified or deleted
> (or possibly even added, depending on implementation), all 5 million rows
> have to pass through that function.
Now you're getting to the technical part. This quote from Damien is the best I can do for you:
http://damienkatz.net/2008/02/incremental_map_1.html

    ... in this design, the reductions happen at index-update time, and the
    reductions are stored directly inside the inner nodes of the view b+tree
    index. Then at query time, the intermediate results are reduced to their
    final result. The number of reductions that happen at query time is
    logarithmic with respect to the number of matching key/values.
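
To make the shape of that concrete, here is a rough sketch in Python of the update side of the design. This is not CouchDB's actual code, and every name in it (reduce_fn, Leaf, Inner, update_leaf) is made up for illustration: each inner node of a balanced tree caches the reduction of its subtree, so modifying one document only re-reduces the nodes on the path from that leaf back to the root, instead of pushing all 5 million rows through the reduce function again.

# Toy sketch, not CouchDB's code: inner nodes cache the reduction of
# their subtree, so one changed leaf means O(log n) re-reductions.

def reduce_fn(values):
    # Stand-in for a view's reduce function; here a simple sum.
    return sum(values)

class Leaf:
    def __init__(self, value):
        self.value = value  # one emitted value from the map step

class Inner:
    def __init__(self, children):
        self.children = children
        self.cached = None  # stored reduction for this whole subtree
        self.rereduce()

    def rereduce(self):
        # Combine the children's cached reductions (or leaf values).
        self.cached = reduce_fn(
            child.cached if isinstance(child, Inner) else child.value
            for child in self.children
        )

def update_leaf(path_to_leaf, new_value):
    """path_to_leaf runs [root, ..., parent, leaf]. Only the ancestors
    of the changed leaf are re-reduced, bottom-up."""
    path_to_leaf[-1].value = new_value
    for node in reversed(path_to_leaf[:-1]):
        node.rereduce()

# Usage: four "documents" under two inner nodes and a root.
a, b, c, d = Leaf(1), Leaf(2), Leaf(3), Leaf(4)
left, right = Inner([a, b]), Inner([c, d])
root = Inner([left, right])
print(root.cached)               # 10

update_leaf([root, left, b], 7)  # modify one document
print(root.cached)               # 15 -- c and d were never touched

Queries read the same cached values: a key range is answered by re-reducing the stored intermediate reductions of the subtrees that cover it, which is where the "logarithmic with respect to the number of matching key/values" figure in Damien's post comes from.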
--
Chris Anderson
http://jchris.mfdz.com