On Thu, Oct 16, 2008 at 2:54 PM, Josh Berkus <[EMAIL PROTECTED]> wrote:
> Tom,
>
>> (I'm not certain of how to do that efficiently, even if we had the
>> right stats :-()
>
> I was actually talking to someone about this at pgWest. Apparently there's
> a fair amount of academic algorithms devoted to this topic. Josh, do you
> remember who was talking about this?
Actually, it was me :) My biggest concern at the time was finding ways to
compress the correlation data, on the admittedly fairly tenuous assumption
that we'd somehow be able to make use of it. As it turns out, the particular
set of algorithms I had in mind doesn't compress anything, but there are
other methods that do.

Most of the comments on this thread have centered on the questions of "what
we'd store" and "how we'd use it", which might be better phrased as: "The
database assumes columns are independent, but we know that's not always
true. Does this cause enough problems to make it worth fixing? If so, how
might we fix it?"

I have to admit I can't yet show that it causes problems, though Neil Conway
pointed me to some literature [1] on the subject that I've not yet had time
to go through. My basic idea is that a cross-column frequency count would
help us get better plans, but I don't know the internals well enough to have
details beyond that. I've put a couple of rough sketches of what I mean
below the footnote.

- Josh / eggyknap

[1] http://www.cs.umd.edu/~amol/papers/paper-dep.pdf
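
P.S. To make the independence problem concrete, here's a rough sketch with
a made-up table and made-up numbers (nothing below comes from a real
workload):

    -- Hypothetical example: city and zip are strongly correlated.
    CREATE TABLE addresses (city text, zip text);

    -- Suppose 1% of rows have city = 'Portland', 1% have zip = '97201',
    -- and every row with zip = '97201' is also in Portland.
    EXPLAIN SELECT * FROM addresses
     WHERE city = 'Portland' AND zip = '97201';

    -- Treating the two clauses as independent, the planner multiplies
    -- their selectivities and estimates 0.01 * 0.01 = 0.0001 of the
    -- table, but the true fraction is 0.01: a 100x underestimate that
    -- can push it toward a bad join order or join method further up
    -- the plan.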
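
And here's roughly what I mean by a cross-column frequency count: a joint
analogue of the per-column most_common_vals/most_common_freqs in pg_stats.
The query below is only meant to illustrate the data involved, not to
propose a catalog format; presumably ANALYZE would gather something like it
from its sample rather than scanning the whole table:

    SELECT city, zip,
           count(*)::float / (SELECT count(*) FROM addresses) AS joint_freq
      FROM addresses
     GROUP BY city, zip
     ORDER BY joint_freq DESC
     LIMIT 100;

    -- Given a list like this, the planner could look up the pair
    -- ('Portland', '97201') directly and use 0.01, instead of
    -- multiplying the two per-column frequencies.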