2011/5/6 Tom Lane <t...@sss.pgh.pa.us>:
> Robert Haas <robertmh...@gmail.com> writes:
>> 2011/5/4 Tom Lane <t...@sss.pgh.pa.us>:
>>> Perhaps it would be adequate to allow automatic resolution of an
>>> overloading conflict only when one of the available alternatives
>>> dominates all others, ie, none of the argument positions requires a
>>> "longer distance" cast than is used in that position by any other
>>> available alternative.  I'm just throwing that out as a possibility,
>>> I haven't tried it.
>
>> That works OK for most things, but there's one case where I think we
>> might need a better solution - suppose A is a subtype of B.  It's
>> fairly common to define a function or operator f(A,A) and f(B,B), and
>> to want f(A,B) or f(B,A) to be interpreted as the latter rather than
>> the former.  For example, let A=int2, B=int4, f=+.  Now, we can (and
>> currently do) handle that by just defining all the combinations
>> explicitly, but people don't always want to do that.
>
> That case still works as long as downcasts (int4 -> int2) are either not
> allowed to be invoked implicitly at all, or heavily penalized in the
> distance assignments.
"Not at all" works, but "heavily penalized" doesn't.  Suppose A->B has
distance 1 and B->A has distance 1000.  Then f(A,B) can match f(A,A)
with distances (0,1000) or f(B,B) with distances (1,0).  If you add up
the *total* distance, it's easy to say that the latter wins, but if you
compare position-by-position as you proposed (and, generally, I agree
that's the better route, BTW) then each candidate is superior to the
other in one of the two available positions.

>>> BTW, not to rain on the parade or anything, but I'll bet that
>>> rejiggering anything at all here will result in whining that puts the
>>> 8.3-era removal of a few implicit casts to shame.
>
>> Yeah, I share that fear, which is why I think the idea of generalizing
>> typispreferred to an integer has more than no merit: it's less likely
>> to break in ways we can't anticipate.
>
> Well, if you change it to an int and then don't change any of the values
> from what they were before, I agree.  But then there's no point.
> Presumably, the reason we are doing this is so that we can assign some
> other preferredness values besides 0/1, and that will change the
> behavior.  We'd better be damn sure that the new behavior is really
> better.  Which is why it seems a bit premature to be working on an
> implementation when we don't have even a suggestion as to what the
> behavioral changes ought to be.

Well, sure, to some degree.  But if you keep the currently preferred
types as having the highest level of preferred-ness in their same
categories, then the only effect (I think) will be to make some cases
work that don't now; and that's unlikely to break anything too badly.
Going to some whole new system will almost inevitably involve more
breakage.

> Which is why it seems a bit premature to be working on an
> implementation when we don't have even a suggestion as to what the
> behavioral changes ought to be.

I'm in complete agreement on this point.
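To make the ambiguity concrete, here's a small sketch (illustrative
Python, not PostgreSQL source) of the two comparison rules under the
distances assumed above (A->B costs 1, B->A costs 1000):

```python
# Each candidate for the call f(A, B) is represented by its per-argument
# cast distances: f(A,A) needs (0, 1000), f(B,B) needs (1, 0).

def dominates(c1, c2):
    """Position-by-position rule: c1 dominates c2 if no position needs a
    longer cast, and at least one position needs a strictly shorter one."""
    return (all(a <= b for a, b in zip(c1, c2))
            and any(a < b for a, b in zip(c1, c2)))

f_AA = (0, 1000)  # cast B -> A in the second position
f_BB = (1, 0)     # cast A -> B in the first position

print(dominates(f_AA, f_BB))   # False: f(A,A) loses in position 2
print(dominates(f_BB, f_AA))   # False: f(B,B) loses in position 1
print(sum(f_BB) < sum(f_AA))   # True: total-distance rule picks f(B,B)
```

Neither candidate dominates the other, so the position-by-position rule
alone cannot resolve f(A,B), whereas summing the distances does, which
is exactly the gap being pointed out.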
--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company