To take an example, if someone proves that a particular equivalence holds on a simplified data model with no attribute, namespace, comment, or processing instruction nodes, then that is of absolutely no use to someone writing an optimizer for a product in which such nodes can be encountered.
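
To make that concrete, here is a minimal sketch (the element names are arbitrary and the rewrite is just an illustration, not a rule from any particular optimizer): the rewrite a/node() => a/* is sound on a data model restricted to element children, but it breaks as soon as comment or processing-instruction nodes can occur:

    let $doc := document { <a><b/><!--note--><b/></a> }
    return (
      count($doc/a/node()),  (: 3: two elements plus the comment :)
      count($doc/a/*)        (: 2: the comment is not matched by * :)
    )

A proof of the equivalence carried out on the comment-free model tells the implementor nothing about whether the rewrite is safe on real documents.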
Michael Kay
Saxonica

> On 27 Jan 2016, at 09:51, Pavel Velikhov <[email protected]> wrote:
>
>> On 27 Jan 2016, at 12:37, Christian Grün <[email protected]> wrote:
>>
>>> It's a common practice for everybody who needs to come up with formal
>>> proofs. You start with the most simplified definitions possible, that
>>> capture the essence of the problem.
>>> Then you get the skeleton of the proof, which is hopefully very simple. Then
>>> you can add details back, hoping that the proof remains simple and
>>> tractable.
>>>
>>> So imagine starting the proof while considering all the possible variations
>>> of path expressions, all the Infoset stuff, all XML Schema details. I think
>>> it's hopeless.
>>
>> I'm completely in line with Michael's observations. It would obviously
>> be nice to have proofs for the full rule sets of the discussed
>> languages; but as experience shows, no one will do it (the rare
>> exception might prove the rule), so we are stuck with work that is
>> of limited practical use.
>
> I can't agree with you. There are a lot of results that automatically hold
> for the full specification, even though they are proved on a clean and
> easy-to-use subset: undecidability and NP-completeness or NP-hardness, for
> instance.
>
> For the full specifications it's sometimes hard to grasp the semantics, so
> proving anything serious is impossible.
> Example: try proving that SQL-2003 queries are equivalent to some superset of
> relational algebra.
