Darren Duncan wrote:
>> I never understood that restriction. I read in the books: "since we
>> have defined things this way from a formal point of view, there's no
>> room for NULL". And my question is: well, why don't you change the
>> definitions to augment the datatype sets with a special constant NULL
>> which is by definition not present in any datatype? Wouldn't that
>> give an analogous theory more aligned with the real world?
>
> If you want to have a data type which can represent only a single
> value and use it to mean unknown, and all instances of that value are
> equal, then that would be fine.
I think you've missed the concept of NULL.  You seem to be thinking of
the NULL pointer as used in most programming languages.  They use a
constant to indicate an empty pointer and call it NULL.  A NULL is NOT a
specific value; it's the absence of a value or of information.  Since
you don't know what a particular NULL is, it can be anything.
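
To put that in concrete terms, here's a rough sketch (I'm assuming a
PostgreSQL-ish dialect that lets you SELECT without a FROM; how the
result gets displayed will vary by database):

    -- Any comparison against NULL yields UNKNOWN (usually shown as NULL),
    -- not TRUE or FALSE, because the missing value could be anything.
    SELECT 1 = NULL;         -- unknown
    SELECT NULL = NULL;      -- unknown
    SELECT 1 IS NULL;        -- false
    SELECT NULL IS NULL;     -- true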
>
> The main problem with NULL is more how it is used in SQL than the idea
> itself.
>
> For one thing, SQL's NULL violates the logical principle that after
> you say "set foo to bar", then foo equals bar.  With every normal data
> type, if "foo := 1; bar := foo;" then a subsequent comparison of "foo
> = bar" would return true.  But with nulls, if you say "foo := NULL;
> bar := foo", then a subsequent comparison of "foo = bar" does not
> return true.
>
> More simply, with nulls, saying "foo = foo" will not return true,
> which flies in the face of common sense.
>
> All sorts of other problems in SQL result from that basic situation,
> that no NULL value ever equals itself.
Given what I've said about NULL being unknown, it follows that
NULL != NULL, which is why each NULL counts as distinct in a unique
index and you can only test for it with IS NULL.  The only problems
I've had with NULL have stemmed from my programmer's view of NULL.
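
Here's a quick sketch of what I mean (PostgreSQL-flavoured SQL with a
made-up table; the error wording, and even whether a unique index allows
more than one NULL at all, differs between databases):

    CREATE TABLE t (x integer UNIQUE);

    INSERT INTO t VALUES (NULL);
    INSERT INTO t VALUES (NULL);  -- accepted: the two NULLs don't compare equal
    INSERT INTO t VALUES (1);
    INSERT INTO t VALUES (1);     -- rejected: duplicate key violates the unique constraint

    SELECT * FROM t WHERE x = NULL;   -- no rows, not even the NULL ones
    SELECT * FROM t WHERE x IS NULL;  -- the two NULL rows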
>
> But it's worse than that, in that SQL isn't even consistent with itself
> in how it treats nulls.  With some kinds of operations or queries, it
> treats every null as being unique, and in other situations it treats
> them all as being equal.  No normal data type has this problem.
What situations treat them all as being equal?  Offhand I can't think
of any, but then my SQL experience isn't that broad.

