Greg Stark wrote:
> This really ought to work; it obviously shouldn't allow you to set a[5]
> and then surreptitiously move it to a[1]. But neither should it generate
> an error, since I may well have a specific meaning for a[5] and may be
> planning to fill in a[1]..a[4] later.
>
> The logical thing to do, I think, is to pre-fill the entries a[1]..a[4]
> with NULL. This could be implemented by actually storing the NULLs, or
> else by storing some notation that adjusts the base of the index, to
> save space.

I agree. I had always envisioned something exactly like that once we supported NULL elements. As far as the implementation goes, I think it would be very similar to tuples -- a null bitmask that would exist if any elements are NULL.
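
To make the analogy concrete, here is a minimal C sketch of that idea.
The names are hypothetical (this is not the actual ArrayType layout); it
just applies the heap-tuple t_bits convention to arrays, where the bitmap
is present only when at least one element is NULL and a clear bit marks a
NULL element:

#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/* Hypothetical header sketch, not the real ArrayType struct. */
typedef struct ArrayHeaderSketch
{
    int32_t  ndim;        /* number of dimensions */
    int32_t  dataoffset;  /* offset to element data; 0 => no null bitmap */
    uint32_t elemtype;    /* element type OID */
    /* variable-length parts would follow: dims[ndim], lbound[ndim],
     * the optional null bitmap (one bit per element), element data */
} ArrayHeaderSketch;

/* Test the bitmap for element 'offset' (0-based storage order).
 * Per the heap-tuple convention, a clear bit means the element is NULL. */
static bool
array_elem_is_null(const uint8_t *nullbitmap, int offset)
{
    if (nullbitmap == NULL)
        return false;       /* no bitmap => no NULL elements at all */
    return (nullbitmap[offset / 8] & (1 << (offset % 8))) == 0;
}

The nice property, as with tuples, is that an array containing no NULLs
pays nothing for the feature beyond one test for a missing bitmap.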


A related question is how to deal with nonexistent array elements. Currently you can do:

regression=# select f[0] from (select array[1,2]) as t(f);
 f
---

(1 row)

Even though index 0 does not exist, you get a NULL value back instead of an ERROR. I'd think that if we hardwire a lower bound of 1, this should produce an ERROR. Similarly:

regression=# select f[3] from (select array[1,2]) as t(f);
 f
---

(1 row)

Should this also produce an ERROR instead of returning NULL once array elements themselves can be NULL? Otherwise there would be no way to tell a stored NULL element apart from a subscript that is simply out of range.
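
For illustration, a sketch of what erroring out could look like on the
fetch side; array_fetch_strict is a hypothetical stand-in for the real
subscript-fetch code, with exit() standing in for ereport(ERROR, ...):

#include <stdio.h>
#include <stdlib.h>

/* Hypothetical strict fetch: with a hardwired lower bound of 1, the
 * only valid subscripts are 1..nelems; anything else is an error
 * rather than a silent NULL. */
static int
array_fetch_strict(const int *elems, int nelems, int subscript)
{
    if (subscript < 1 || subscript > nelems)
    {
        fprintf(stderr, "array subscript %d out of range\n", subscript);
        exit(EXIT_FAILURE);
    }
    return elems[subscript - 1];
}

Under those rules, f[0] and f[3] in the examples above would both fail,
and a NULL result would always mean a genuinely NULL element.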

> One thing that can't be made to work like it does now is extending the
> array on the low end indefinitely:
>
> slo=> update test set a[1] = 1;
> UPDATE 1
> slo=> update test set a[0] = 0;
> UPDATE 1
> slo=> update test set a[-1] = -1;
> UPDATE 1

Right. In the new world order we're describing, the latter two examples would have to produce errors.
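
For the assignment side, a sketch under the same assumption of a lower
bound hardwired to 1; all names are hypothetical, and the parallel bool
array stands in for the null bitmap discussed above:

#include <stdbool.h>
#include <stdio.h>
#include <stdlib.h>

typedef struct
{
    int   nelems;       /* current length; subscripts run 1..nelems */
    int  *values;
    bool *isnull;       /* stand-in for the null bitmap */
} IntArraySketch;

/* Hypothetical strict assignment: reject subscripts below the fixed
 * lower bound, and pre-fill with NULLs when extending on the high end. */
static void
array_assign_strict(IntArraySketch *a, int subscript, int value)
{
    if (subscript < 1)
    {
        /* the a[0] and a[-1] updates above would land here */
        fprintf(stderr, "array lower bound is fixed at 1\n");
        exit(EXIT_FAILURE);
    }
    if (subscript > a->nelems)
    {
        /* extend, pre-filling the gap with NULLs per the proposal */
        a->values = realloc(a->values, subscript * sizeof(int));
        a->isnull = realloc(a->isnull, subscript * sizeof(bool));
        for (int i = a->nelems; i < subscript - 1; i++)
            a->isnull[i] = true;
        a->nelems = subscript;
    }
    a->values[subscript - 1] = value;
    a->isnull[subscript - 1] = false;
}

So Greg's set of a[5] on an empty array would produce
{NULL,NULL,NULL,NULL,5}, while a[0] and a[-1] would fail, matching the
behavior described above.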


Joe

