(trimmed CC list to evade gmail's spam filter, sorry)

On 21/03/18 08:51, Edmund Horner wrote:
> Hi all, I haven't heard anything for a while and so assume you're
> beavering away on real features. :)
>
> I've been dogfooding this patch at work, and I am personally pretty
> happy with it.
>
> I still think the number of completions on an empty string is a bit
> too big, but I don't know what to omit.  There are around 1700
> completions on the empty "postgres" database in my testing, and we
> show the first 1000 (arbitrarily limited, as the generated SQL has
> LIMIT 1000 but no ORDER BY).
>
> Should we just leave it as is?
>
> Whether we improve the filtering or not, I'm optimistic the feature
> will be committed in this CF or the next.
>
> I've also been working on adding support for completions after commas,
> but I really want to see the current feature finished first.

Playing around with this a little bit, I'm not very satisfied with the completions. Sometimes this completes too much, and sometimes too little. All of this has been mentioned in this and the other thread [1] already; this is just my opinion on all this.

Too much:

postgres=# \d lp
                   Table "public.lp"
   Column  |  Type   | Collation | Nullable | Default
----------+---------+-----------+----------+---------
  id       | integer |           |          |
  partkey  | text    |           |          |
  partcol1 | text    |           |          |
  partcol2 | text    |           |          |
Partition key: LIST (partkey)
Number of partitions: 1000 (Use \d+ to list them.)

postgres=# select pa[TAB]
pad_attribute parameter_default parameter_style partattrs partcol2 partexprs partrelid page parameter_mode parameter_types partclass partcollation partkey partstrat pageno parameter_name parse_ident( partcol1 partdefid partnatts passwd

Too little:

postgres=# select partkey, p[TAB]
[no completions]


Then there's the multi-column case, which seems weird (to a user - I know the implementation and understand why):

postgres=# select partkey, partc[TAB]
[no completions]

And I'd love this case, where you go back to edit the SELECT list after already typing the FROM part, to be smarter:

postgres=# select p[TAB] FROM lp;
Display all 370 possibilities? (y or n)

There's something weird going on with system columns, from a user's point of view:

postgres=# select oi[TAB]
oid              oidvectortypes(

postgres=# select xm[TAB]
xmin xmlcomment( xml_is_well_formed_content( xmlvalidate( xmlagg( xml_is_well_formed( xml_is_well_formed_document(

So oid and xmin are completed. But "xmax" and "ctid" are not. I think this is because in fact none of the system columns are completed, but there happen to be some tables with regular columns named "oid" and "xmin". So it makes sense once you know that, but it's pretty confusing to a user. Tab-completion is a great way for a user to explore what's available, so it's weird that some system columns are effectively completed while others are not.
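For anyone who wants to verify that theory locally, a query along these lines (just a sketch against the standard catalog columns) lists relations that have an ordinary column named "oid" or "xmin"; system columns have negative attnum, so attnum > 0 excludes them:

```sql
-- Find relations with an ordinary (non-system) column named "oid" or
-- "xmin". System columns have attnum < 0, so attnum > 0 excludes them.
SELECT c.relname, a.attname
FROM pg_catalog.pg_attribute a
JOIN pg_catalog.pg_class c ON c.oid = a.attrelid
WHERE a.attname IN ('oid', 'xmin')
  AND a.attnum > 0
  AND NOT a.attisdropped;
```

If that returns rows (several catalog views expose oid and the like as regular columns), it explains why those names show up in the completions even though system columns as such are not offered.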

I'd like to exclude set-returning functions from the list. Although you can do "SELECT generate_series(1,10)", I'd like to discourage people from doing that, since using set-returning functions in the target list is a PostgreSQL extension to the SQL standard, and IMHO the "SELECT * FROM generate_series(1,10)" syntax is easier to understand and works more sanely with joins etc. Conversely, it would be really nice to include set-returning functions in the completions after FROM.
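If the completion query selects from pg_proc, that split could be as simple as a predicate on proretset; this is only a sketch of the idea, not the patch's actual SQL:

```sql
-- Sketch: candidate functions for SELECT-list completion, excluding
-- set-returning functions (flip the predicate to "p.proretset" to get
-- the list suitable for completion after FROM instead).
SELECT p.proname || '('
FROM pg_catalog.pg_proc p
WHERE NOT p.proretset;
```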


I understand that there isn't much you can do about some of those things, short of adding a ton of new context information about previous queries and heuristics. I think the completion needs to be smarter to be acceptable. I don't know what exactly to do, but perhaps someone else has ideas.

I'm also worried about performance, especially of the query to get all the column names. It's pretty much guaranteed to perform a SeqScan+Sort+Unique on pg_attribute, and pg_attribute can be very large. In my little test database, with the above 1000-partition table, hitting tab after "SELECT " takes about a second before you get the "Display all 1000 possibilities" prompt. And that's not a particularly large schema.
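For reference, a query of roughly that shape (not necessarily the exact SQL the patch generates) makes it easy to see the plan I mean:

```sql
-- Roughly the shape of an "all column names" completion query (not the
-- patch's exact SQL). EXPLAIN shows the SeqScan on pg_attribute plus a
-- Sort/Unique or HashAggregate step for the DISTINCT.
EXPLAIN
SELECT DISTINCT a.attname
FROM pg_catalog.pg_attribute a
WHERE a.attnum > 0
  AND NOT a.attisdropped
LIMIT 1000;
```

Since there's no ORDER BY, the LIMIT doesn't help much: the DISTINCT still has to process the whole of pg_attribute before the first 1000 rows can be returned.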

- Heikki


PS. All the references to "pg_attribute" and other system tables, and functions, need to be schema-qualified, as "pg_catalog.pg_attribute" and so forth.

[1] https://www.postgresql.org/message-id/1328820579.11241.4.ca...@vanquo.pezone.net
