Re: [HACKERS] RETURN QUERY in PL/PgSQL?
On Mon, 2007-04-23 at 17:48 -0400, Tom Lane wrote: I think we've got something isomorphic to that in the patch queue already --- take a look at Pavel's table function patch. It's in need of cleanup but I think it will make it in. Interesting -- I missed that patch, but it seems like a better approach. Are you already reviewing Pavel's patch, or is it something I could take a look at? -Neil ---(end of broadcast)--- TIP 1: if posting/reading through Usenet, please send an appropriate subscribe-nomail command to [EMAIL PROTECTED] so that your message can get through to the mailing list cleanly
Re: [HACKERS] RETURN QUERY in PL/PgSQL?
Hello It is RETURN TABLE(SQL) per ANSI SQL:2003. Table function support is in the patch queue: http://archives.postgresql.org/pgsql-patches/2007-02/msg00216.php http://momjian.us/mhonarc/patches/msg1.html Regards Pavel Stehule In a PL/PgSQL set-returning function, returning the result set of a query requires a FOR loop and repeated invocations of the RETURN NEXT statement: FOR x IN SELECT ... LOOP RETURN NEXT x; END LOOP; This works, but it seems overly verbose. It occurred to me that we could easily add a new PL/PgSQL statement that evaluates a set-returning expression and adds *all* the resulting rows to the function's result set. For example: RETURN QUERY SELECT ...; I'm not sure of the right name: RETURN ROWS or RETURN ALL might also work. Of course, this is syntax sugar (and superficial sugar at that), but I believe this is a fairly common requirement. ---(end of broadcast)--- TIP 6: explain analyze is your friend
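[Editor's note: the proposed sugar can be illustrated by analogy. This is a purely illustrative Python sketch, not PL/PgSQL: RETURN NEXT appends one row at a time inside an explicit loop, while the proposed RETURN QUERY appends a whole result set in one statement, much like a generator's per-item yield versus yield from.]

```python
# Illustrative analogy only -- Python generators standing in for a
# PL/PgSQL set-returning function. The row data is made up.

def with_return_next(rows):
    # FOR x IN SELECT ... LOOP RETURN NEXT x; END LOOP;
    for x in rows:
        yield x

def with_return_query(rows):
    # RETURN QUERY SELECT ...;  (the proposed one-statement form)
    yield from rows

query_result = [(1, 'a'), (2, 'b'), (3, 'c')]
assert list(with_return_next(query_result)) == list(with_return_query(query_result))
```

Both forms produce the same result set; the second just removes the boilerplate loop.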
Re: [HACKERS] RETURN QUERY in PL/PgSQL?
Neil Conway [EMAIL PROTECTED] writes: Interesting -- I missed that patch, but it seems like a better approach. Are you already reviewing Pavel's patch, or is it something I could take a look at? The main objection I have is that I don't think changing the definition of pg_proc.proargmodes is a good idea --- that will break some nontrivial amount of client-side code in order to support a distinction that seems unimportant. IMHO anyway. Feel free to take a whack at it. regards, tom lane ---(end of broadcast)--- TIP 9: In versions below 8.0, the planner will ignore your desire to choose an index scan if your joining column's datatypes do not match
Re: [HACKERS] [BUGS] BUG #3244: problem with PREPARE
On Mon, Apr 23, 2007 at 02:02:04PM -0700, William Lawrance wrote: Our first attempt to use the ECPG prepare interface revealed that ECPG doesn't use the PQlib prepare function. The ECPG prepare replaces any parameters with their values and presents a new SQL statement to the This is true and should also be documented. The reason for this behaviour is simply that the ECPG prepare feature was added before the backend had its own prepare feature. And no one has changed it so far. There are several difficulties to be encountered when attempting to use this within a program using the ECPG interface. For example, the connection structure for PQlib isn't readily available, and the transaction semantics must be synchronized with ECPG's state. This did work, but it was fairly clumsy. Right, that's what makes it non-trivial. Since we wanted to do this in a cleaner manner, and also wished to avoid changing the applications if possible, we used the following approach: Within the execute.c module, we added routines to manage a cache of prepared statements. These routines are able to search, insert, and delete entries in the cache. The key for these cache entries is the text of the SQL statement as passed by ECPG from the application program. Within the same module, we replaced the ECPGexecute function. This is the function that is called to execute a statement after some preliminary housekeeping is done. The original ECPGexecute function constructs an ASCII string by replacing each host variable with its current value and then calling PQexec. The new ECPGexecute function does the following: - build an array of the current values of the host variables. - search the cache for an entry indicating that this statement has already been prepare'd, via PQprepare - If no entry was found in the previous step, call PQprepare for the statement and then insert an entry for it into the cache. If this requires an entry to be re-used, execute a DEALLOCATE PREPARE for the previous contents. 
- At this point, the SQL statement has been prepare'd by PQlib, either when the statement was executed in the past, or in the previous step. - call PQexecPrepared, using the array of parameters built in the first step above. Does this mean you prepare ALL statements? Or were you only talking about statements that are prepared in the application? Michael -- Michael Meskes Email: Michael at Fam-Meskes dot De, Michael at Meskes dot (De|Com|Net|Org) ICQ: 179140304, AIM/Yahoo: michaelmeskes, Jabber: [EMAIL PROTECTED] Go SF 49ers! Go Rhein Fire! Use Debian GNU/Linux! Use PostgreSQL! ---(end of broadcast)--- TIP 5: don't forget to increase your free space map settings
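[Editor's note: the caching scheme described above can be sketched in miniature. This is a hypothetical Python model, not the actual execute.c code: the prepare and deallocate callbacks stand in for PQprepare and DEALLOCATE PREPARE, and, as described, entries are keyed by the SQL text passed in by the application.]

```python
from collections import OrderedDict

class StatementCache:
    """Hypothetical model of the prepared-statement cache described above.

    When the cache is full, the least recently used entry is re-used:
    its server-side statement is deallocated first (the DEALLOCATE
    PREPARE step), then a fresh one is prepared.
    """

    def __init__(self, capacity, prepare, deallocate):
        self.capacity = capacity
        self.prepare = prepare          # stand-in for PQprepare
        self.deallocate = deallocate    # stand-in for DEALLOCATE PREPARE
        self.entries = OrderedDict()    # SQL text -> server-side name
        self.counter = 0

    def lookup(self, sql):
        """Return a server-side statement name, preparing on a cache miss."""
        if sql in self.entries:
            self.entries.move_to_end(sql)        # cache hit
            return self.entries[sql]
        if len(self.entries) >= self.capacity:   # re-use the LRU entry
            _, old_name = self.entries.popitem(last=False)
            self.deallocate(old_name)
        self.counter += 1
        name = "ecpg_stmt_%d" % self.counter     # hypothetical naming scheme
        self.prepare(name, sql)                  # PQprepare on first use
        self.entries[sql] = name
        return name

# Minimal usage: record the prepare/deallocate traffic in lists
prepared, deallocated = [], []
cache = StatementCache(capacity=2,
                       prepare=lambda name, sql: prepared.append(name),
                       deallocate=deallocated.append)
name = cache.lookup("SELECT * FROM t WHERE id = $1")
assert cache.lookup("SELECT * FROM t WHERE id = $1") == name   # hit: no re-prepare
```

The new ECPGexecute would then call PQexecPrepared with the returned name and the array of host-variable values built in the first step.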
Re: [PATCHES] [HACKERS] Full page writes improvement, code update
3) To maintain crash recovery chance and reduce the amount of archive log, removal of unnecessary full page writes from archive logs is a good choice. Definitely, yes. pg_compresslog could even move the full pages written during backup out of WAL and put them in a different file that needs to be applied before replay of the corresponding WAL after a physical restore. This would further help reduce log shipping volume. To do this, we need both logical log and full page writes in WAL. This is only true in the sense that it allows a less complex implementation of pg_compresslog. Basically a WAL record consists of info about what happened and currently either per-tuple new data or a full page image. The info of what happened together with the full page image is sufficient to reconstruct the per-tuple new data. There might be a few WAL record types (e.g. in btree split?) where this is not so, but we could either fix those or not compress those. This is why I don't like Josh's suggested name of wal_compressable either. WAL is compressible either way; only pg_compresslog would need to be more complex if you don't turn off the full page optimization. I think a good name would tell that you are turning off an optimization (thus my wal_fullpage_optimization on/off). Andreas ---(end of broadcast)--- TIP 5: don't forget to increase your free space map settings
[HACKERS] tsearch2 in 8.3
A few of us on IRC were wondering what the status of tsearch2 is in 8.3 ? Was it decided to include it in core or did we decide to keep FTS as a plugin? Some brief comments from anyone on the inside of the whole FTS issue would be greatly appreciated by us mere end users. Regards, - Naz. ---(end of broadcast)--- TIP 4: Have you searched our list archives? http://archives.postgresql.org
Re: [HACKERS] Fragmentation project
On 4/23/07, Heikki Linnakangas [EMAIL PROTECTED] wrote: Oh, you're talking about distributing partitions across different nodes and parallelizing queries. No, we don't do that today. PL/Proxy actually works like that, only in smaller scope - for function calls only. General solution that partitions free-form SQL will be non-trivial... -- marko ---(end of broadcast)--- TIP 4: Have you searched our list archives? http://archives.postgresql.org
[HACKERS] Re: [BUGS] BUG #3245: PANIC: failed to re-find shared loc k o b j ect
Heikki Linnakangas wrote: Tom Lane wrote: Also, we have a generic issue that making fresh entries in a hashtable might result in a concurrent hash_seq_search scan visiting existing entries more than once; that's definitely not something any of the existing callers are thinking about. Ouch. Note that we can also miss some entries altogether, which is probably even worse. In case someone is wondering how that can happen, here's an example. We're scanning a bucket that contains four entries, and we split it after returning 1: 1 - 2* - 3 - 4 * denotes the next entry the seq scan has stored. If the split goes, for example, like this: 1 - 3 2* - 4 the seq scan will continue scanning from 2, then 4, and miss 3 altogether. I briefly went through all callers of hash_seq_init. The only place where we explicitly rely on being able to add entries to a hash table while scanning it is in tbm_lossify. There are more complex loops in portalmem.c and relcache.c, which I think are safe, but I would need to look closer. There's also the pg_prepared_statement set-returning function that keeps a scan open across calls, which seems error-prone. Should we document the fact that it's not safe to insert new entries into a hash table while scanning it, and fix the few call sites that do that, or does anyone see a better solution? One alternative would be to inhibit bucket splits while a scan is in progress, but then we'd need to take care to clean up after each scan. -- Heikki Linnakangas EnterpriseDB http://www.enterprisedb.com ---(end of broadcast)--- TIP 7: You can help support the PostgreSQL project by donating at http://www.postgresql.org/about/donate
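[Editor's note: Heikki's missed-entry scenario can be reproduced with a toy model. This is a hypothetical sketch, not dynahash code: entries live in a singly linked bucket chain, the scan remembers a pointer to the next entry, and the split relinks the same nodes into two chains (here, odd keys stay and even keys move), exactly as in the 1-3 / 2-4 example above.]

```python
class Entry:
    def __init__(self, key):
        self.key = key
        self.next = None

def make_chain(keys):
    head = None
    for k in reversed(keys):
        e = Entry(k)
        e.next = head
        head = e
    return head

def split(head, moves):
    """Relink the existing nodes in place into two chains, as a bucket
    split does; nodes whose key satisfies moves() go to the new bucket."""
    old = new = old_tail = new_tail = None
    e = head
    while e is not None:
        nxt, e.next = e.next, None
        if moves(e.key):
            if new_tail is None:
                new = new_tail = e
            else:
                new_tail.next = e
                new_tail = e
        else:
            if old_tail is None:
                old = old_tail = e
            else:
                old_tail.next = e
                old_tail = e
        e = nxt
    return old, new

# Bucket before the split: 1 -> 2 -> 3 -> 4
bucket = make_chain([1, 2, 3, 4])

visited = [bucket.key]        # scan returns 1 ...
saved = bucket.next           # ... and stores 2 as the next entry

# Split happens mid-scan: 1 -> 3 stays, 2 -> 4 moves to a new bucket
old_bucket, new_bucket = split(bucket, lambda k: k % 2 == 0)

# The scan resumes from its saved pointer and follows the relinked chain
e = saved
while e is not None:
    visited.append(e.key)
    e = e.next

print(visited)   # [1, 2, 4] -- entry 3 is never visited
```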
[HACKERS] Google SoC: column-level privilege subsystem
Hi all, I'm one of the Google SoC students for PostgreSQL. My project is to implement column-level privileges in PG. Here is a description of my project. Any and all help and/or comments are appreciated. The table-level privilege subsystem in PG is currently used like this: GRANT { { SELECT | INSERT | UPDATE | DELETE | RULE | REFERENCES | TRIGGER } [,...] | ALL [ PRIVILEGES ] } ON [ TABLE ] tablename [, ...] TO { username | GROUP groupname | PUBLIC } [, ...] [ WITH GRANT OPTION ] REVOKE [ GRANT OPTION FOR ] { { SELECT | INSERT | UPDATE | DELETE | RULE | REFERENCES | TRIGGER } [,...] | ALL [ PRIVILEGES ] } ON [ TABLE ] tablename [, ...] FROM { username | GROUP groupname | PUBLIC } [, ...] [ CASCADE | RESTRICT ] Following this and the SQL92 standard, I'll define the grammar of column-level privileges as follows: GRANT { { SELECT | INSERT | UPDATE | REFERENCES } [,...] | ALL [ PRIVILEGES ] } ON [ TABLE ] tablename [ '(' columnname [, ...] ')' ] [, ...] TO { username | GROUP groupname | PUBLIC } [, ...] [ WITH GRANT OPTION ] REVOKE [ GRANT OPTION FOR ] { { SELECT | INSERT | UPDATE | REFERENCES } [,...] | ALL [ PRIVILEGES ] } ON [ TABLE ] tablename [ '(' columnname [, ...] ')' ] [, ...] FROM { username | GROUP groupname | PUBLIC } [, ...] [ CASCADE | RESTRICT ] According to SQL92, if a TABLE privilege is granted to a grantee, he/she also gets that privilege on ALL columns of the table. That is to say, if you issue this command: GRANT SELECT ON student TO Tom; then Tom can select all columns of the student table. This is also true for UPDATE, INSERT and REFERENCES. So, if Tom may see the name column of the student table but has no right to see the other columns, a superuser should issue commands like these: REVOKE SELECT ON student FROM Tom; GRANT SELECT ON student(name) TO Tom; Here is a plan for my project: 1. Modify the parser to support the column-level Grant/Revoke grammar defined above. This will change gram.y and some related data structures. 2. 
Add code to record column-level privilege information as meta-data in the system catalog pg_attribute. This will add a column named 'attacl' to pg_attribute. The format of this column is just the same as 'pg_class.relacl'. 3. Before evaluating a SQL command, check column-level privileges. This is done AFTER checking table-level privileges. As I mentioned before, if the table-level privilege is granted, it's not necessary to check column-level privileges. ---(end of broadcast)--- TIP 2: Don't 'kill -9' the postmaster
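[Editor's note: the table-then-column check in step 3 can be sketched as follows. This is a hypothetical illustration, not proposed PostgreSQL code: plain dictionaries stand in for pg_class.relacl and the proposed pg_attribute.attacl, and the 'grade' column is made up for the example.]

```python
def check_select(user, table, columns, table_acl, column_acl):
    """Table-level grant implies every column (per SQL92);
    otherwise each referenced column needs its own grant."""
    if "SELECT" in table_acl.get((user, table), set()):
        return True
    return all(
        "SELECT" in column_acl.get((user, table, col), set())
        for col in columns
    )

# Tom may see only student.name, per the REVOKE/GRANT example above
table_acl = {}                                          # table-level grant revoked
column_acl = {("Tom", "student", "name"): {"SELECT"}}   # stand-in for attacl

assert check_select("Tom", "student", ["name"], table_acl, column_acl)
# 'grade' is a hypothetical ungranted column, so this reference fails
assert not check_select("Tom", "student", ["name", "grade"], table_acl, column_acl)
```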
Re: [HACKERS] [pgsql-advocacy] Wild idea: 9.0?
On Tuesday 24 April 2007 01:32, Magnus Hagander wrote: That would be just because you don't know the numbering scheme. 8.2 to 8.3 is considered major in these parts. See http://www.postgresql.org/support/versioning Is that official policy? I don't see any mention of it in the docs. Are you somehow suggesting that our website isn't official? Where did you get that idea? Website information can often be of a transient nature, with no history of changes or even the existence of information. Documentation is a little more permanent, and at least offers a record of agreement at a specific point in time. As for inclusion in the docs, I believe we're still waiting for your patch... We'll see :-) -- Robert Treat Build A Brighter LAMP :: Linux Apache {middleware} PostgreSQL ---(end of broadcast)--- TIP 3: Have you checked our extensive FAQ? http://www.postgresql.org/docs/faq
Re: [HACKERS] TODO idea - implicit constraints across child tables with a common column as primary key (but obviously not a shared index)
Tom Lane [EMAIL PROTECTED] writes: Gregory Stark [EMAIL PROTECTED] writes: The main data from the statistics that's of interest here are the extreme values of the histogram. If we're not interested in any values in that range then we can exclude the partition entirely. Except that there is *no* guarantee that the histogram includes the extreme values --- to promise that would require ANALYZE to scan every table row. That's why I said a subsequent VACUUM ANALYZE could mark the resulting statistics as authoritative --- not just plain ANALYZE. There's another issue here too. One of the other motivations is to be able to put read-only tables on read-only media. To do that would require freezing every tuple, which would at the very least involve looking at every tuple. (It would also involve waiting until all tuples are freezable too.) So there's a natural step in which to gather these authoritative statistics anyway. -- Gregory Stark EnterpriseDB http://www.enterprisedb.com ---(end of broadcast)--- TIP 9: In versions below 8.0, the planner will ignore your desire to choose an index scan if your joining column's datatypes do not match
Re: [HACKERS] [pgsql-advocacy] Wild idea: 9.0?
On Tue, Apr 24, 2007 at 09:18:54AM -0400, Robert Treat wrote: On Tuesday 24 April 2007 01:32, Magnus Hagander wrote: That would be just because you don't know the numbering scheme. 8.2 to 8.3 is considered major in these parts. See http://www.postgresql.org/support/versioning Is that official policy? I don't see any mention of it in the docs. Are you somehow suggesting that our website isn't official? Where did you get that idea? Website information can often be of a transient nature, with no history of changes or even the existence of information. Documentation is a little more permanent, and at least offers a record of agreement at a specific point in time. Well, there is cvs history. But I see your point. Doesn't make it any less official, though, just transient. //Magnus ---(end of broadcast)--- TIP 4: Have you searched our list archives? http://archives.postgresql.org
Re: [HACKERS] [BUGS] BUG #3244: problem with PREPARE
Michael Meskes wrote: On Mon, Apr 23, 2007 at 02:02:04PM -0700, William Lawrance wrote: Our first attempt to use the ECPG prepare interface revealed that ECPG doesn't use the PQlib prepare function. The ECPG prepare replaces any parameters with their values and presents a new SQL statement to the This is true and should also be documented. The reason for this behaviour is simply that the ECPG prepare feature was added before the backend had its own prepare feature. And no one has changed it so far. It is in the TODO: o Use backend PREPARE/EXECUTE facility for ecpg where possible -- Bruce Momjian [EMAIL PROTECTED] http://momjian.us EnterpriseDB http://www.enterprisedb.com + If your life is a hard drive, Christ can be your backup. + ---(end of broadcast)--- TIP 4: Have you searched our list archives? http://archives.postgresql.org
Re: [HACKERS] [pgsql-advocacy] Wild idea: 9.0?
Magnus Hagander wrote: On Tue, Apr 24, 2007 at 09:18:54AM -0400, Robert Treat wrote: On Tuesday 24 April 2007 01:32, Magnus Hagander wrote: That would be just because you don't know the numbering scheme. 8.2 to 8.3 is considered major in these parts. See http://www.postgresql.org/support/versioning Is that official policy? I don't see any mention of it in the docs. Are you somehow suggesting that our website isn't official? Where did you get that idea? Website information can often be of a transient nature, with no history of changes or even the existence of information. Documentation is a little more permanent, and at least offers a record of agreement at a specific point in time. Well, there is cvs history. But I see your point. Doesn't make it any less official, though, just transient. There is plenty of valid information that is not in the docs. One might just as well ask where did the policy come from that the docs are the only authoritative source of information on policy. ;-) cheers andrew ---(end of broadcast)--- TIP 6: explain analyze is your friend
[HACKERS] Email signature in release announcement posting
I want to apologize for having my signature at the bottom of the 8.2.4, etc. release announcement. There is no reason to mention my name, company, and web site at the bottom of something that is supposed to come from the community. And I didn't even write that email --- I was doing it for someone else. I also didn't realize my email would be picked up, verbatim, by web sites: http://lwn.net/Articles/231517/ As soon as I saw the posted version I knew the signature looked odd. I will try to remember to remove it in the future if I ever have to make such announcements again. -- Bruce Momjian [EMAIL PROTECTED] http://momjian.us EnterpriseDB http://www.enterprisedb.com + If your life is a hard drive, Christ can be your backup. + ---(end of broadcast)--- TIP 7: You can help support the PostgreSQL project by donating at http://www.postgresql.org/about/donate
Re: [HACKERS] Email signature in release announcement posting
Bruce Momjian wrote: I want to apologize for having my signature at the bottom of the 8.2.4, etc. release announcement. There is no reason to mention my name, company, and web site at the bottom of something that is supposed to come from the community. And I didn't even write that email --- I was doing it for someone else. I also didn't realize my email would be picked up, verbatim, by web sites: http://lwn.net/Articles/231517/ As soon as I saw the posted version I knew the signature looked odd. I will try to remember to remove it in the future if I ever have to make such announcements again. Well, I at least was not even slightly offended. Some people are way too sensitive in my opinion. cheers andrew (Pastafarian) ---(end of broadcast)--- TIP 5: don't forget to increase your free space map settings
Re: [HACKERS] tsearch2 in 8.3
Naz Gassiep wrote: A few of us on IRC were wondering what the status of tsearch2 is in 8.3 ? Was it decided to include it in core or did we decide to keep FTS as a plugin? Some brief comments from anyone on the inside of the whole FTS issue would be greatly appreciated by us mere end users. Regards, The patch is in the patch queue and we will try to get it into 8.3. -- Bruce Momjian [EMAIL PROTECTED] http://momjian.us EnterpriseDB http://www.enterprisedb.com + If your life is a hard drive, Christ can be your backup. + ---(end of broadcast)--- TIP 7: You can help support the PostgreSQL project by donating at http://www.postgresql.org/about/donate
Re: [HACKERS] RESET command seems pretty disjointed now
On 4/23/07, Neil Conway [EMAIL PROTECTED] wrote: On Tue, 2007-04-17 at 16:34 +0300, Marko Kreen wrote: Attached patch does following conversions: ISTM it would be cleaner to use an enum to identify the different variants of the DISCARD command, rather than a character string. Is guc.c still the logical place for the implementation of DISCARD? Something under backend/commands might be better, although I don't see a real obvious place for it. The psql tab completion code requires updating for the new DISCARD command. Attached patch addresses all 3 comments. As it will be a top-level command, I put the code into commands/discard.c -- marko discard_v2.diff.gz Description: GNU Zip compressed data ---(end of broadcast)--- TIP 2: Don't 'kill -9' the postmaster
Re: [HACKERS] [BUGS] BUG #3245: PANIC: failed to re-find shared loc k o b j ect
Heikki Linnakangas [EMAIL PROTECTED] writes: I briefly went through all callers of hash_seq_init. The only place where we explicitly rely on being able to add entries to a hash table while scanning it is in tbm_lossify. There's more complex loops in portalmem.c and relcache.c, which I think are safe, but would need to look closer. There's also the pg_prepared_statement set-returning-function that keeps a scan open across calls, which seems error-prone. The pending-fsync stuff in md.c is also expecting to be able to add entries during a scan. I don't think we can go in the direction of forbidding insertions during a scan --- as the case at hand shows, it's just not always obvious that that could happen, and finding/fixing such a problem is nigh impossible. (We were darn fortunate to be able to reproduce this one.) Plus we have a couple of places where it's really necessary to be able to do it, anyway. The only answer I can see that seems reasonably robust is to change dynahash.c so that it tracks whether any seq_search scans are open on a hashtable, and doesn't carry out any splits while one is. This wouldn't cost anything noticeable in performance, assuming that not very many splits are postponed. The PITA aspect of it is that we'd need to add bookkeeping mechanisms to ensure that the count of active scans gets cleaned up on error exit. It's not like we've not got lots of those, though. Possibly we could simplify matters a bit by not worrying about cleaning up leaked counts at subtransaction abort, ie, the list of open scans would only get forced to empty at top transaction end. This carries a slightly higher risk of meaningful performance degradation, but in practice I doubt it's a big problem. If we agreed that then we'd not need ResourceOwner support --- it could be handled like LWLock counts. pg_prepared_statement is simply broken --- what if the next-to-scan statement is deleted between calls? It'll have to be changed. Comments? 
regards, tom lane ---(end of broadcast)--- TIP 9: In versions below 8.0, the planner will ignore your desire to choose an index scan if your joining column's datatypes do not match
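[Editor's note: Tom's proposed fix (track how many seq scans are open on a hashtable and postpone any bucket split while the count is nonzero) can be sketched in miniature. This is a hypothetical toy model, not dynahash code: the fill-factor check, bucket layout, and rehash policy are all made up, and the context manager's finally clause stands in for the error-exit bookkeeping the real patch would need.]

```python
import contextlib

class ScanSafeTable:
    """Toy hash table that postpones bucket splits while a seq scan is open."""

    def __init__(self):
        self.buckets = [[] for _ in range(2)]
        self.count = 0
        self.active_scans = 0
        self.split_needed = False

    def insert(self, key):
        self.buckets[hash(key) % len(self.buckets)].append(key)
        self.count += 1
        if self.count > 2 * len(self.buckets):    # toy fill-factor check
            if self.active_scans == 0:
                self._split()
            else:
                self.split_needed = True          # postpone: a scan is open

    def _split(self):
        # Double the bucket count and rehash everything (a toy stand-in
        # for dynahash's incremental one-bucket-at-a-time split)
        old = [k for b in self.buckets for k in b]
        self.buckets = [[] for _ in range(2 * len(self.buckets))]
        for k in old:
            self.buckets[hash(k) % len(self.buckets)].append(k)
        self.split_needed = False

    @contextlib.contextmanager
    def seq_scan(self):
        # The real fix must also decrement this on error exit, hence the
        # ResourceOwner / transaction-end bookkeeping discussed above.
        self.active_scans += 1
        try:
            yield (k for b in self.buckets for k in b)
        finally:
            self.active_scans -= 1
            if self.active_scans == 0 and self.split_needed:
                self._split()   # carry out the postponed split
```

Because no chain is ever relinked while a scan is open, the missed-entry and double-visit hazards from the bucket-split example cannot occur.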
Re: [HACKERS] [BUGS] BUG #3244: problem with PREPARE
In the modified version of ECPG that we used for our benchmark, we PREPARE'd all statements. -Original Message- From: Michael Meskes [mailto:[EMAIL PROTECTED]] Sent: Tuesday, April 24, 2007 12:45 AM To: William Lawrance Cc: Tom Lane; [EMAIL PROTECTED]; pgsql-hackers@postgresql.org Subject: Re: [HACKERS] [BUGS] BUG #3244: problem with PREPARE [...] Does this mean you prepare ALL statements? Or were you only talking about statements that are prepared in the application? Michael -- Michael Meskes Email: Michael at Fam-Meskes dot De, Michael at Meskes dot (De|Com|Net|Org) ICQ: 179140304, AIM/Yahoo: michaelmeskes, Jabber: [EMAIL PROTECTED] Go SF 49ers! Go Rhein Fire! Use Debian GNU/Linux! Use PostgreSQL! ---(end of broadcast)--- TIP 6: explain analyze is your friend
Re: [HACKERS] Google SoC: column-level privilege subsystem
Golden Liu wrote: 3. Before evaluating a SQL command, check column-level privilege. This is done AFTER checking table-level privilege. As I mentioned before, if table-level privilege is granted, it's not necessary to check column-level privilege. Golden, this sounds good. I'm just a user. It sounds like table || column is the check, so table implies all of its columns, i.e. revoking a column permission does nothing unless the TABLE permission is also revoked. It also might be nice to specify some of the failure / usage modes, i.e. how does SELECT * FROM Students work if I don't have permission to a column? Return all values except for the forbidden ones? How does SELECT ForbiddenColumn FROM Students work? For INSERTs, they probably need to fail if you don't have permission to non-null columns. What about columns with default values? Are inserts permitted if you don't have permission to a column with default values? Do you have a project page up somewhere? I wouldn't mind helping with some of the documentation, for example. Good luck! - August ---(end of broadcast)--- TIP 4: Have you searched our list archives? http://archives.postgresql.org
[HACKERS] UNION with more than 2 branches
The resolution to my problem with the select_common_type() error message turned out to be that this doesn't work: postgres=# select null union select null union select 1; ERROR: UNION types text and integer cannot be matched That's because it resolves the first two branches independently, then defaults to text if it can't find anything better, and then tries to match text to the integer in the third branch. This should probably be fixed sometime. Maybe make a note in the TODO list? -- Peter Eisentraut http://developer.postgresql.org/~petere/ ---(end of broadcast)--- TIP 2: Don't 'kill -9' the postmaster
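[Editor's note: the failure Peter describes can be modeled with a toy resolver. This is a hypothetical illustration of the pairwise behaviour, not the real select_common_type(): types are plain strings, unknown yields to any known type, and a pair of unknowns defaults to text, exactly as described above.]

```python
def resolve_pair(a, b):
    """Two-at-a-time resolution, as the parser does today (toy version)."""
    if a == b == "unknown":
        return "text"                  # default when nothing better is found
    if a == "unknown":
        return b
    if b == "unknown":
        return a
    if a == b:
        return a
    raise TypeError("UNION types %s and %s cannot be matched" % (a, b))

def resolve_left_to_right(types):
    result = types[0]
    for t in types[1:]:
        result = resolve_pair(result, t)
    return result

def resolve_all_at_once(types):
    """Resolve across every branch before defaulting (the suggested fix)."""
    known = {t for t in types if t != "unknown"}
    if len(known) > 1:
        raise TypeError("UNION types %s cannot be matched" % ", ".join(sorted(known)))
    return known.pop() if known else "text"

branches = ["unknown", "unknown", "integer"]   # the three SELECTs above
try:
    resolve_left_to_right(branches)            # (unknown, unknown) -> text, then text vs integer
except TypeError as err:
    print(err)                                 # UNION types text and integer cannot be matched
print(resolve_all_at_once(branches))           # integer
```

As the follow-up messages note, collapsing the branches this way changes the uniqueness semantics of the inner UNION, which is why the spec question is not trivial.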
Re: [HACKERS] Google SoC: column-level privilege subsystem
Am Dienstag, 24. April 2007 14:54 schrieb Golden Liu: Here is a plan of my project: All of that should be pretty straightforward. But have you given any thoughts to how you want to represent the column privileges in the parse tree and how you want to process them in the rewriter? That will be the real difficulty in this project, I think. -- Peter Eisentraut http://developer.postgresql.org/~petere/ ---(end of broadcast)--- TIP 4: Have you searched our list archives? http://archives.postgresql.org
Re: [HACKERS] UNION with more than 2 branches
Peter Eisentraut [EMAIL PROTECTED] writes: The resolution to my problem with the select_common_type() error message turned out to be that this doesn't work: postgres=# select null union select null union select 1; ERROR: UNION types text and integer cannot be matched Yeah, this has been noted before. The sticking point is that it's not clear that resolving types across more than two branches at a time is legal per SQL spec. The spec defines UNION as exactly two at a time, ie the above is really (select null union select null) union select 1; and there is not any language that would justify allowing the 1 to determine the data type of the inner UNION. It would not be all that important in a UNION ALL case, maybe, but for UNION the assigned data type determines what values are considered duplicates, and thus can have real impact on the results. Maybe we should just ignore those qualms and do it anyway --- I must admit that I'm hard-pressed to come up with a situation where anyone would really want different datatypes used in the inner union than the outer. regards, tom lane ---(end of broadcast)--- TIP 7: You can help support the PostgreSQL project by donating at http://www.postgresql.org/about/donate
Re: [HACKERS] UNION with more than 2 branches
Tom Lane [EMAIL PROTECTED] writes: Maybe we should just ignore those qualms and do it anyway --- I must admit that I'm hard-pressed to come up with a situation where anyone would really want different datatypes used in the inner union than the outer. Does it even matter except in the case of nulls? I mean, if the inner pair uses integer and then the outer pair uses bigint it'll still work correctly, no? What would happen if the inner pair defaulted null to unknown instead of text? Then the next level would have a chance to union between unknown and integer successfully. It's a bit odd that that's basically what happens currently *except* for in unions: postgres=# create table foo as select null; WARNING: column ?column? has type unknown DETAIL: Proceeding with relation creation anyway. SELECT postgres=# create table bar as select null union all select null; SELECT postgres=# \d foo Table public.foo Column | Type| Modifiers --+---+--- ?column? | unknown | postgres=# \d bar Table public.bar Column | Type | Modifiers --+--+--- ?column? | text | -- Gregory Stark EnterpriseDB http://www.enterprisedb.com ---(end of broadcast)--- TIP 5: don't forget to increase your free space map settings
Re: [HACKERS] UNION with more than 2 branches
Gregory Stark [EMAIL PROTECTED] writes: Tom Lane [EMAIL PROTECTED] writes: Maybe we should just ignore those qualms and do it anyway --- I must admit that I'm hard-pressed to come up with a situation where anyone would really want different datatypes used in the inner union than the outer. Does it even matter except in the case of nulls? I mean, if the inner pair uses integer and then the outer pair uses bigint it'll still work correctly, no? Oh, it absolutely matters: you can get different answers. Consider (select '1' union select ' 1') union all select 1; Ignoring the point that we have no implicit integer/text cast, this would yield three rows if the inner union is treated as text, vs two rows if it's treated as integer. Likewise, '1.0' is different from '1' according to some datatypes and not others. The urgency of this objection decreases greatly if we get rid of all the implicit cross-type-category casts, I think. Offhand the only trouble case I can come up with without using a cross-category conversion is trailing blanks in char vs text/varchar. What would happen if the inner pair defaulted null to unknown instead of text? You're missing the point, which is that the inner UNION needs to decide what its uniqueness semantics are, independently of what might happen to its result later. Or that's how I read the spec anyway. regards, tom lane ---(end of broadcast)--- TIP 4: Have you searched our list archives? http://archives.postgresql.org
Re: [PATCHES] [HACKERS] Full page writes improvement, code update
Koichi, Andreas, 1) To deal with partial/inconsistent writes to the data file at crash recovery, we need full page writes at the first modification to pages after each checkpoint. It consumes much of WAL space. We need to find a way around this someday. Other DBs don't do this; it may be because they're less durable, or because they fixed the problem. I don't think there should be only one setting. It depends on how the database is operated. Leaving wal_add_optimization_info = off as the default does not bring any change in WAL and archive log handling. I understand some people may not be happy with an additional 3% or so increase in WAL size, especially people who don't need archive logs at all. So I prefer to leave the default off. Except that, is there any reason to turn this off if we are archiving? Maybe it should just be slaved to archive_command ... if we're not using PITR, it's off, if we are, it's on. 1) is there any throughput benefit for platforms with fast CPU but constrained I/O (e.g. 2-drive webservers)? Any penalty for servers with plentiful I/O? I've only run benchmarks with the archive process running, because wal_add_optimization_info=on does not make sense if we don't archive WAL. In this situation, total I/O decreases because writes to the archive log decrease. Because of the 3% or so increase in WAL size, there will be an increase in WAL writes, but the decrease in archive writes makes up for it. Yeah, I was just looking for a way to make this a performance feature. I see now that it can't be. ;-) 3) How is this better than command-line compression for log-shipping? e.g. why do we need it in the database? I don't fully understand what command-line compression means. Simon suggested that this patch can be used with log-shipping and I agree. If we compare compression with gzip or other general purpose compression, the compression ratio, CPU usage and I/O of pg_compresslog are all quite better than those of gzip. OK, that answered my question. 
This is why I don't like Josh's suggested name of wal_compressable either. WAL is compressible either way; only pg_compresslog would need to be more complex if you don't turn off the full page optimization. I think a good name would tell that you are turning off an optimization (thus my wal_fullpage_optimization on/off). Well, as a PG hacker I find the name wal_fullpage_optimization quite baffling, and I think our general user base will find it even more so. Now that I have Koichi's explanation of the problem, I vote for simply slaving this to the PITR settings and not having a separate option at all. -- Josh Berkus PostgreSQL @ Sun San Francisco ---(end of broadcast)--- TIP 6: explain analyze is your friend
Re: [PATCHES] [HACKERS] Full page writes improvement, code update
Josh Berkus [EMAIL PROTECTED] writes: Well, as a PG hacker I find the name wal_fullpage_optimization quite baffling and I think our general user base will find it even more so. Now that I have Koichi's explanation of the problem, I vote for simply slaving this to the PITR settings and not having a separate option at all. The way to not have a separate option is to not need one, by having the feature not cost anything extra in the first place. Andreas and I have made the point repeatedly about how to do that. regards, tom lane ---(end of broadcast)--- TIP 7: You can help support the PostgreSQL project by donating at http://www.postgresql.org/about/donate
Re: [HACKERS] UNION with more than 2 branches
Tom Lane [EMAIL PROTECTED] writes: Gregory Stark [EMAIL PROTECTED] writes: Does it even matter except in the case of nulls? I mean, if the inner pair uses integer and then the outer pair uses bigint it'll still work correctly, no? Oh, it absolutely matters: you can get different answers. Consider (select '1' union select ' 1') union all select 1; Ah. What would happen if the inner pair defaulted null to unknown instead of text? You're missing the point, which is that the inner UNION needs to decide what its uniqueness semantics are, independently of what might happen to its result later. Or that's how I read the spec anyway. Ah of course. We wouldn't be able to unionize unknown at all. hmph. -- Gregory Stark EnterpriseDB http://www.enterprisedb.com ---(end of broadcast)--- TIP 6: explain analyze is your friend
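Tom's example can be worked through outside SQL: the inner UNION performs its duplicate elimination under whatever type its branches resolve to, so that type choice changes the row count of the whole query. A rough Python sketch of the two possible resolutions (illustrative only, not PostgreSQL's actual type-resolution code):

```python
def union_distinct(rows):
    """UNION: eliminate duplicates under the branch's resolved type."""
    seen, out = set(), []
    for r in rows:
        if r not in seen:
            seen.add(r)
            out.append(r)
    return out

# (select '1' union select ' 1') union all select 1

# Inner pair resolved as text: '1' and ' 1' are distinct strings, so the
# inner UNION keeps both rows and the outer UNION ALL yields 3 rows.
result_text = union_distinct(['1', ' 1']) + [1]

# If the inner pair were instead coerced to integer by the outer context,
# the two literals compare equal, one is eliminated early, and the outer
# UNION ALL yields only 2 rows.
result_int = union_distinct([int('1'), int(' 1')]) + [1]

print(len(result_text), len(result_int))  # prints: 3 2
```

Which is exactly why the inner UNION has to fix its uniqueness semantics before the outer query is considered.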
Re: [HACKERS] UNION with more than 2 branches
On Tuesday, 24 April 2007 at 19:48, Tom Lane wrote: You're missing the point, which is that the inner UNION needs to decide what its uniqueness semantics are, independently of what might happen to its result later. Or that's how I read the spec anyway. It's probably safer to leave it as is. Maybe there could be a way to make the error message more understandable (Where did the 'text' come from?), but perhaps the additional detail discussed in the other thread will do that. -- Peter Eisentraut http://developer.postgresql.org/~petere/ ---(end of broadcast)--- TIP 2: Don't 'kill -9' the postmaster
Re: [HACKERS] RETURN QUERY in PL/PgSQL?
On Tue, 2007-04-24 at 07:58 +0200, Pavel Stehule wrote: It is RETURN TABLE(SQL) via ANSI SQL 2003 I think there are two basically orthogonal features in the patch: the RETURNS TABLE addition to CREATE FUNCTION, and the RETURN TABLE statement in PL/PgSQL. The former is specified by the SQL standard and is applicable to all PLs, while the latter is syntax sugar for PL/PgSQL. I think it would make sense to split the patch into two separate patches, one for each feature. I'm inclined to agree with Tom that adding PROARGMODE_TABLE isn't worth the trouble: making RETURNS TABLE(...) equivalent to RETURNS SETOF RECORD with OUT parameters strikes me as more elegant. I didn't really understand the name collision argument you made earlier[1]; can you elaborate? Another question is how RETURN NEXT and RETURN TABLE should interact (in PL/PgSQL). I think the two sensible choices are to either disallow a function from using both statements (which is what the patch currently does), or allow both statements to be used, and have RETURN TABLE *not* return from the function -- both RETURN TABLE and RETURN NEXT would append results to the function's result tuplestore. The latter seems more flexible. Do we need the extra set of parentheses in RETURN TABLE? To use one of your earlier examples: CREATE FUNCTION fooff(a int) RETURNS TABLE(a int, b int) AS $$ BEGIN RETURN TABLE(SELECT * FROM Foo WHERE x > a); END; $$ LANGUAGE plpgsql; RETURN TABLE SELECT ... ; should be sufficient to allow correct parsing, and is more consistent with the lack of parentheses in the other RETURN variants. -Neil [1] http://archives.postgresql.org/pgsql-patches/2007-04/msg00311.php ---(end of broadcast)--- TIP 7: You can help support the PostgreSQL project by donating at http://www.postgresql.org/about/donate
Re: [HACKERS] Email signature in release announcement posting
Bruce, All, (x-posting to Advocacy, hopefully to forestall flames) I want to apologize for having my signature at the bottom of the 8.2.4, etc. release announcement. There is no reason to mention my name, company, and web site at the bottom of something that is supposed to come from the community. And I didn't even write that email --- I was doing it for someone else. I also didn't realize my email would be picked up, verbatim, by web sites: http://lwn.net/Articles/231517/ As soon as I saw the posted version I knew the signature looked odd. I will try to remember to remove it in the future if I ever have to make such announcements again. Gah. Well, we learn something every time. I think we should create a [EMAIL PROTECTED] account for you just to post official notices; this will also prevent you from accidentally using the wrong sig line (depending on MTA); it's what I do. -- Josh Berkus PostgreSQL @ Sun San Francisco ---(end of broadcast)--- TIP 4: Have you searched our list archives? http://archives.postgresql.org
Re: [HACKERS] Email signature in release announcement posting
Josh Berkus wrote: Bruce, All, (x-posting to Advocacy, hopefully to forestall flames) I want to apologize for having my signature at the bottom of the 8.2.4, etc. release announcement. There is no reason to mention my name, company, and web site at the bottom of something that is supposed to come from the community. And I didn't even write that email --- I was doing it for someone else. I also didn't realize my email would be picked up, verbatim, by web sites: http://lwn.net/Articles/231517/ As soon as I saw the posted version I knew the signature looked odd. I will try to remember to remove it in the future if I ever have to make such announcements again. Gah. Well, we learn something every time. I think we should create a [EMAIL PROTECTED] account for you just to post official notices; this will I am [EMAIL PROTECTED] Yea, I could use that to post. Never even thought of that. -- Bruce Momjian [EMAIL PROTECTED] http://momjian.us EnterpriseDB http://www.enterprisedb.com + If your life is a hard drive, Christ can be your backup. + ---(end of broadcast)--- TIP 1: if posting/reading through Usenet, please send an appropriate subscribe-nomail command to [EMAIL PROTECTED] so that your message can get through to the mailing list cleanly
[HACKERS] Re: [BUGS] BUG #3245: PANIC: failed to re-find shared lock object
Tom Lane wrote: The pending-fsync stuff in md.c is also expecting to be able to add entries during a scan. No, mdsync starts the scan from scratch after calling AbsorbFsyncRequests. I don't think we can go in the direction of forbidding insertions during a scan --- as the case at hand shows, it's just not always obvious that that could happen, and finding/fixing such a problem is nigh impossible. (We were darn fortunate to be able to reproduce this one.) Plus we have a couple of places where it's really necessary to be able to do it, anyway. The only answer I can see that seems reasonably robust is to change dynahash.c so that it tracks whether any seq_search scans are open on a hashtable, and doesn't carry out any splits while one is. This wouldn't cost anything noticeable in performance, assuming that not very many splits are postponed. The PITA aspect of it is that we'd need to add bookkeeping mechanisms to ensure that the count of active scans gets cleaned up on error exit. It's not like we've not got lots of those, though. We could have two kinds of seq scans, with and without support for concurrent inserts. If you open a scan without that support, it acts just like today, and no extra bookkeeping or clean up by the caller is required. If you need concurrent inserts, we inhibit bucket splits, but it's up to the caller to explicitly close the scan, possibly with PG_TRY/CATCH. I'm not sure if that's simpler in the end, but we could get away without adding generic bookkeeping mechanism. Possibly we could simplify matters a bit by not worrying about cleaning up leaked counts at subtransaction abort, ie, the list of open scans would only get forced to empty at top transaction end. This carries a slightly higher risk of meaningful performance degradation, but in practice I doubt it's a big problem. If we agreed that then we'd not need ResourceOwner support --- it could be handled like LWLock counts. Hmm. 
Unlike lwlocks, hash tables can live in different memory contexts, so we can't just have list of open scans similar to held_lwlocks array. Do we need to support multiple simultaneous seq scans of a hash table? I suppose we do.. -- Heikki Linnakangas EnterpriseDB http://www.enterprisedb.com ---(end of broadcast)--- TIP 4: Have you searched our list archives? http://archives.postgresql.org
Re: [HACKERS] [BUGS] BUG #3245: PANIC: failed to re-find shared lock object
Heikki Linnakangas [EMAIL PROTECTED] writes: Tom Lane wrote: The pending-fsync stuff in md.c is also expecting to be able to add entries during a scan. No, mdsync starts the scan from scratch after calling AbsorbFsyncRequests. That was last month ;-). It doesn't restart any more. We could have two kinds of seq scans, with and without support for concurrent inserts. Yeah, I considered that too, but it just seems too error-prone. We could maybe make it trustworthy by having hash_seq_search complain if it noticed there had been any concurrent insertions --- but then you're putting new overhead into hash_seq_search, which kind of defeats the argument for it (and hash_seq_search is a bit of a bottleneck, so extra cycles there matter). Hmm. Unlike lwlocks, hash tables can live in different memory contexts, so we can't just have list of open scans similar to held_lwlocks array. I had first thought about adding a scan counter to the hashtable control struct, but the prospect of hash tables being deallocated while the central list still has references to them seems pretty scary --- we could find ourselves clobbering some other data structure entirely when we go to decrement the count. What seems better now is to have an array or list of HTAB pointers, one for each active scan (so the same hashtable might appear in the list multiple times). When we are considering whether to split, we have to look through the list to see if our table is listed. The list is unlikely to be long so this shouldn't affect performance. If a hash table is deallocated while we still think it has an active scan, nothing very bad happens. The absolute worst possible consequence is if some new hash table gets allocated at exactly the same spot; we'd inhibit splits on it, which still doesn't break correctness though it might kill performance. In any case we can have checking code that complains about leaked scan pointers at transaction end, so any such bug shouldn't survive long. 
For shared hash tables, this design only works for scans being done by the same backend doing insertion; but locking considerations would probably require that no other backend inserts while we scan anyway (you'd need something much more complicated than shared/exclusive locks to manage it otherwise). regards, tom lane ---(end of broadcast)--- TIP 2: Don't 'kill -9' the postmaster
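The scheme Tom describes can be sketched in miniature: a flat list of table references stands in for the central list of active scans, and the insert path simply declines to split (expand) while its table appears in the list. All names below are illustrative, not dynahash's actual API:

```python
active_scans = []  # one entry per open seq scan; a table may appear more than once

class HashTable:
    def __init__(self, nbuckets=4, max_fill=1.0):
        self.buckets = [[] for _ in range(nbuckets)]
        self.nentries = 0
        self.max_fill = max_fill

    def insert(self, key, value):
        self.buckets[hash(key) % len(self.buckets)].append((key, value))
        self.nentries += 1
        # Postpone bucket splits while any scan is open on this table, so
        # no entry can migrate to a bucket the scan has already passed.
        if (self.nentries / len(self.buckets) > self.max_fill
                and self not in active_scans):
            self._expand()

    def _expand(self):
        old = [e for bucket in self.buckets for e in bucket]
        self.buckets = [[] for _ in range(2 * len(self.buckets))]
        self.nentries = 0
        for k, v in old:
            self.insert(k, v)

    def seq_scan(self):
        # Register the scan; the finally-block plays the role of the
        # end-of-transaction cleanup that reclaims leaked scan pointers.
        active_scans.append(self)
        try:
            for bucket in list(self.buckets):
                yield from list(bucket)
        finally:
            active_scans.remove(self)
```

If a table is freed while still listed, the worst case is that an unrelated table at the same address has its splits inhibited -- a performance issue rather than a correctness one, matching Tom's argument above.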
[HACKERS] row-level stats and last analyze time
[ CC'ing -hackers ] On Sun, 2007-04-22 at 16:10 +0200, Guillaume Lelarge wrote: This patch adds a sentence on monitoring.sgml explaining that stats_row_level needs to be enabled if user wants to get last vacuum/analyze execution time. This behavior was introduced in r1.120 of postmaster/pgstat.c: Modify pgstats code to reduce performance penalties from oversized stats data files: avoid creating stats hashtable entries for tables that aren't being touched except by vacuum/analyze [...] which included other modifications to reduce the pgstat I/O volume in 8.1. I don't think this particular change was wise: the reduction in pgstat volume is pretty marginal, and it is counter-intuitive for stats_row_level to affect whether the last ANALYZE / VACUUM is recorded. (Plus, the optimization is not even enabled with the default postgresql.conf settings.) -Neil ---(end of broadcast)--- TIP 3: Have you checked our extensive FAQ? http://www.postgresql.org/docs/faq
Re: [HACKERS] [JDBC] JDBC driver reports a protocol error for a CVS HEAD server
On Tue, 24 Apr 2007, Ned T. Crigler wrote: The JDBC driver does not connect successfully to a CVS HEAD server (updated as of today) when using a database set to UTF8 encoding; enabling the debug messages in the driver shows that it is sending client_encoding=UNICODE in the startup packet, but the server is reporting client_encoding=UTF8 back to the driver, causing the driver to complain: Indeed, referring to -hackers as this is a recent backend change. The backend is reporting the database encoding as the client encoding. When a connection is created with a startup packet sending client_encoding = UNICODE the server responds with a ParameterStatus message of client_encoding = server encoding. So something has gone wrong here. It's not just a UNICODE/UTF-8 problem as I see the server responding with LATIN1 with a LATIN1 database. Kris Jurka ---(end of broadcast)--- TIP 6: explain analyze is your friend
Re: [HACKERS] Google SoC: column-level privilege subsystem
-Original Message- From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of August Zajonc Sent: Tuesday, 24 April 2007 18:34 To: Golden Liu Cc: pgsql-hackers@postgresql.org Subject: Re: [HACKERS] Google SoC: column-level privilege subsystem Golden Liu wrote: 3. Before evaluating a SQL command, check column-level privilege. This is done AFTER checking table-level privilege. As I mentioned before, if table-level privilege is granted, it's not necessary to check column-level privilege. Golden, this sounds good. I'm just a user. This is already fiddling with several very preliminary details of what is coming up; since they have already been presented, here are my ideas about them. The addition of column-specific privileges is a good step imho. It sounds like table || column is the check, so table implies all of columns. ie, revoking a column permission does nothing unless TABLE permission is also revoked. If this is implemented as suggested here, it will become extremely counter-intuitive. It's just like having access to a file if you have (explicitly been granted) access to the file OR to its containing directory (thus sort of implicit). My strong opinion is that REVOKE of a column-level privilege should revoke access to that column; in effect it should reduce the table-level grant to column-level grants. Alternatively, if I grant myself rights to all the columns, is that now different from 'ALL' columns? Perhaps some other thoughts... It also might be nice to specify some of the failure / usage modes. ie, how does SELECT * FROM Students work if I don't have permission to a column. Return all values except for forbidden ones? How does SELECT ForbiddenColumn FROM Students work. Since * means all columns, this is what you (usually) want, or at least should expect to get. Other options might confuse users by having disappearing columns and strange errors in effect. Again, my intuitive idea about it. 
A more interesting example might be: what if you request rows you don't have access to, but the optimizer can get rid of them (usually pointing to a badly written query), e.g.: SELECT x FROM (SELECT * FROM T); Should this constitute an error if I don't have access to column T.y? In my opinion it's perfectly legal, and desirable, to let it fail. For INSERTS, they probably need to fail if you don't have permission to non-null columns. What about columns with default values? Are inserts permitted if you don't have permission to a column with default values? You are suggesting different behaviour for different 'defaults': 1) If the default is not given (i.e. it's actually NULL) then put in NULL. 2) If the default is explicitly given (NULL or a value) then fail? I would really suggest using a single rule: if the user has no access and tries to specify the column, it should fail. Otherwise take the current behaviour of the system. Just my EUR 0,02... Good luck with the project. - Joris Dobbelsteen ---(end of broadcast)--- TIP 6: explain analyze is your friend
Re: [HACKERS] Google SoC: column-level privilege subsystem
It sounds like table || column is the check, so table implies all of columns. ie, revoking a column permission does nothing unless TABLE permission is also revoked. If this is implemented as suggested here, it will become extremely counter-intuitive. It's just like having access to a file if you have (explicitly been granted) access to the file OR to its containing directory (thus sort of implicit). My strong opinion is that REVOKE of a column-level privilege should revoke access to that column; in effect it should reduce the table-level grant to column-level grants. I think this causes problems when columns are added to the table. If table X has columns A, B, C, and D, and the user has access to A-C but not D, then what happens when column E is added? Logically, if the user started with table access and column D was revoked, then they should have access to column E. But if they started with access to nothing and were explicitly granted A-C, then they shouldn't. You could solve this by having explicit positive and negative ACLs, i.e. your permissions for a particular column are: [table permissions] + [positive column permissions] - [negative column permissions] However, this is both more complicated and possibly imposes a considerably larger performance penalty than the proposed design. In the proposed design, if the appropriate table permissions are granted (presumably the common case), we don't even need to look at the column permissions. But with this design, we need to check every column for negative permissions (unless we cache something at the table level that tells us whether any per-column permissions exist). You'll also need a more complicated grant/revoke syntax so that you can add a positive permission, add a negative permission, or eliminate the per-column setting entirely (whereas in the proposed design grant and revoke are logically opposites of each other, that's not the case here). 
...Robert ---(end of broadcast)--- TIP 4: Have you searched our list archives? http://archives.postgresql.org
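The "table + positive - negative" arithmetic Robert sketches can be written down directly. A small Python sketch with hypothetical names (no such structures exist in PostgreSQL; this only illustrates the column-E behaviour discussed above):

```python
def column_allowed(priv, table_acl, col_pos, col_neg, column):
    """Effective column privilege:
    [table permissions] + [positive column perms] - [negative column perms]."""
    granted = priv in table_acl or (column, priv) in col_pos
    denied = (column, priv) in col_neg
    return granted and not denied

# Table-wide SELECT minus an explicit negative entry on column D:
# a newly added column E is accessible automatically.
assert column_allowed('SELECT', {'SELECT'}, set(), {('D', 'SELECT')}, 'E')
assert not column_allowed('SELECT', {'SELECT'}, set(), {('D', 'SELECT')}, 'D')

# No table grant, explicit positive grants on A only:
# the same new column E is *not* accessible.
assert column_allowed('SELECT', set(), {('A', 'SELECT')}, set(), 'A')
assert not column_allowed('SELECT', set(), {('A', 'SELECT')}, set(), 'E')
```

The cost Robert mentions is visible here too: col_neg must be consulted even when the table grant alone would have sufficed.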
Re: [HACKERS] [DOCS] row-level stats and last analyze time
On Tuesday 24 April 2007 17:38, Neil Conway wrote: [ CC'ing -hackers ] On Sun, 2007-04-22 at 16:10 +0200, Guillaume Lelarge wrote: This patch adds a sentence on monitoring.sgml explaining that stats_row_level needs to be enabled if user wants to get last vacuum/analyze execution time. This behavior was introduced in r1.120 of postmaster/pgstat.c: Modify pgstats code to reduce performance penalties from oversized stats data files: avoid creating stats hashtable entries for tables that aren't being touched except by vacuum/analyze [...] which included other modifications to reduce the pgstat I/O volume in 8.1. I don't think this particular change was wise: the reduction in pgstat volume is pretty marginal, and it is counter-intuitive for stats_row_level to affect whether the last ANALYZE / VACUUM is recorded. (Plus, the optimization is not even enabled with the default postgresql.conf settings.) +1 -- Robert Treat Build A Brighter LAMP :: Linux Apache {middleware} PostgreSQL ---(end of broadcast)--- TIP 1: if posting/reading through Usenet, please send an appropriate subscribe-nomail command to [EMAIL PROTECTED] so that your message can get through to the mailing list cleanly
Re: [HACKERS] Google SoC: column-level privilege subsystem
Dear August Thank you for your reply. On 4/25/07, August Zajonc [EMAIL PROTECTED] wrote: Golden Liu wrote: 3. Before evaluating a SQL command, check column-level privilege. This is done AFTER checking table-level privilege. As I mentioned before, if table-level privilege is granted, it's not necessary to check column-level privilege. Golden, this sounds good. I'm just a user. It sounds like table || column is the check, so table implies all of columns. ie, revoking a column permission does nothing unless TABLE permission is also revoked. It also might be nice to specify some of the failure / usage modes. ie, how does SELECT * FROM Students work if I don't have permission to a column. Return all values except for forbidden ones? How does SELECT ForbiddenColumn FROM Students work. For SELECT * FROM Students, I think this will just raise an error. In PG, if you submit a command like SELECT * FROM T1, T2 but do not have permission on T2, PG will raise an error. For columns, we should do the same thing. SELECT ForbiddenColumn FROM Students will raise an error too. For INSERTS, they probably need to fail if you don't have permission to non-null columns. What about columns with default values? Are inserts permitted if you don't have permission to a column with default values? For INSERTs, the privilege check will be done only on the columns specified. Take table T with two columns, say C1 and C2, where C2 has a default value. If you just have INSERT permission on C1, this will be right: INSERT INTO T(C1) VALUES (V1) since you just specified C1. But this will raise an error: INSERT INTO T VALUES (V1, default) since you specified C2, which you do not have permission to insert into. Do you have a project page up somewhere? I wouldn't mind helping with some of the documentation for example. Good luck! - August Golden ---(end of broadcast)--- TIP 9: In versions below 8.0, the planner will ignore your desire to choose an index scan if your joining column's datatypes do not match
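Golden's rule -- check privileges only on the columns the INSERT actually specifies -- can be sketched in a few lines of Python (the function and names are purely illustrative, not part of any PostgreSQL code):

```python
def check_insert_privilege(specified_cols, insert_grants):
    """Fail if any column the INSERT explicitly specifies lacks a grant."""
    denied = [c for c in specified_cols if c not in insert_grants]
    if denied:
        raise PermissionError("permission denied for column(s): "
                              + ", ".join(denied))

grants = {'C1'}  # the user holds INSERT on C1 only

# INSERT INTO T(C1) VALUES (V1) -- names only C1, so it is allowed.
check_insert_privilege(['C1'], grants)

# INSERT INTO T VALUES (V1, default) -- specifies C2 as well, so it fails,
# even though C2 has a default.
try:
    check_insert_privilege(['C1', 'C2'], grants)
    rejected = False
except PermissionError:
    rejected = True
print(rejected)  # prints: True
```

The design choice is that the column list of the statement, not the table definition, drives the check; a defaulted column the statement never names costs no privilege.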
Re: [HACKERS] Google SoC: column-level privilege subsystem
Robert Haas wrote: My strong opinion is that REVOKE of a column-level privilege should revoke access to that column; in effect it should reduce the table-level grant to column-level grants. I think this causes problems when columns are added to the table. If table X has columns A, B, C, and D, and the user has access to A-C but not D, then what happens when column E is added? Logically, if the user started with table access and column D was revoked, then they should have access to column E. But if they started with access to nothing and were explicitly granted A-C, then they shouldn't. You could solve this by having explicit positive and negative ACLs, i.e. your permissions for a particular column are: Agreed, I think. While the explicit Table + Positive - Negative is perhaps ideal and the most intuitive (and I think it is), for now I suppose my point was simply that this should be clearly documented. On the question of SELECT raising an error vs omitting the rows, it clearly needs to raise an error in some cases (INSERTs, and FK situations where there is no permission for the FK). I suppose it'll need to raise an error in the SELECT * case as well. For systems with glue code to the database, raising errors in these situations will cause some pain, as such systems usually do a lot of inspection of the database. I don't think pg_attribute is MVCC safe? As more stuff is added to pg_attribute eventually that risks causing some problems doesn't it? - August ---(end of broadcast)--- TIP 5: don't forget to increase your free space map settings
[HACKERS] strange buildfarm failures
two of my buildfarm members had different but pretty weird looking failures lately: http://www.pgbuildfarm.org/cgi-bin/show_log.pl?nm=quagga&dt=2007-04-25%2002:03:03 and http://www.pgbuildfarm.org/cgi-bin/show_log.pl?nm=emu&dt=2007-04-24%2014:35:02 any ideas on what might be causing those? Stefan ---(end of broadcast)--- TIP 9: In versions below 8.0, the planner will ignore your desire to choose an index scan if your joining column's datatypes do not match
Re: [HACKERS] RETURN QUERY in PL/PgSQL?
I think there are two basically orthogonal features in the patch: the RETURNS TABLE addition to CREATE FUNCTION, and the RETURN TABLE statement in PL/PgSQL. The former is specified by the SQL standard and is applicable to all PLs, while the latter is syntax sugar for PL/PgSQL. I think it would make sense to split the patch into two separate patches, one for each feature. It is a good idea. I'm inclined to agree with Tom that adding PROARGMODE_TABLE isn't worth the trouble: making RETURNS TABLE(...) equivalent to RETURNS SETOF RECORD with OUT parameters strikes me as more elegant. I didn't really understand the name collision argument you made earlier[1]; can you elaborate? For me, RETURNS TABLE (a,b) isn't equivalent to (OUT a, OUT b) RETURNS SETOF RECORD; rather, it is equivalent to RETURNS SETOF RECORD ... with SELECT * FROM foo() AS (a, b). Reason, by example: I have a table with attribute cust_id, and I want to use a parametrized view (table function) that has attribute cust_id in its output. Variant a) Tom's proposal -- because _cust_id is a variable CREATE OR REPLACE FUNCTION foo(arg int) RETURNS TABLE (_cust_id int) AS $$ BEGIN RETURN TABLE (SELECT cust_id FROM tab WHERE some = arg); END; $$ LANGUAGE plpgsql; SELECT * FROM foo(1) as (cust_id); Variant b) My proposal -- cust_id isn't a variable CREATE OR REPLACE FUNCTION foo(arg int) RETURNS TABLE (cust_id int) AS $$ BEGIN RETURN TABLE (SELECT cust_id FROM tab WHERE some = arg); END; $$ LANGUAGE plpgsql; SELECT * FROM foo(1); Next argument: I would like to use this for SQL/PSM. I didn't find any mention of equality between the attributes of the RETURNS TABLE clause and OUT variables. If you have a TABLE function (RETURNS TABLE) you have to use a table expression: RETURN TABLE(SELECT ...). SQL/PSM doesn't have RETURN NEXT, and if I accept your argument, then I will have problems with some implicit variables. I need the information about where an attribute was used. How plpgsql uses variables is a different question. 
If you want, use table attributes like OUT variables; plpgsql isn't standardised, so there it isn't too important. SQL/PSM is defined, and there it is important to distinguish between TABLE attributes and OUT variables. Another question is how RETURN NEXT and RETURN TABLE should interact (in PL/PgSQL). I think the two sensible choices are to either disallow a function from using both statements (which is what the patch currently does), or allow both statements to be used, and have RETURN TABLE *not* return from the function -- both RETURN TABLE and RETURN NEXT would append results to the function's result tuplestore. The latter seems more flexible. RETURN TABLE is specified in the standard, and it is the last statement. SQL/PSM has it, and it could be a source of problems for beginners in the future. Maybe ... RETURN NEXT TABLE Do we need the extra set of parentheses in RETURN TABLE? To use one of your earlier examples: CREATE FUNCTION fooff(a int) RETURNS TABLE(a int, b int) AS $$ BEGIN RETURN TABLE(SELECT * FROM Foo WHERE x > a); END; $$ LANGUAGE plpgsql; RETURN TABLE SELECT ... ; should be sufficient to allow correct parsing, and is more consistent with the lack of parentheses in the other RETURN variants. Again: the standard needs it, but plpgsql isn't an SQL/PSM language. And it is true that the lack of parentheses is more consistent with other plpgsql constructs (not only the RETURN statement). -Neil [1] http://archives.postgresql.org/pgsql-patches/2007-04/msg00311.php ---(end of broadcast)--- TIP 7: You can help support the PostgreSQL project by donating at http://www.postgresql.org/about/donate
Re: [HACKERS] [JDBC] JDBC driver reports a protocol error for a CVS HEAD server
Kris Jurka [EMAIL PROTECTED] writes: On Tue, 24 Apr 2007, Ned T. Crigler wrote: The JDBC driver does not connect successfully to a CVS HEAD server (updated as of today) when using a database set to UTF8 encoding; Indeed, referring to -hackers as this is a recent backend change. The backend is reporting the database encoding as the client encoding. Confirmed here: psql has stopped obeying the PGCLIENTENCODING environment variable. Can't help suspecting that the recent GUC-parameter-rollback change broke it. (That has got to be one of the most snakebit projects we've had ... apparently the latest try needs reverted, again.) regards, tom lane ---(end of broadcast)--- TIP 7: You can help support the PostgreSQL project by donating at http://www.postgresql.org/about/donate
Re: [HACKERS] Google SoC: column-level privilege subsystem
Robert Haas [EMAIL PROTECTED] writes: ... If this is implemented as suggested here, it will become extremely counter-intuitive. ... You could solve this by having explicit positive and negative ACLs, i.e. your permissions for a particular column are: Uh, wait a moment, people. The proposed project is to implement a capability that is fully, 100% specified by the SQL standard. There is zero scope for API invention here. You read the spec, you do what it says. regards, tom lane ---(end of broadcast)--- TIP 5: don't forget to increase your free space map settings
Re: [HACKERS] Google SoC: column-level privilege subsystem
Tom Lane wrote: Robert Haas [EMAIL PROTECTED] writes: ... If this is implemented as suggested here, it will become extremely counter-intuitive. ... You could solve this by having explicit positive and negative ACLs, i.e. your permissions for a particular column are: Uh, wait a moment, people. The proposed project is to implement a capability that is fully, 100% specified by the SQL standard. There is zero scope for API invention here. You read the spec, you do what it says. I did read the spec. My suggestion still stands. Because this is a non-standard construct in the security world (which generally ANDs permissions when combining attributes), the fact that revoking permissions on a column does nothing while table-level permissions exist deserves to be documented. I couldn't find the detail on the rest in the spec (what section is that in?) but I know Oracle allows inserts to happen if the columns without privilege are null or have a default value. Am I missing something obvious in the spec that describes this explicitly? - August ---(end of broadcast)--- TIP 9: In versions below 8.0, the planner will ignore your desire to choose an index scan if your joining column's datatypes do not match