Re: [PATCHES] Correct the spelling of SYMMETRIC
On Fri, Apr 06, 2007 at 11:34:39PM -0400, Bruce Momjian wrote:
> Patch applied. Thanks. Your documentation changes can be viewed in five
> minutes using links on the developer's page.

Thanks. 8.1 and 8.2 have the same typo -- any reason not to backpatch this?

-- 
Michael Fuhr

---(end of broadcast)---
TIP 6: explain analyze is your friend
Re: [PATCHES] Correct the spelling of SYMMETRIC
Michael Fuhr wrote:
> On Fri, Apr 06, 2007 at 11:34:39PM -0400, Bruce Momjian wrote:
> > Patch applied. Thanks. Your documentation changes can be viewed in
> > five minutes using links on the developer's page.
>
> Thanks. 8.1 and 8.2 have the same typo -- any reason not to backpatch this?

OK, fixed in 8.2.X. We don't patch documentation past the most recent
major release.

-- 
Bruce Momjian [EMAIL PROTECTED] http://momjian.us
EnterpriseDB http://www.enterprisedb.com
+ If your life is a hard drive, Christ can be your backup. +
Re: [PATCHES] Heap page diagnostic/test functions (v2)
This looks useful, but shouldn't it be part of /contrib/pgstattuple rather
than in the backend?

---
Simon Riggs wrote:
> New functions to examine the contents of heap pages, as discussed
> recently on -hackers. These are fully integrated into the backend, and
> designed to be extended for other page layouts/contents. (Heikki has
> some similar functions for index pages.) Docs included; applies cleanly,
> tests good.
>
> I'll maintain this with immediate fixes/additions as we go up to 8.3 and
> beyond, to assist the review process of the various patches that alter
> page contents.
>
> -- 
> Simon Riggs
> EnterpriseDB http://www.enterprisedb.com
>
> [ Attachment, skipping... ]
Re: [PATCHES] simply custom variables protection
Pavel, would you remind me how this is useful?

---
Pavel Stehule wrote:
> Hello,
>
> this patch contains the function ArmorCustomVariables. The function sets
> an "armored" flag on any custom variable; from that moment only a
> superuser can change the variable.
>
> p.s. use it together with ResetPGVariable()
>
> Regards
> Pavel Stehule
>
> [ Attachment, skipping... ]
Re: [PATCHES] Add usage counts to pg_buffercache
Patch applied. Thanks.

---
Greg Smith wrote:
> This patch adds the usage count statistic to the information available in
> contrib/pg_buffercache.
>
> Earlier this month a discussion about my first attempt to instrument the
> background writer had Tom asking for details about the usage histogram I
> was seeing, and this patch proved to be the easiest way I found to take a
> look at that. In situations where one is trying to optimize the
> background writer, it's very hard to adjust how much to rely on the LRU
> writer versus the one that writes everything unless you know whether your
> dirty buffers are typically used heavily (like index blocks) or not (like
> new INSERT data). Some statistics about the usage counts in your buffer
> cache are extremely helpful in making that decision.
>
> I'll even pass along an ugly but fun query that utilizes this. The
> following will give you a summary of your buffer cache broken into 32
> sections. Each line shows the average usage count of that section, as a
> positive number if most buffers are dirty and a negative one if most are
> clean. If you refresh this frequently enough, you can actually watch
> things like how checkpoints move through the buffer cache:
>
> SELECT current_timestamp,
>   -- Split into 32 bins of data
>   round(bufferid / (cast((select setting from pg_settings
>     where name='shared_buffers') as int) / (32 - 1.0))) as section,
>   round(
>     -- Average usage count, capped at 5
>     case when avg(usagecount) > 5 then 5 else avg(usagecount) end *
>     -- -1 when the majority are clean records, 1 when most are dirty
>     (case when sum(case when isdirty then 1 else -1 end) > 0
>      then 1 else -1 end)) as color_intensity
> FROM pg_buffercache
> GROUP BY round(bufferid / (cast((select setting from pg_settings
>   where name='shared_buffers') as int) / (32 - 1.0)));
>
> The 32 can be changed to anything; that's just what fits on my screen.
> The main idea of the above is that if you dump all this to a file
> regularly, it's possible to produce a graph showing how the cache has
> changed over time by assigning a different color intensity based on the
> usage count--at a massive cost in overhead, of course. I'll be passing
> along all that code once I get it ready for other people to use.
>
> -- 
> * Greg Smith [EMAIL PROTECTED] http://www.gregsmith.com Baltimore, MD
>
> [ Attachment, skipping... ]
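As a side note for anyone exploring this: before reaching for the binned query above, a plain histogram of usage counts is often enough. This is a sketch against the same pg_buffercache view, using only the usagecount and isdirty columns that Greg's query already relies on:

```sql
-- How many buffers sit at each usage count, split into dirty vs. clean.
SELECT usagecount,
       count(*) AS buffers,
       sum(CASE WHEN isdirty THEN 1 ELSE 0 END) AS dirty,
       sum(CASE WHEN isdirty THEN 0 ELSE 1 END) AS clean
FROM pg_buffercache
GROUP BY usagecount
ORDER BY usagecount;
```

A cache dominated by high usage counts suggests the LRU writer will find little to do, which is exactly the tuning question Greg raises.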
Re: [PATCHES] simply custom variables protection
Bruce Momjian wrote:
> Pavel, would you remind me how this is useful?
>
> Pavel Stehule wrote:
> > this patch contains the function ArmorCustomVariables. The function
> > sets an "armored" flag on any custom variable; from that moment only
> > a superuser can change the variable.
> >
> > p.s. use it together with ResetPGVariable()

Hasn't Tom already objected to this patch?

cheers

andrew
Re: [PATCHES] simply custom variables protection
Andrew Dunstan wrote:
> Bruce Momjian wrote:
> > Pavel, would you remind me how this is useful?
>
> Hasn't Tom already objected to this patch?

Yes, but the author has not replied, so I am giving the author a chance to
justify the patch.

-- 
Bruce Momjian
Re: [PATCHES] simply custom variables protection
Hello Bruce,

My patch lets a module mark a custom variable as protected. I don't
understand Tom's objections; this patch probably does more than is
necessary. The only call that really matters for protection is
ResetPGVariable(); the armored flag itself is mostly informational.

Regards
Pavel Stehule

> From: Bruce Momjian [EMAIL PROTECTED]
> To: Pavel Stehule [EMAIL PROTECTED]
> CC: pgsql-patches@postgresql.org, [EMAIL PROTECTED], [EMAIL PROTECTED]
> Subject: Re: [PATCHES] simply custom variables protection
> Date: Sat, 7 Apr 2007 11:54:13 -0400 (EDT)
>
> Pavel, would you remind me how this is useful?
Re: [PATCHES] simply custom variables protection
Pavel Stehule wrote:
> My patch lets a module mark a custom variable as protected. I don't
> understand Tom's objections; this patch probably does more than is
> necessary. The only call that really matters for protection is
> ResetPGVariable(); the armored flag itself is mostly informational.

How does a user protect a custom variable using your code? I don't see
any API that would allow that.

-- 
Bruce Momjian
Re: [HACKERS] [PATCHES] Optimized pgbench for 8.3
On 4/6/07, Tatsuo Ishii [EMAIL PROTECTED] wrote:
> BTW, is anybody working on enabling the fill factor for the tables used
> by pgbench? 8.3 will introduce HOT, and I think adding the feature will
> make it easier to test HOT.

Please see if the attached patch looks good. It adds a new -F option which
can be used to set the fillfactor for the tellers, accounts and branches
tables. The default is 100, and anything between 10 and 100 is acceptable.

Thanks,
Pavan

-- 
EnterpriseDB http://www.enterprisedb.com

[ Attachment: pgbench_fillfactor.patch (Description: Binary data) ]
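For illustration, assuming the -F option behaves as Pavan describes, initializing with `pgbench -i -F 90` would presumably issue DDL along these lines (the accounts column layout sketched here is the stock pgbench schema; the WITH clause is the standard fillfactor storage-parameter syntax):

```sql
-- What "pgbench -i -F 90" would create for the accounts table:
-- 10% of each heap page is left free, giving HOT updates room to work.
CREATE TABLE accounts (
    aid      integer NOT NULL,
    bid      integer,
    abalance integer,
    filler   char(84)
) WITH (fillfactor = 90);

-- Confirm the storage option took effect:
SELECT relname, reloptions FROM pg_class WHERE relname = 'accounts';
```

Varying -F between runs would then let one measure how much free space per page HOT needs to keep updates on the same page.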
Re: [PATCHES] simply custom variables protection
Bruce Momjian [EMAIL PROTECTED] writes:
> Pavel Stehule wrote:
> > My patch lets a module mark a custom variable as protected.
>
> How does a user protect a custom variable using your code? I don't see
> any API that would allow that.

The call would have to come from the loadable library that defines the
custom variable. However, the complaint I had was that we already have an
API that should be able to do this, namely setting a protection level
higher than PGC_USERSET in the DefineCustomVariable call. That doesn't
work today, but the right answer is to make it work, not invent more
functions. This was agreed to be the right approach some time ago; see
the thread here:
http://archives.postgresql.org/pgsql-hackers/2006-11/msg00911.php

Pavel's proposed patch complicates the API and the code, and offers only
part of the same functionality, ie, the equivalent of PGC_SUSET; but I
think that, for example, PGC_SIGHUP is a perfectly reasonable setting to
want to use with a custom variable. Furthermore, I believe the patch is
incomplete/wrong, because it adds only one check on the armored flag,
whereas PGC_SUSET affects behavior in a number of places. I also notice
that it will make setting of an armored custom variable from
postgresql.conf fail outright in non-superuser sessions, which is surely
not desirable.

In short: this isn't a feature, it's a wart.

			regards, tom lane
Re: [PATCHES] simply custom variables protection
> How does a user protect a custom variable using your code? I don't see
> any API that would allow that.

Every module is responsible for protecting its own custom variables; only
the module knows whether some variable needs protection. After module
initialization, the module can call the ArmorCustomVariable function, and
from that moment only a superuser can modify the variable. If it calls
the ResetPGVariable() function first, then the default value is what gets
protected.

It's a question whether the test for superuser is necessary. I believe it
is useful, since it gives me the possibility to write a SECURITY DEFINER
function in which I can safely modify custom variables.

Regards
Pavel Stehule
Re: [PATCHES] LIMIT/SORT optimization
I did some performance testing of the patch, and the results were good. I
did this:

	test=> CREATE TABLE test (x INTEGER);
	test=> INSERT INTO test SELECT * FROM generate_series(1, 1000000);
	test=> SET log_min_duration_statement = 0;
	test=> SELECT * FROM test ORDER BY x LIMIT 3;

and the results were, before the patch, for three runs:

	LOG:  duration: 1753.518 ms  statement: select * from test order by x limit 3;
	LOG:  duration: 1766.019 ms  statement: select * from test order by x limit 3;
	LOG:  duration: 1777.520 ms  statement: select * from test order by x limit 3;

and after the patch:

	LOG:  duration: 449.649 ms  statement: select * from test order by x limit 3;
	LOG:  duration: 443.450 ms  statement: select * from test order by x limit 3;
	LOG:  duration: 443.086 ms  statement: select * from test order by x limit 3;

---
Gregory Stark wrote:
> Updated patch attached:
>
> 1) Removes #if 0 optimizations
> 2) Changes #if 0 to #if NOT_USED for code that's there for completeness
>    and to keep the code self-documenting, rather than because it's
>    needed by anything currently
> 3) Fixed cost model to represent bounded sorts
>
> [ Attachment, skipping... ]
>
> Heikki Linnakangas [EMAIL PROTECTED] writes:
> > There's a few blocks of code surrounded with #if 0 - #endif. Are those
> > just leftovers that should be removed, or things that still need to be
> > finished and enabled?
>
> Uhm, I don't remember, will go look, thanks.
>
> Ok, they were leftover code from an optimization that I decided wasn't
> very important to pursue. The code that was ifdef'd out detected when
> disk sorts could abort a disk sort merge because it had already
> generated enough tuples to satisfy the limit. But I never wrote the
> code to actually abort the run, and it looks a bit tricky. In any case
> the disk sort use case is extremely narrow; you would need something
> like LIMIT 5 or more to do it, and it would have to be an input table
> huge enough to cause multiple rounds of merges.
>
> I think I've figured out how to adjust the cost model. It turns out
> that it doesn't usually matter whether the cost model is correct, since
> any case where the optimization kicks in is a case where you're reading
> a small portion of the input, so it's a case where an index would be
> *much* better if available. So the only times the optimization is used
> are when there's no index available. Nonetheless it's nice to get the
> estimates right so that higher levels in the plan get reasonable values.
>
> I think I figured out how to do the cost model. At least the results
> are reasonable. I'm not sure if I've done it the right way though.
>
> -- 
> Gregory Stark
> EnterpriseDB http://www.enterprisedb.com
Re: [PATCHES] simply custom variables protection
> Furthermore, I believe the patch is incomplete/wrong, because it adds
> only one check on the armored flag, whereas PGC_SUSET affects behavior
> in a number of places. I also notice that it will make setting of an
> armored custom variable from postgresql.conf fail outright in
> non-superuser sessions, which is surely not desirable.

I'm not defending this patch; I didn't understand the original proposal
well. Tom, I don't understand your last point -- can you explain it,
please?

Pavel Stehule
Re: [PATCHES] LIMIT/SORT optimization
Bruce Momjian [EMAIL PROTECTED] writes:
> I did some performance testing of the patch, and the results were good.
> I did this:
>
>	test=> CREATE TABLE test (x INTEGER);
>	test=> INSERT INTO test SELECT * FROM generate_series(1, 1000000);
>	test=> SET log_min_duration_statement = 0;
>	test=> SELECT * FROM test ORDER BY x LIMIT 3;

LIMIT 3 seems an awfully favorable case; if the patch can only manage a
factor of 4 speedup there, what happens at limit 10, 20, 100? Also,
you've tested only one sort size and only one (unspecified) value of
work_mem, and the usefulness of the patch would surely vary depending on
that. In particular, what happens with a LIMIT large enough to overflow
work_mem?

Lastly, I suspect that sorting presorted input might be particularly
favorable for this patch. Please try it with random data for comparison.

			regards, tom lane
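A comparison run along the lines Tom asks for might look like this (a sketch; the table name, row count, and limit values are arbitrary choices for illustration, not from the patch):

```sql
-- Random rather than presorted input, so the bounded sort cannot
-- benefit from already-ordered data:
CREATE TABLE test_random (x integer);
INSERT INTO test_random
    SELECT (random() * 1000000)::integer FROM generate_series(1, 1000000);

-- Sweep LIMIT sizes, including one large enough that the retained
-- tuples should overflow work_mem:
SET work_mem = '1MB';
EXPLAIN ANALYZE SELECT * FROM test_random ORDER BY x LIMIT 10;
EXPLAIN ANALYZE SELECT * FROM test_random ORDER BY x LIMIT 100;
EXPLAIN ANALYZE SELECT * FROM test_random ORDER BY x LIMIT 100000;
```

Comparing the EXPLAIN ANALYZE sort times with and without the patch, across both the sorted and random tables, would cover the presorted-input concern as well.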
Re: [PATCHES] RESET SESSION v2
On Tue, 2007-04-03 at 10:15 +0300, Marko Kreen wrote:
> New commands:
>
>   CLOSE ALL              -- close all cursors
>   DEALLOCATE ALL         -- close all prepared stmts
>   RESET PLANS            -- drop all plans
>   RESET TEMP | TEMPORARY -- drop all temp tables
>   RESET SESSION          -- drop/close/free everything

+ void
+ ResetTempTableNamespace(void)
+ {
+ 	char		namespaceName[NAMEDATALEN];
+ 	Oid			namespaceId;
+ 
+ 	/* If not allowed to create, no point proceeding */
+ 	if (pg_database_aclcheck(MyDatabaseId, GetUserId(),
+ 							 ACL_CREATE_TEMP) != ACLCHECK_OK)
+ 		return;

ISTM this is buggy: if the user's TEMPORARY privilege is revoked between
the time that they create a temporary table and when they execute RESET
SESSION, the temporary table won't be cleaned up.

> * RESET SESSION does not ABORT anymore, instead fails if in transaction.

I think it's quite bizarre to have RESET SESSION fail if used in a
transaction, but to allow an equivalent sequence of commands to be
executed by hand inside a transaction.

guc.c is missing some #includes.

> * DEALLOCATE PREPARE ALL gives bison conflicts. Is that even needed?

Seems best to have it, for the sake of consistency. The shift/reduce
conflict is easy to work around, provided you're content to duplicate the
body of the DEALLOCATE ALL rule -- e.g. see the attached incremental diff.

> * Are the CommandComplete changes needed?

Seems warranted to me. BTW, why is CLOSE's command tag "CLOSE CURSOR",
not "CLOSE"? That seems needlessly verbose, and inconsistent with other
commands (e.g. DEALLOCATE).

> * ResetPlanCache() is implemented as
>     PlanCacheCallback((Datum)0, InvalidOid);
>   That seems to leave plans for utility commands untouched. Is that a
>   problem?

Yes, I'd think you'd also want to clean up plans for utility commands.

-Neil

diff -u src/backend/parser/gram.y src/backend/parser/gram.y
--- src/backend/parser/gram.y	3 Apr 2007 07:09:31 -0000
+++ src/backend/parser/gram.y	7 Apr 2007 20:14:48 -0000
@@ -5596,6 +5596,12 @@
 					n->name = NULL;
 					$$ = (Node *) n;
 				}
+			| DEALLOCATE PREPARE ALL
+				{
+					DeallocateStmt *n = makeNode(DeallocateStmt);
+					n->name = NULL;
+					$$ = (Node *) n;
+				}
 		;

 /*
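For reference, the proposed commands could be exercised in a session along these lines (hypothetical usage of the syntax from Marko's patch, not released functionality):

```sql
-- Build up some session-local state ...
PREPARE getone AS SELECT 1;
DECLARE c CURSOR WITH HOLD FOR SELECT 1;
CREATE TEMP TABLE scratch (x integer);

-- ... then discard it piecemeal with the new commands:
DEALLOCATE ALL;   -- drop every prepared statement
CLOSE ALL;        -- close every open cursor
RESET TEMP;       -- drop all temporary tables

-- Or discard everything at once (fails inside a transaction
-- block under the patch as posted):
RESET SESSION;
```

This is the scenario behind Neil's TEMPORARY-privilege comment: if the privilege is revoked between the CREATE TEMP TABLE and the RESET SESSION above, the early return in ResetTempTableNamespace() would leave scratch behind.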
Re: [PATCHES] Heap page diagnostic/test functions (v2)
On Sat, 2007-04-07 at 11:53 -0400, Bruce Momjian wrote:
> This looks useful, but shouldn't it be part of /contrib/pgstattuple
> rather than in the backend?

Well, this was written with a view to it being usable for writing test
cases that checked the various tuple states as we went. It was originally
proposed when Tom asked "How will we test HOT?" (concurrent psql is the
other half of the required base functionality to write sensible test
cases).

If we see it as a manual test tool only, contrib is OK. But it has to be
in the backend to be usable in the regression test suite, so that's where
it was designed to go.

-- 
Simon Riggs
EnterpriseDB http://www.enterprisedb.com
Re: [PATCHES] Heap page diagnostic/test functions (v2)
Simon Riggs wrote:
> If we see it as a manual test tool only, contrib is OK. But it has to
> be in the backend to be usable in the regression test suite, so that's
> where it was designed to go.

Well, contrib can have its own regression tests. We can put the HOT tests
in there too.

-- 
Bruce Momjian
Re: [PATCHES] LIMIT/SORT optimization
On Sat, 2007-04-07 at 14:11 -0400, Tom Lane wrote:
> LIMIT 3 seems an awfully favorable case; if the patch can only manage a
> factor of 4 speedup there, what happens at limit 10, 20, 100? Also,
> you've tested only one sort size and only one (unspecified) value of
> work_mem, and the usefulness of the patch would surely vary depending
> on that. In particular, what happens with a LIMIT large enough to
> overflow work_mem?

Yeh, this is really designed to improve the case where we retrieve a
screenful of data, i.e. 25, 50 or 100 records, or worst case 10
screenfuls. The code deliberately doesn't use an insertion sort for that
reason, since that is beyond the cut-off where it works best. So it
should be optimised for medium numbers of rows when no index is present.

The use case is important because we want to be able to populate data for
screens in a reasonably bounded time, not one that gets suddenly worse
should the number of possible matches exceed work_mem. [Think how well
Google reacts to varying numbers of candidate matches.] Whatever happens
with a LIMIT larger than work_mem doesn't fit the use case, so as long as
it is no slower than what we have now, that should be fine.

> Lastly, I suspect that sorting presorted input might be particularly
> favorable for this patch. Please try it with random data for comparison.

Agreed.

-- 
Simon Riggs
EnterpriseDB http://www.enterprisedb.com
Re: [PATCHES] simply custom variables protection
Patch rejected; please continue discussion and resubmit.

---
Pavel Stehule wrote:
> Hello,
>
> this patch contains the function ArmorCustomVariables. The function sets
> an "armored" flag on any custom variable; from that moment only a
> superuser can change the variable.
>
> p.s. use it together with ResetPGVariable()
>
> Regards
> Pavel Stehule
>
> [ Attachment, skipping... ]
Re: [PATCHES] [PATCH] add CLUSTER table USING index (take 3)
Updated patch applied. Thanks. I added a mention of the old syntax at the
bottom of the CLUSTER manual page, and cleaned up the grammar a little.
Also did a little comment cleaning in gram.y.

---
Holger Schurig wrote:
> SGML ref text (swapped parameter list, changed example text)
>
> Also, I noticed that the text of the example spoke about a table
> "employees", but the example used the table "emp". I fixed this
> inconsistency.

Index: doc/src/sgml/ref/cluster.sgml
===================================================================
RCS file: /cvsroot/pgsql/doc/src/sgml/ref/cluster.sgml,v
retrieving revision 1.40
diff -c -r1.40 cluster.sgml
*** doc/src/sgml/ref/cluster.sgml	1 Feb 2007 00:28:18 -0000	1.40
--- doc/src/sgml/ref/cluster.sgml	8 Apr 2007 00:23:30 -0000
***************
*** 20,27 ****
  <refsynopsisdiv>
  <synopsis>
! CLUSTER <replaceable class="PARAMETER">indexname</replaceable> ON <replaceable class="PARAMETER">tablename</replaceable>
! CLUSTER <replaceable class="PARAMETER">tablename</replaceable>
  CLUSTER
  </synopsis>
  </refsynopsisdiv>
--- 20,26 ----
  <refsynopsisdiv>
  <synopsis>
! CLUSTER <replaceable class="PARAMETER">tablename</replaceable> [ USING <replaceable class="PARAMETER">indexname</replaceable> ]
  CLUSTER
  </synopsis>
  </refsynopsisdiv>
***************
*** 77,95 ****
  <variablelist>
   <varlistentry>
!   <term><replaceable class="PARAMETER">indexname</replaceable></term>
    <listitem>
     <para>
!     The name of an index.
     </para>
    </listitem>
   </varlistentry>

   <varlistentry>
!   <term><replaceable class="PARAMETER">tablename</replaceable></term>
    <listitem>
     <para>
!     The name (possibly schema-qualified) of a table.
     </para>
    </listitem>
   </varlistentry>
--- 76,94 ----
  <variablelist>
   <varlistentry>
!   <term><replaceable class="PARAMETER">tablename</replaceable></term>
    <listitem>
     <para>
!     The name (possibly schema-qualified) of a table.
     </para>
    </listitem>
   </varlistentry>

   <varlistentry>
!   <term><replaceable class="PARAMETER">indexname</replaceable></term>
    <listitem>
     <para>
!     The name of an index.
     </para>
    </listitem>
   </varlistentry>
***************
*** 172,180 ****
   <para>
    Cluster the table <literal>employees</literal> on the basis of
!   its index <literal>emp_ind</literal>:
  <programlisting>
! CLUSTER emp_ind ON emp;
  </programlisting>
   </para>
--- 171,179 ----
   <para>
    Cluster the table <literal>employees</literal> on the basis of
!   its index <literal>employees_ind</literal>:
  <programlisting>
! CLUSTER employees USING employees_ind;
  </programlisting>
   </para>
***************
*** 182,188 ****
    Cluster the <literal>employees</literal> table using the same
    index that was used before:
  <programlisting>
! CLUSTER emp;
  </programlisting>
   </para>
--- 181,187 ----
    Cluster the <literal>employees</literal> table using the same
    index that was used before:
  <programlisting>
! CLUSTER employees;
  </programlisting>
   </para>
***************
*** 198,204 ****
   <title>Compatibility</title>

   <para>
!   There is no <command>CLUSTER</command> statement in the SQL standard.
   </para>
--- 197,208 ----
   <title>Compatibility</title>

   <para>
!   The syntax:
! <synopsis>
! CLUSTER <replaceable class="PARAMETER">indexname</replaceable> ON <replaceable class="PARAMETER">tablename</replaceable>
! </synopsis>
!   is also supported for compatibility with pre-8.3 <productname>PostgreSQL</productname> installations.
!   There is no <command>CLUSTER</command> statement in the SQL standard.
   </para>
Index: src/backend/parser/gram.y
===================================================================
RCS file: /cvsroot/pgsql/src/backend/parser/gram.y,v
retrieving revision 2.586
diff -c -r2.586 gram.y
*** src/backend/parser/gram.y	2 Apr 2007 22:20:53 -0000	2.586
--- src/backend/parser/gram.y	8 Apr 2007 00:23:32 -0000
***************
*** 209,215 ****
  %type <str>	relation_name copy_file_name
  		database_name access_method_clause access_method attr_name
! 		index_name name file_name
  
  %type <list>	func_name handler_name qual_Op qual_all_Op subquery_Op
  		opt_class opt_validator
--- 209,215 ----
  %type <str>	relation_name copy_file_name
  		database_name access_method_clause access_method attr_name
! 		index_name name file_name cluster_index_specification
  
  %type <list>	func_name handler_name qual_Op qual_all_Op subquery_Op
  		opt_class opt_validator
***************
*** 5084,5090 ****
  /*
   *
   * QUERY:
!  *		load "filename"
   */
--- 5084,5090
Re: [HACKERS] [PATCHES] Optimized pgbench for 8.3
Patch committed. Thanks.

-- 
Tatsuo Ishii
SRA OSS, Inc. Japan

> On 4/6/07, Tatsuo Ishii [EMAIL PROTECTED] wrote:
> > BTW, is anybody working on enabling the fill factor for the tables
> > used by pgbench? 8.3 will introduce HOT, and I think adding the
> > feature will make it easier to test HOT.
>
> Please see if the attached patch looks good. It adds a new -F option
> which can be used to set the fillfactor for the tellers, accounts and
> branches tables. The default is 100, and anything between 10 and 100 is
> acceptable.
>
> Thanks,
> Pavan
>
> -- 
> EnterpriseDB http://www.enterprisedb.com
Re: [PATCHES] LIMIT/SORT optimization
I reran the test using: test= CREATE TABLE test (x INTEGER); test= INSERT INTO test SELECT * FROM generate_series(1, 100); test= SET log_min_duration_statement = 0; and got on an unpatched system: 1751.320 ms select * from (select * from test order by x limit 3) as x limit 1; 1725.092 ms select * from (select * from test order by x limit 3) as x limit 1; 1709.463 ms select * from (select * from test order by x limit 3) as x limit 1; 1702.917 ms select * from (select * from test order by x limit 10) as x limit 1; 1705.793 ms select * from (select * from test order by x limit 10) as x limit 1; 1704.046 ms select * from (select * from test order by x limit 10) as x limit 1; 1699.730 ms select * from (select * from test order by x limit 100) as x limit 1; 1712.628 ms select * from (select * from test order by x limit 100) as x limit 1; 1699.454 ms select * from (select * from test order by x limit 100) as x limit 1; 1720.207 ms select * from (select * from test order by x limit 1000) as x limit 1; 1725.519 ms select * from (select * from test order by x limit 1000) as x limit 1; 1728.933 ms select * from (select * from test order by x limit 1000) as x limit 1; 1699.609 ms select * from (select * from test order by x limit 1) as x limit 1; 1698.386 ms select * from (select * from test order by x limit 1) as x limit 1; 1698.985 ms select * from (select * from test order by x limit 1) as x limit 1; 1700.740 ms select * from (select * from test order by x limit 10) as x limit 1; 1700.989 ms select * from (select * from test order by x limit 10) as x limit 1; 1695.771 ms select * from (select * from test order by x limit 10) as x limit 1; which is expected because the sort work is constant. 
With the patch I see:

    433.892 ms  select * from (select * from test order by x limit 3) as x limit 1;
    496.016 ms  select * from (select * from test order by x limit 3) as x limit 1;
    434.604 ms  select * from (select * from test order by x limit 3) as x limit 1;
    433.265 ms  select * from (select * from test order by x limit 10) as x limit 1;
    432.058 ms  select * from (select * from test order by x limit 10) as x limit 1;
    431.329 ms  select * from (select * from test order by x limit 10) as x limit 1;
    429.722 ms  select * from (select * from test order by x limit 100) as x limit 1;
    434.754 ms  select * from (select * from test order by x limit 100) as x limit 1;
    429.758 ms  select * from (select * from test order by x limit 100) as x limit 1;
    432.060 ms  select * from (select * from test order by x limit 1000) as x limit 1;
    432.523 ms  select * from (select * from test order by x limit 1000) as x limit 1;
    433.917 ms  select * from (select * from test order by x limit 1000) as x limit 1;
    449.885 ms  select * from (select * from test order by x limit 10000) as x limit 1;
    450.182 ms  select * from (select * from test order by x limit 10000) as x limit 1;
    450.536 ms  select * from (select * from test order by x limit 10000) as x limit 1;
    1771.807 ms  select * from (select * from test order by x limit 100000) as x limit 1;
    1746.628 ms  select * from (select * from test order by x limit 100000) as x limit 1;
    1795.600 ms  select * from (select * from test order by x limit 100000) as x limit 1;

The patch is faster until we hit 100k, or 10% of the table, at which point it is the same speed. What is interesting is that 1M is also the same speed:

    1756.401 ms  select * from (select * from test order by x limit 1000000) as x limit 1;
    1744.104 ms  select * from (select * from test order by x limit 1000000) as x limit 1;
    1734.198 ms  select * from (select * from test order by x limit 1000000) as x limit 1;

This is with the default work_mem of 1MB. I used an outer LIMIT 1 so the times were not affected by the size of the data transfer to the client.
--- Bruce Momjian wrote:
> I did some performance testing of the patch, and the results were good.
> I did this:
>
>     test=> CREATE TABLE test (x INTEGER);
>     test=> INSERT INTO test SELECT * FROM generate_series(1, 1000000);
>     test=> SET log_min_duration_statement = 0;
>     test=> SELECT * FROM test ORDER BY x LIMIT 3;
>
> and the results were, before the patch, for three runs:
>
>     LOG: duration: 1753.518 ms  statement: select * from test order by x limit 3;
>     LOG: duration: 1766.019 ms  statement: select * from test order by x limit 3;
>     LOG: duration: 1777.520 ms  statement: select * from test order by x limit 3;
>
> and after the patch:
>
>     LOG: duration: 449.649 ms  statement:
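The optimization being benchmarked here replaces the full sort with a bounded "top-N" sort: once the sort knows only LIMIT rows will be fetched, each additional input row merely competes against the current worst of those N, rather than participating in a full sort of the whole table. A rough sketch of the idea in Python (illustrative only -- the function names are invented, and the real implementation lives in the backend's tuplesort code):

```python
import heapq
import random

def top_n_full_sort(rows, n):
    """Unpatched behaviour: sort everything, then take the first n."""
    return sorted(rows)[:n]

def top_n_bounded(rows, n):
    """Patched behaviour: keep only the n smallest rows seen so far.
    heapq.nsmallest maintains a bounded heap of size n, so memory use and
    most comparison work scale with n rather than with the input size --
    which is why the timings above stop improving once LIMIT approaches
    the table size."""
    return heapq.nsmallest(n, rows)

rows = random.sample(range(1_000_000), 100_000)
assert top_n_bounded(rows, 3) == top_n_full_sort(rows, 3)
```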
Re: [PATCHES] Make CLUSTER MVCC-safe
Heikki Linnakangas [EMAIL PROTECTED] writes:
> This patch makes CLUSTER MVCC-safe. Visibility information and update
> chains are preserved like in VACUUM FULL.
> Here's an update, fixing a conflict caused by Tom's recent commit of
> Simon's patch to skip WAL inserts when archiving is not enabled.

Applied with revisions. There were some bugs in it: you need to check both xmin and tid when determining whether one tuple chains to another, and you can't separate MarkBufferDirty from the critical section that writes xlog. (I got around that by not keeping the working page in buffers at all, the same way btree index build works; it should be a bit faster as well as more correct.) It had some memory leakage too.

regards, tom lane
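Tom's first point -- check both xmin and tid -- can be illustrated with a toy model of tuple versions. The structures below are invented for the example (real heap tuples carry far more fields), but the chaining rule is the one described above: a tid match alone is unsafe, because a line pointer can be reused by an unrelated tuple after the old target is pruned away.

```python
from dataclasses import dataclass
from typing import Tuple

TID = Tuple[int, int]  # (block number, line pointer offset)

@dataclass
class ToyTuple:
    tid: TID
    xmin: int     # transaction id that inserted this version
    xmax: int     # transaction id that updated/deleted it (0 = live)
    t_ctid: TID   # points at the newer version after an UPDATE

def chains_to(old: ToyTuple, new: ToyTuple) -> bool:
    """old chains to new only if old's t_ctid points at new AND new's
    xmin equals old's xmax -- the xmin check detects a recycled line
    pointer that merely happens to sit at the expected tid."""
    return old.t_ctid == new.tid and old.xmax == new.xmin

# Version 1 was updated by xact 200, producing version 2 at (0, 2):
v1 = ToyTuple(tid=(0, 1), xmin=100, xmax=200, t_ctid=(0, 2))
v2 = ToyTuple(tid=(0, 2), xmin=200, xmax=0, t_ctid=(0, 2))
# An unrelated tuple reusing line pointer (0, 2) after pruning:
recycled = ToyTuple(tid=(0, 2), xmin=300, xmax=0, t_ctid=(0, 2))
```

Checking only the tid would wrongly treat `recycled` as the successor of `v1`; the xmax/xmin match rejects it.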
Re: [PATCHES] RESET SESSION v2
Neil Conway [EMAIL PROTECTED] writes:
> > * ResetPlanCache() is implemented as
> >   PlanCacheCallback((Datum) 0, InvalidOid);
> > That seems to leave plans for utility commands untouched. Is that a
> > problem?
>
> Yes, I'd think you'd also want to clean up plans for utility commands.

Utility commands haven't got plans.

regards, tom lane
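For context, the behaviour under discussion can be sketched as a toy plan cache with a reset entry point. The class and method names here are invented for illustration and are not plancache.c's API; the point is just that a reset drops every cached plan so the next execution replans from scratch:

```python
# Toy sketch of a plan cache with RESET-style invalidation.
# Names are invented; this is not PostgreSQL's plancache.c interface.

class PlanCache:
    def __init__(self):
        self._plans = {}          # query text -> cached "plan"

    def get_plan(self, query: str) -> str:
        # Plan on first use, then reuse the cached entry.
        if query not in self._plans:
            self._plans[query] = "plan:" + query
        return self._plans[query]

    def reset(self) -> None:
        # The ResetPlanCache() analogue: invalidate everything at once.
        self._plans.clear()

cache = PlanCache()
cache.get_plan("SELECT 1")
cache.get_plan("SELECT 2")
assert len(cache._plans) == 2
cache.reset()
assert cache._plans == {}     # next get_plan() replans from scratch
```

Tom's reply short-circuits the concern: since utility commands never enter the cache at all, there is nothing extra for the reset to clear.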
Re: [PATCHES] Heap page diagnostic/test functions (v2)
Simon Riggs [EMAIL PROTECTED] writes:
> If we see it as a manual test tool only, contrib is OK. But it has to
> be in the backend to be usable in the regression test suite, so that's
> where it was designed to go.

The core regression tests have depended on some contrib stuff forever, so the above argument holds no water. I'm uncomfortable with putting these things in core because I suspect it's not hard to crash the backend (or worse) by feeding one of them artfully corrupted data.

regards, tom lane
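As an illustration of both halves of this exchange -- what a page-inspection function reads, and why corrupted input is dangerous -- here is a toy reader for a simplified page header. The four-uint16 layout below is a deliberate simplification of PostgreSQL's real PageHeaderData (which also carries an LSN and flags); only the pd_lower/pd_upper free-space idea is faithful:

```python
import struct

# A heap page keeps line pointers growing up from pd_lower and tuple
# data growing down from pd_upper; the gap between them is free space.
# Simplified header: pd_lower, pd_upper, pd_special, pd_pagesize_version
# as four little-endian uint16s. Illustrative only.

PAGE_SIZE = 8192
HEADER = struct.Struct("<HHHH")

def page_free_space(page: bytes) -> int:
    pd_lower, pd_upper, pd_special, _version = HEADER.unpack_from(page)
    # Sanity-check the pointers before trusting them. This is exactly
    # the kind of validation Tom's worry implies: a function that blindly
    # follows pointers from an artfully corrupted page can read (or, in C,
    # write) out of bounds.
    if not (0 < pd_lower <= pd_upper <= pd_special <= PAGE_SIZE):
        raise ValueError("corrupt page header")
    return pd_upper - pd_lower

# Build a synthetic page: 24 bytes of header+pointers, data from 7900 up.
page = bytearray(PAGE_SIZE)
HEADER.pack_into(page, 0, 24, 7900, PAGE_SIZE, PAGE_SIZE)
free = page_free_space(bytes(page))   # 7876 bytes free
```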
Re: [PATCHES] LIMIT/SORT optimization
Oh, sorry, I forgot to do a random-table test. The test used:

    DROP TABLE test;
    CREATE TABLE test (x INTEGER);
    INSERT INTO test SELECT random()*1000000 FROM generate_series(1, 1000000);

As expected, the unpatched version is consistent for all LIMIT values (the first query was slow due to load after the INSERT):

    14567.074 ms  select * from (select * from test order by x limit 3) as x limit 1;
    4031.029 ms  select * from (select * from test order by x limit 3) as x limit 1;
    3612.417 ms  select * from (select * from test order by x limit 3) as x limit 1;
    3505.966 ms  select * from (select * from test order by x limit 10) as x limit 1;
    3707.830 ms  select * from (select * from test order by x limit 10) as x limit 1;
    3619.410 ms  select * from (select * from test order by x limit 10) as x limit 1;
    5548.770 ms  select * from (select * from test order by x limit 100) as x limit 1;
    3839.660 ms  select * from (select * from test order by x limit 100) as x limit 1;
    4098.445 ms  select * from (select * from test order by x limit 100) as x limit 1;
    3677.659 ms  select * from (select * from test order by x limit 1000) as x limit 1;
    3956.980 ms  select * from (select * from test order by x limit 1000) as x limit 1;
    3824.934 ms  select * from (select * from test order by x limit 1000) as x limit 1;
    4641.589 ms  select * from (select * from test order by x limit 10000) as x limit 1;
    4057.902 ms  select * from (select * from test order by x limit 10000) as x limit 1;
    4682.779 ms  select * from (select * from test order by x limit 10000) as x limit 1;
    4032.351 ms  select * from (select * from test order by x limit 100000) as x limit 1;
    4572.528 ms  select * from (select * from test order by x limit 100000) as x limit 1;
    4985.500 ms  select * from (select * from test order by x limit 100000) as x limit 1;
    4942.422 ms  select * from (select * from test order by x limit 1000000) as x limit 1;
    4669.230 ms  select * from (select * from test order by x limit 1000000) as x limit 1;
    4639.258 ms  select * from (select * from test order by x limit 1000000) as x limit 1;

and with the patch:

    1731.234 ms  select * from (select * from test order by x limit 3) as x limit 1;
    570.315 ms  select * from (select * from test order by x limit 3) as x limit 1;
    430.119 ms  select * from (select * from test order by x limit 3) as x limit 1;
    431.580 ms  select * from (select * from test order by x limit 10) as x limit 1;
    431.253 ms  select * from (select * from test order by x limit 10) as x limit 1;
    432.112 ms  select * from (select * from test order by x limit 10) as x limit 1;
    433.536 ms  select * from (select * from test order by x limit 100) as x limit 1;
    433.115 ms  select * from (select * from test order by x limit 100) as x limit 1;
    432.478 ms  select * from (select * from test order by x limit 100) as x limit 1;
    442.886 ms  select * from (select * from test order by x limit 1000) as x limit 1;
    442.133 ms  select * from (select * from test order by x limit 1000) as x limit 1;
    444.905 ms  select * from (select * from test order by x limit 1000) as x limit 1;
    522.782 ms  select * from (select * from test order by x limit 10000) as x limit 1;
    521.481 ms  select * from (select * from test order by x limit 10000) as x limit 1;
    521.526 ms  select * from (select * from test order by x limit 10000) as x limit 1;
    3317.216 ms  select * from (select * from test order by x limit 100000) as x limit 1;
    3365.467 ms  select * from (select * from test order by x limit 100000) as x limit 1;
    3355.447 ms  select * from (select * from test order by x limit 100000) as x limit 1;
    3307.745 ms  select * from (select * from test order by x limit 1000000) as x limit 1;
    3315.602 ms  select * from (select * from test order by x limit 1000000) as x limit 1;
    3585.736 ms  select * from (select * from test order by x limit 1000000) as x limit 1;

--- Bruce Momjian wrote:
> I reran the test using:
>
>     test=> CREATE TABLE test (x INTEGER);
>     test=> INSERT INTO test SELECT * FROM generate_series(1, 1000000);
>     test=> SET log_min_duration_statement = 0;
>
> and got on an unpatched system:
>
>     1751.320 ms  select * from (select * from test order by x limit 3) as x limit 1;
>     1725.092 ms  select * from (select * from test order by x limit 3) as x limit 1;
>     1709.463 ms  select * from (select * from test order by x limit 3) as x limit 1;
>     1702.917 ms  select * from (select * from test order by x limit 10) as x limit 1;
>     1705.793 ms  select * from (select * from test order by x limit 10) as x limit 1;
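One plausible intuition for why the random-data runs cost more than the presorted ones (a sketch of the bounded-heap idea only, not the actual tuplesort code): with already-ascending input, once the heap holds the first N rows no later row ever displaces one, while random input keeps paying O(log N) sift costs for each displacement.

```python
import heapq
import random

def count_heap_replacements(rows, n):
    """Track the n smallest rows with a bounded max-heap (values negated
    so the worst kept row sits at the root), counting how many later
    rows actually displace a current member of the heap."""
    heap = [-v for v in rows[:n]]
    heapq.heapify(heap)
    replacements = 0
    for v in rows[n:]:
        if -v > heap[0]:          # v beats the current worst kept row
            heapq.heapreplace(heap, -v)   # O(log n) sift per displacement
            replacements += 1
    return replacements

ascending = list(range(100_000))
shuffled = ascending[:]
random.shuffle(shuffled)

# Presorted input: the first n rows are already the answer, so zero
# replacements afterwards. Random input: many replacements, each with
# its sift cost -- one reason the random-table timings above are higher.
sorted_work = count_heap_replacements(ascending, 100)
random_work = count_heap_replacements(shuffled, 100)
```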