Re: [HACKERS] We are not following the spec for HAVING without GROUP
On Thu, 10 Mar 2005, Tom Lane wrote: Would those of you with access to other DBMSes try this: create table tab (col integer); select 1 from tab having 1=0; select 1 from tab having 1=1; insert into tab values(1); insert into tab values(2); select 1 from tab having 1=0; select 1 from tab having 1=1; I claim that a SQL-conformant database will return 0, 1, 0, and 1 rows from the 4 selects --- that is, the contents of tab make no difference at all. (MySQL returns 0, 0, 0, and 2 rows, so they are definitely copying our mistake...) DB2 (version 8.1) gives 0, 1, 0, 1. - Heikki ---(end of broadcast)--- TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]
Re: [HACKERS] Runtime accepting build discrepancies
Laszlo Hornyak wrote: The default should be the default used by PostgreSQL, and the extra ones should be commented out under it. Not the most user friendly solution, but can we do anything else? Yes, we can do as I suggested and select mapping depending on the GUC variable integer_datetimes. I guess you have some initial handshake between the postgresql backend and the JVM where you can negotiate things like that? Regards, Thomas Hallgren
Re: [PATCHES] [HACKERS] WAL: O_DIRECT and multipage-writer (+
On Tue, 2005-03-01 at 13:53 -0800, Mark Wong wrote: On Thu, Feb 03, 2005 at 07:25:55PM +0900, ITAGAKI Takahiro wrote: Hello everyone. I fixed two bugs in the patch that I sent before. Check and test the new one, please. Ok, finally got back into the office and was able to run 1 set of tests. So the new baseline result with 8.0.1: http://www.osdl.org/projects/dbt2dev/results/dev4-010/309/ Throughput: 3639.97 Results with the patch but open_direct not set: http://www.osdl.org/projects/dbt2dev/results/dev4-010/308/ Throughput: 3494.72 Results with the patch and open_direct set: http://www.osdl.org/projects/dbt2dev/results/dev4-010/312/ Throughput: 3489.69 You can verify that wal_sync_method is set to open_direct under the database parameters link, but I'm wondering if I missed something. It looks a little odd that the performance dropped. Is there anything more to say on this? Is it case-closed, or is there further work underway - I can't see any further chat on this thread. These results show it doesn't work better on larger systems. The original testing showed it worked better on smaller systems - is there still scope to include this for smaller configs? If not, thanks for taking the time to write the patch and investigate whether changes in this area would help. Not every performance patch improves things, but that doesn't mean we shouldn't try... Best Regards, Simon Riggs
Re: [HACKERS] fool-toleranced optimizer
Greg Stark wrote: Kevin Brown [EMAIL PROTECTED] writes: Hence, it makes sense to go ahead and run the query, but issue a warning at the very beginning, e.g. WARNING: query JOINs tables list of tables without otherwise referencing or making use of those tables. This may cause excessively poor performance of the query. Well, the problem with a warning is: what if it *is* intentional? It's not ok to fill my logs up with warnings every time the query is executed. That just forces me to turn off warnings. It would be ok to have an option to block cartesian joins entirely. I might even choose to run with that enabled normally. I can always disable it for queries I know need cartesian joins. I'm not sure the cartesian join is the problem - it's the explosion in the number of rows. Which suggests you want something analogous to statement_timeout. Perhaps something like: statement_max_select_rows = 0 # 0=disabled statement_max_update_rows = 0 # applies to insert/delete too That has the bonus of letting you set statement_max_update_rows=1 in an interactive session and catching WHERE clause typos. On the down-side, it means 2 more GUC variables, and I'm not sure how practical/efficient it is to detect a result set growing beyond that size. -- Richard Huxton Archonet Ltd
Re: [HACKERS] Raw size
varchar means 'character varying'. What varies is the length. So a varchar(1000) with 'foo' in it only takes a few bytes ('foo' plus length info) instead of 1000 bytes. Yes, I know, but I forgot to mention that all the values of this attribute are really 1000 characters long. If you really want a fixed-length field, nchar or char should do what you want.
Re: [HACKERS] Raw size
Is there any compression or what? Yes, there is: http://www.postgresql.org/docs/8.0/interactive/storage-toast.html Thanks. Is there any way to increase the limit above which the TOAST strategy is selected? By default it is BLCKSZ/4, about 2000 bytes.
[HACKERS] Explain Analyze mode
Hi, I found this form of output of explain analyze while browsing some old mails in the lists. test4=# explain analyze select * from patients; LOG: query: explain analyze select * from patients; LOG: duration: 0.603887 sec LOG: QUERY STATISTICS ! system usage stats: ! 0.624269 elapsed 0.458985 user 0.123047 system sec ! [0.468750 user 0.125000 sys total] ! 0/0 [0/0] filesystem blocks in/out ! 7/4 [310/158] page faults/reclaims, 0 [0] swaps ! 0 [0] signals rcvd, 0/0 [0/0] messages rcvd/sent ! 0/0 [0/0] voluntary/involuntary context switches ! buffer usage stats: ! Shared blocks: 2742 read, 0 written, buffer hit rate = 3.59% ! Local blocks: 0 read, 0 written, buffer hit rate = 0.00% ! Direct blocks: 0 read, 0 written QUERY PLAN - Seq Scan on patients (cost=0.00..4048.60 rows=131960 width=172) (actual time=0.04..562.97 rows=133886 loops=1) Total runtime: 602.42 msec (2 rows) How can I tune my configuration to achieve this output for explain analyze (rather than only the QUERY PLAN, as I get now)?
Re: [HACKERS] Grant ALL on schema
On Thu, Mar 10, 2005 at 09:01:21 -0800, Hemapriya [EMAIL PROTECTED] wrote: Hi, Does anybody know how the following statement works? Grant ALL on SCHEMA test to user 'user1'; Will the user be able to have all the privileges on all the objects/tables in schema test? Or can he only create new objects in that schema? That will grant CREATE and USAGE access to the schema, so that the user will be able to look up objects in the schema and create new objects in the schema. It won't change the access rights associated with objects in the schema.
Re: [HACKERS] Explain Analyze mode
See your postgresql.conf for log_planner_stats = true #false log_executor_stats = true #false #log_statement_stats = false Oleg On Fri, 11 Mar 2005, Ioannis Theoharis wrote: Hi, I found this form of output of explain analyze while browsing some old mails in the lists. test4=# explain analyze select * from patients; [...quoted statistics output trimmed...] How can I tune my configuration to achieve this output for explain analyze (rather than only the QUERY PLAN, as I get now)? Regards, Oleg _ Oleg Bartunov, sci.researcher, hostmaster of AstroNet, Sternberg Astronomical Institute, Moscow University (Russia) Internet: oleg@sai.msu.su, http://www.sai.msu.su/~megera/ phone: +007(095)939-16-83, +007(095)939-23-83
Re: [HACKERS] postgreSQL-8.0.1 configure --enable-thread-safety with
Neil Conway wrote: Bruce Momjian wrote: The attached patch should remove the warnings but I am not applying it because a non-static/extern global variable should be fine in C code. What's the harm in applying it? Variables and functions not used outside the compilation unit in which they are defined _should_ be marked static; it's not required, but I think it's good style. I didn't want to do it because I thought we would then have to do it in a lot of places, but I see pg_ctl.c does it, so I applied the patch, and fixed initdb.c to be 'static-clean' too. -- Bruce Momjian| http://candle.pha.pa.us pgman@candle.pha.pa.us | (610) 359-1001 + If your life is a hard drive, | 13 Roberts Road + Christ can be your backup.| Newtown Square, Pennsylvania 19073
Re: [HACKERS] postgreSQL-8.0.1 configure --enable-thread-safety with
Vikram Kalsi wrote: I was ignoring the warnings anyway. I didn't look into that much, but after upgrading to RHEL AS4, I am able to compile successfully with --enable-thread-safety OK, so there was some problem with AS3 and its use of the thread library. Glad it is working now. -- Bruce Momjian| http://candle.pha.pa.us pgman@candle.pha.pa.us | (610) 359-1001 + If your life is a hard drive, | 13 Roberts Road + Christ can be your backup.| Newtown Square, Pennsylvania 19073
Re: [pgsql-hackers-win32] [HACKERS] snprintf causes regression tests
Tom Lane wrote: [EMAIL PROTECTED] writes: Please see my posting about using a macro for snprintf. Wasn't the issue about odd behavior of the Win32 linker choosing the wrong vsnprintf? You're right, the point about the macro was to avoid linker weirdness on Windows. We need to do that part in any case. I think Bruce confused that issue with the one about whether our version supported %n$ adequately ... which it doesn't just yet ... Perhaps I am reading old email in this reply but I thought I should clarify: Once we do: #define vsnprintf(...) pg_vsnprintf(__VA_ARGS__) #define snprintf(...) pg_snprintf(__VA_ARGS__) #define printf(...) pg_printf(__VA_ARGS__) we also rename the functions in snprintf.c to pg_* names so there is no longer a conflict with the system libc versions. The macro is to prevent our snprintf from leaking out of libraries like libpq, not to fix the win32 linker problem, which we already had fixed by reordering the entries in the C file. Perhaps the macro idea originally came as a fix for Win32 but it is much larger than that now. -- Bruce Momjian| http://candle.pha.pa.us pgman@candle.pha.pa.us | (610) 359-1001 + If your life is a hard drive, | 13 Roberts Road + Christ can be your backup.| Newtown Square, Pennsylvania 19073
Re: [pgsql-hackers-win32] [HACKERS] snprintf causes regression tests
Nicolai Tufar wrote: On Thu, 10 Mar 2005 16:26:47 -0500 (EST), Bruce Momjian pgman@candle.pha.pa.us wrote: Please see my posting about using a macro for snprintf. If the current implementation of snprintf is enough for our existing translation users we probably don't need to add anything more to it because snprintf will not be exported to client applications. Oh, Bruce. It will be the best solution. I was worried about the problems with my modifications to snprintf.c Tom Lane pointed out. But if we really separate the snprintf() used by messages and the snprintf() used by the likes of src/backend/utils/adt/int8.c then we are safe. We can declare the current release safe and I will modify src/port/snprintf.c at my leisure later. I will try out your modifications tomorrow. It is late here and I have a PostgreSQL class to teach tomorrow ;) I still think that it is more convenient to rip out the current implementation of snprintf.c and replace it with a very much stripped-down version of Trio's. I will work on it and try to get a patch in one week's time. Thank you all for your patience. I am not heading in the direction of using a different snprintf for messages and for int8.c. I am just renaming the calls via macros so we don't leak snprintf from libpq. One new idea I had was to have pg_snprintf() look over the format string and adjust the arguments to match what the format string is requesting, remove %n$ from the format string, and then pass it to the native libc snprintf(). That might be the easiest solution for the many platforms with a good snprintf but no %n$ support. Is that possible/easier? -- Bruce Momjian| http://candle.pha.pa.us pgman@candle.pha.pa.us | (610) 359-1001 + If your life is a hard drive, | 13 Roberts Road + Christ can be your backup.| Newtown Square, Pennsylvania 19073
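Bruce's idea above — scan the format string, decide whether it uses positional specifiers, and only then fall back to rewriting it for the native snprintf — can be sketched as follows. This is a minimal illustration, not the actual src/port/snprintf.c code; the function name is hypothetical, and a real implementation would go on to reorder the va_list arguments and strip the "n$" parts before calling libc.

```c
#include <ctype.h>

/* Hypothetical helper, not actual PostgreSQL code: detect whether a
 * printf-style format string uses %n$ positional specifiers.  A
 * pg_snprintf along the lines sketched above would, on finding them,
 * reorder its arguments, remove the "n$" parts, and hand the rewritten
 * format to the native libc snprintf. */
static int
fmt_uses_positional(const char *fmt)
{
    const char *p;

    for (p = fmt; *p; p++)
    {
        if (*p != '%')
            continue;
        p++;
        if (*p == '\0')
            break;              /* stray '%' at end of string */
        if (*p == '%')
            continue;           /* literal "%%", not a conversion */
        if (isdigit((unsigned char) *p))
        {
            const char *q = p;

            while (isdigit((unsigned char) *q))
                q++;
            if (*q == '$')
                return 1;       /* found e.g. "%2$s" */
        }
    }
    return 0;
}
```

Note that a bare digit after '%' is not enough — "%10d" is a width, not a position — so the scan must look for the '$' that terminates the digit run.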
Re: [HACKERS] TODO item: support triggers on columns
Chris Mair wrote: Hello, I'd like to start working on the following TODO item: Referential Integrity / Support triggers on columns Is somebody else already working on this? No one, I think. -- Bruce Momjian| http://candle.pha.pa.us pgman@candle.pha.pa.us | (610) 359-1001 + If your life is a hard drive, | 13 Roberts Road + Christ can be your backup.| Newtown Square, Pennsylvania 19073
Re: [HACKERS] PostgreSQL pam ldap document
Adrian Nida wrote: All, I visited #postgresql @ FreeNode and asked about how to make pg use pam about a week ago (specifically I wanted to auth against LDAP). I was told to figure it out and write a doc... Here is my attempt at doing so: http://itc.musc.edu/wiki/PostGreSQL Please review for accuracy and/or proofreading. I get a not exists error on that URL. I assume you looked at: http://www.postgresql.org/docs/8.0/interactive/auth-methods.html#AUTH-PAM Do you have additions to it? -- Bruce Momjian| http://candle.pha.pa.us pgman@candle.pha.pa.us | (610) 359-1001 + If your life is a hard drive, | 13 Roberts Road + Christ can be your backup.| Newtown Square, Pennsylvania 19073
Re: [HACKERS] PostgreSQL pam ldap document
On Fri, Mar 11, 2005 at 11:42:53AM -0500, Bruce Momjian wrote: Adrian Nida wrote: All, I visited #postgresql @ FreeNode and asked about how to make pg use pam about a week ago (specifically I wanted to auth against LDAP). I was told to figure it out and write a doc... Here is my attempt at doing so: http://itc.musc.edu/wiki/PostGreSQL Please review for accuracy and/or proofreading. I get a not exists error on that URL. http://itc.musc.edu/wiki/PostgreSQL (only 4 capital letters) works. I assume you looked at: http://www.postgresql.org/docs/8.0/interactive/auth-methods.html#AUTH-PAM Do you have additions to it? 'pears so :) Cheers, D -- David Fetter [EMAIL PROTECTED] http://fetter.org/ phone: +1 510 893 6100 mobile: +1 415 235 3778 Remember to vote!
Re: [pgsql-hackers-win32] [HACKERS] snprintf causes regression
Tom Lane wrote: [EMAIL PROTECTED] writes: Please see my posting about using a macro for snprintf. Wasn't the issue about odd behavior of the Win32 linker choosing the wrong vsnprintf? You're right, the point about the macro was to avoid linker weirdness on Windows. We need to do that part in any case. I think Bruce confused that issue with the one about whether our version supported %n$ adequately ... which it doesn't just yet ... Perhaps I am reading old email in this reply but I thought I should clarify: Once we do: #define vsnprintf(...) pg_vsnprintf(__VA_ARGS__) #define snprintf(...) pg_snprintf(__VA_ARGS__) #define printf(...) pg_printf(__VA_ARGS__) I'm not sure that macros can have a variable number of arguments on all supported platforms. I've been burnt by this before.
Re: [HACKERS] TODO item: support triggers on columns
On Fri, Mar 11, 2005 at 11:32:04AM -0500, Bruce Momjian wrote: Chris Mair wrote: Hello, I'd like to start working on the following TODO item: Referential Integrity / Support triggers on columns Is somebody else already working on this? No one, I think. Isn't this the REFERENCING clause? I think there was a partial patch submitted already for this. -- Alvaro Herrera ([EMAIL PROTECTED]) Schwern It does it in a really, really complicated way crab why does it need to be complicated? Schwern Because it's MakeMaker.
Re: [pgsql-hackers-win32] [HACKERS] snprintf causes regression
[EMAIL PROTECTED] wrote: Tom Lane wrote: [EMAIL PROTECTED] writes: Please see my posting about using a macro for snprintf. Wasn't the issue about odd behavior of the Win32 linker choosing the wrong vsnprintf? You're right, the point about the macro was to avoid linker weirdness on Windows. We need to do that part in any case. I think Bruce confused that issue with the one about whether our version supported %n$ adequately ... which it doesn't just yet ... Perhaps I am reading old email in this reply but I thought I should clarify: Once we do: #define vsnprintf(...) pg_vsnprintf(__VA_ARGS__) #define snprintf(...) pg_snprintf(__VA_ARGS__) #define printf(...) pg_printf(__VA_ARGS__) I'm not sure that macros can have a variable number of arguments on all supported platforms. I've been burnt by this before. The actual patch is: + #ifdef __GNUC__ + #define vsnprintf(...) pg_vsnprintf(__VA_ARGS__) + #define snprintf(...) pg_snprintf(__VA_ARGS__) + #define printf(...) pg_printf(__VA_ARGS__) + #else + #define vsnprintf pg_vsnprintf + #define snprintf pg_snprintf + #define printf pg_printf + #endif -- Bruce Momjian| http://candle.pha.pa.us pgman@candle.pha.pa.us | (610) 359-1001 + If your life is a hard drive, | 13 Roberts Road + Christ can be your backup.| Newtown Square, Pennsylvania 19073
Re: [HACKERS] TODO item: support triggers on columns
Alvaro Herrera wrote: On Fri, Mar 11, 2005 at 11:32:04AM -0500, Bruce Momjian wrote: Chris Mair wrote: Hello, I'd like to start working on the following TODO item: Referential Integrity / Support triggers on columns Is somebody else already working on this? No one, I think. Isn't this the REFERENCING clause? I think there was a partial patch submitted already for this. The patch appears unrelated to column-level triggers: http://momjian.postgresql.org/cgi-bin/pgpatches2 --- The attached patch adds the optional REFERENCES syntax in CREATE TRIGGER statement to make an automatic alias for OLD/NEW record during trigger setup. The implementation of this new feature makes CREATE TRIGGER command more compatible to SQL standard, and allows the future implementation of executing SQL commands in trigger action. After the implementation, the extended syntax of statement is as follows. CREATE TRIGGER name BEFORE|AFTER INSERT|DELETE|UPDATE [OR...] ON tablename [REFERENCING OLD|NEW [AS] identifier] [FOR [EACH] ROW|STATEMENT] EXECUTE PROCEDURE funcname (arguments) -- Bruce Momjian| http://candle.pha.pa.us pgman@candle.pha.pa.us | (610) 359-1001 + If your life is a hard drive, | 13 Roberts Road + Christ can be your backup.| Newtown Square, Pennsylvania 19073
Re: [HACKERS] TODO item: support triggers on columns
I'd like to start working on the following TODO item: Referential Integrity / Support triggers on columns Is somebody else already working on this? No one, I think. Isn't this the REFERENCING clause? I think there was a partial patch submitted already for this. Those are two different things: http://www.postgresql.org/docs/8.0/static/sql-createtrigger.html#SQL-CREATETRIGGER-COMPATIBILITY Bye, Chris.
Re: [pgsql-hackers-win32] [HACKERS] snprintf causes regression
Bruce Momjian pgman@candle.pha.pa.us writes: I'm not sure that macros can have a variable number of arguments on all supported platforms. I've been burnt by this before. The actual patch is: + #ifdef __GNUC__ + #define vsnprintf(...) pg_vsnprintf(__VA_ARGS__) + #define snprintf(...) pg_snprintf(__VA_ARGS__) + #define printf(...) pg_printf(__VA_ARGS__) + #else + #define vsnprintf pg_vsnprintf + #define snprintf pg_snprintf + #define printf pg_printf + #endif Uh, why bother with the different approach for gcc? Also, what happened to fprintf? We're going to need that too for localization of the client programs. regards, tom lane
Re: [pgsql-hackers-win32] [HACKERS] snprintf causes regression
Tom Lane wrote: Bruce Momjian pgman@candle.pha.pa.us writes: I'm not sure that macros can have a variable number of arguments on all supported platforms. I've been burnt by this before. The actual patch is: + #ifdef __GNUC__ + #define vsnprintf(...) pg_vsnprintf(__VA_ARGS__) + #define snprintf(...) pg_snprintf(__VA_ARGS__) + #define printf(...) pg_printf(__VA_ARGS__) + #else + #define vsnprintf pg_vsnprintf + #define snprintf pg_snprintf + #define printf pg_printf + #endif Uh, why bother with the different approach for gcc? Because if we don't do that then the code above fails: extern int pg_snprintf(char *str, size_t count, const char *fmt,...) /* This extension allows gcc to check the format string */ __attribute__((format(printf, 3, 4))); The issue is that the printf here is interpreted specially by the compiler to mean check arguments as printf. If the preprocessor changes that, we get a failure. The good news is that only gcc supports arg checking using __attribute__ and it also supports the __VA_ARGS__ macros. What I think we do lose is argument checking for non-gcc, but this seems as close as we can get. Also, what happened to fprintf? We're going to need that too for localization of the client programs. It was never there. I will add it now. -- Bruce Momjian| http://candle.pha.pa.us pgman@candle.pha.pa.us | (610) 359-1001 + If your life is a hard drive, | 13 Roberts Road + Christ can be your backup.| Newtown Square, Pennsylvania 19073
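Putting the pieces of this exchange together, a self-contained sketch of the scheme might look like the following (simplified: the real tree splits this between a header and src/port/snprintf.c, and the stand-in pg_snprintf body here just forwards to the system vsnprintf rather than supplying its own %n$-aware engine). The key detail is that a function-like macro expands only where the name is followed by a parenthesis, so the printf token inside the format attribute survives, whereas the object-like fallback would rewrite it.

```c
#include <stdarg.h>
#include <stdio.h>

/* Sketch of the renaming scheme discussed above.  With the gcc
 * variadic-macro form, "snprintf"/"printf" expand only when followed
 * by "(", so the bare "printf" inside
 * __attribute__((format(printf, 3, 4))) is left intact; an object-like
 * "#define printf pg_printf" would mangle it, which is why the two
 * branches differ. */
extern int pg_snprintf(char *str, size_t count, const char *fmt, ...)
#ifdef __GNUC__
    __attribute__((format(printf, 3, 4)))
#endif
;

#ifdef __GNUC__
#define snprintf(...) pg_snprintf(__VA_ARGS__)
#else
#define snprintf pg_snprintf
#endif

/* Stand-in body just to make the sketch runnable: forward to the
 * system vsnprintf.  The real replacement implements formatting
 * itself, including %n$ positional arguments. */
int
pg_snprintf(char *str, size_t count, const char *fmt, ...)
{
    va_list args;
    int     len;

    va_start(args, fmt);
    len = vsnprintf(str, count, fmt, args);
    va_end(args);
    return len;
}
```

Any later call to snprintf() in code that includes this header is silently routed to pg_snprintf(), which is exactly the "don't leak our snprintf out of libpq" behavior Bruce describes.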
Re: [pgsql-hackers-win32] [HACKERS] snprintf causes regression
Bruce Momjian wrote: Tom Lane wrote: Bruce Momjian pgman@candle.pha.pa.us writes: I'm not sure that macros can have a variable number of arguments on all supported platforms. I've been burnt by this before. The actual patch is: + #ifdef __GNUC__ + #define vsnprintf(...) pg_vsnprintf(__VA_ARGS__) + #define snprintf(...) pg_snprintf(__VA_ARGS__) + #define printf(...) pg_printf(__VA_ARGS__) + #else + #define vsnprintf pg_vsnprintf + #define snprintf pg_snprintf + #define printf pg_printf + #endif Uh, why bother with the different approach for gcc? Because if we don't do that then the code above fails: extern int pg_snprintf(char *str, size_t count, const char *fmt,...) /* This extension allows gcc to check the format string */ __attribute__((format(printf, 3, 4))); The issue is that the printf here is interpreted specially by the compiler to mean check arguments as printf. If the preprocessor changes that, we get a failure. The good news is that only gcc supports arg checking using __attribute__ and it also supports the __VA_ARGS__ macros. What I think we do lose is argument checking for non-gcc, but this seems as close as we can get. I am adding a comment explaining why those macros are used. -- Bruce Momjian| http://candle.pha.pa.us pgman@candle.pha.pa.us | (610) 359-1001 + If your life is a hard drive, | 13 Roberts Road + Christ can be your backup.| Newtown Square, Pennsylvania 19073
[HACKERS] Bumping libpq version number?
Are we still bumping the libpq major version number for 8.0.2? I think it is a bad idea because we will require too many client apps to be recompiled, and we have had few problem reports. We do need to bump the major version number for 8.1 and I am doing that now. One new problem I see is that changes to libpgport could affect client apps that call libpq because they pull functions from pgport via libpq. For example, now that snprintf is called pg_snprintf, my initdb failed in the regression tests because the new initdb binary used pg_snprintf but the installed libpq (ld.so.conf) didn't have it yet. The bottom line is that we only used to require major libpq version bumps when we changed the libpq API. Now, with libpgport, I am concerned that changes in libpgport also will require a major version bump. This adds support to the idea that we will have to do a major libpq bump for every major release. -- Bruce Momjian| http://candle.pha.pa.us pgman@candle.pha.pa.us | (610) 359-1001 + If your life is a hard drive, | 13 Roberts Road + Christ can be your backup.| Newtown Square, Pennsylvania 19073
Re: [ADMIN] [HACKERS] PostgreSQL pam ldap document
Snip/ Here is my attempt at doing so: http://itc.musc.edu/wiki/PostGreSQL Snip/ I get a not exists error on that URL. Sorry, I renamed the URL after someone pointed out the correct spelling. This was a link to the old one. I apologize for the confusion, the right URL is: http://itc.musc.edu/wiki/PostgreSQL I assume you looked at: http://www.postgresql.org/docs/8.0/interactive/auth-methods.html#AUTH-PAM Do you have additions to it? Yes, I did look at it. No offense to the original author, but my doc has a lot more than the four sentences that are there. I was hoping it would help others in my situation. Again any and all comments/questions/blah are appreciated. Thanks, Adrian
Re: [HACKERS] Bumping libpq version number?
* Bruce Momjian (pgman@candle.pha.pa.us) wrote: Are we still bumping the libpq major version number for 8.0.2? I think it is a bad idea because we will require too many client apps to be recompiled, and we have had few problem reports. We do need to bump the major version number for 8.1 and I am doing that now. One new problem I see is that changes to libpgport could affect client apps that call libpq because they pull functions from pgport via libpq. For example, now that snprintf is called pg_snprintf, my initdb failed in the regression tests because the new initdb binary used pg_snprintf but the installed libpq (ld.so.conf) didn't have it yet. The bottom line is that we only used to require major libpq version bumps when we changed the libpq API. Now, with libpgport, I am concerned that changes in libpgport also will require a major version bump. This adds support to the idea that we will have to do a major libpq bump for every major release. Uh, major libpq version bumps should happen when there's an incompatible ABI change. I'm not entirely sure how libpgport relates, but libpq versions shouldn't be explicitly linked to major release numbers and it's possible for them to change between major releases... Stephen
Re: [HACKERS] Data loss, vacuum, transaction wrap-around
On Sat, Feb 19, 2005 at 04:49:03PM -0500, [EMAIL PROTECTED] wrote: PostgreSQL is such an awesome project. The only thing it seems to suffer from is a disregard for its users. Gee. And all this time I thought that free support from the guy who wrote the code and gave it to you was better regard for the users than cryptic support by someone who is reading a script and who's afraid of the legal department. Silly me. A -- Andrew Sullivan | [EMAIL PROTECTED] When my information changes, I alter my conclusions. What do you do sir? --attr. John Maynard Keynes
Re: [HACKERS] Bumping libpq version number?
Kurt Roeckx wrote: On Fri, Mar 11, 2005 at 12:58:28PM -0500, Bruce Momjian wrote: Are we still bumping the libpq major version number for 8.0.2? I think it is a bad idea because we will require too many client apps to be recompiled, and we have had few problem reports. We do need to bump the major version number for 8.1 and I am doing that now. One new problem I see is that changes to libpgport could affect client apps that call libpq because they pull functions from pgport via libpq. For example, now that snprintf is called pg_snprintf, my initdb failed in the regression tests because the new initdb binary used pg_snprintf but the installed libpq (ld.so.conf) didn't have it yet. Does initdb call pg_snprintf directly? Or does it call some libpq function that calls it? With the current CVS, initdb calls pg_snprintf() on my platform, which doesn't support %n$ natively in my libc printf. Now, initdb could pull from pgport itself but I think it pulled from libpq first. Perhaps we should reorder how those libraries appear in the link line, but I think that would fix just this case, not the more general one of pg client apps. The bottom line is that we only used to require major libpq version bumps when we changed the libpq API. Now, with libpgport, I am concerned that changes in libpgport also will require a major version bump. This adds support to the idea that we will have to do a major libpq bump for every major release. Soname changes really should only happen in case of API or ABI changes and I think you really should try to avoid them. I'm not sure why you think it's required now. We had the problem with 8.0.X where we removed get_progname from libpq and psql and friends were pulling that from libpq in the past for 7.4. Also, I think it's a lot better to actually do soname changes to libraries if things can break. I don't see having 2 library versions around as a problem. And I'd rather have something I just know is not going to work.
Yes, that is where I am leaning in this discussion because libpgport varies much more frequently than the libpq API. However, changing the link order should fix that so let me do that and see if it helps. -- Bruce Momjian| http://candle.pha.pa.us pgman@candle.pha.pa.us | (610) 359-1001 + If your life is a hard drive, | 13 Roberts Road + Christ can be your backup.| Newtown Square, Pennsylvania 19073
Re: [HACKERS] Cost of XLogInsert CRC calculations
One of the things I was thinking about was whether we could use up those cycles more effectively. If we were to include a compression routine before we calculated the CRC, that would:
- reduce the size of the blocks to be written, hence reduce the size of xlog
- reduce the following CRC calculation

I was thinking about using a simple run-length encoding to massively shrink half-empty blocks with lots of zero padding, but we've already got code to LZW the data down also.

Best Regards, Simon Riggs

---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]

Simon, I think having a compression routine in there could make real sense. We did some major I/O testing involving compression for a large customer some time ago. We have seen that compressing / decompressing on the fly is in MOST cases much faster than uncompressed I/O (try a simple cat file | ... vs. zcat file.gz | ...) - the zcat version will be faster on all platforms we have tried (Linux, AIX, Sun on some SAN system, etc. ...). Also, when building up a large database within one transaction the xlog will eat a lot of storage (this can be quite annoying when you have to deal with a lot of data).

Are there any technical reasons which would prevent somebody from implementing compression?

Best regards, Hans
--
Cybertec Geschwinde u Schoenig
Schoengrabern 134, A-2020 Hollabrunn, Austria
Tel: +43/660/816 40 77
www.cybertec.at, www.postgresql.at

---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?
http://www.postgresql.org/docs/faq
Re: [HACKERS] SQL99 Hierarchical queries
Hi Evgen, How's the syncing with HEAD going? Cheers, Chris I'm working on it. Regards, Evgen. ---(end of broadcast)--- TIP 8: explain analyze is your friend
Re: [HACKERS] Bumping libpq version number?
Kurt Roeckx wrote: On Fri, Mar 11, 2005 at 01:29:46PM -0500, Bruce Momjian wrote: Kurt Roeckx wrote: Does initdb call pg_snprintf directly? Or does it call some libpq function that calls it? With the current CVS, initdb calls pg_snprintf() on my platform which doesn't support %$ natively on my libc printf. Now, initdb could pull from pgport itself but I think it pulled from libpq first. Perhaps we should reorder how those libraries appear in the link line but I think that would fix just this case, not the more general one of pg client apps. Do client apps ever link to pgport itself? I assume only internal applications link to it? They do. I assume libpq is statically linked to pgport and is exporting symbols it shouldn't. Can we prevent it from exporting those symbols? I can think of no way to prevent it, except on Win32 that has an exports file. -- Bruce Momjian| http://candle.pha.pa.us pgman@candle.pha.pa.us | (610) 359-1001 + If your life is a hard drive, | 13 Roberts Road + Christ can be your backup.| Newtown Square, Pennsylvania 19073 ---(end of broadcast)--- TIP 2: you can get off all lists at once with the unregister command (send unregister YourEmailAddressHere to [EMAIL PROTECTED])
Re: [HACKERS] [PATCHES] BUG #1466: syslogger issues
I would like to apply this patch, and I think it is important enough that it should be backpatched in to 8.0.X. Any objections? --- Magnus Hagander wrote: There is special code in the send_message_to_server_log function to make sure it's written directly to the file. If the logger is complaining, it's quite possibly because it's unable to write to its file. Now that you mention it, doesn't this code go into infinite recursion if write_syslogger_file_binary() tries to ereport? Yes, apparently. Actually, elog.c code should look like this:

	if ((Log_destination & LOG_DESTINATION_STDERR) ...)
	{
		if (am_syslogger)
			write_syslogger_file(buf.data, buf.len);
		else
			fwrite(buf.data, 1, buf.len, stderr);
	}

This avoids unnecessary pipe traffic (which might fail too) and gettext translation. That's sort of what I thought, but without being certain at all. Next, the elog call in write_syslogger_file_binary will almost certainly loop, so it should call write_stderr then (since eventlog is usually fixed-size with cyclic writing, even in out-of-disk-space conditions something might get logged). Ok. I've included these changes in the attached patch. Haven't tested those specific codepaths, but the other changes still work... 3rd, I've been proposing to have redirect_stderr=true on by default at least on win32 earlier, I still think this is reasonable. It's already the default if you install from the MSI installer. //Magnus Content-Description: stderr.patch [ Attachment, skipping... ] ---(end of broadcast)--- TIP 6: Have you searched our list archives? http://archives.postgresql.org -- Bruce Momjian| http://candle.pha.pa.us pgman@candle.pha.pa.us | (610) 359-1001 + If your life is a hard drive, | 13 Roberts Road + Christ can be your backup.| Newtown Square, Pennsylvania 19073 ---(end of broadcast)--- TIP 4: Don't 'kill -9' the postmaster
Re: [HACKERS] Bumping libpq version number?
Bruce Momjian wrote: Are we still bumping the libpq major version number for 8.0.2? Yes. I think it is a bad idea because we will require too many client apps to be recompiled, That is not true, as previously discussed. -- Peter Eisentraut http://developer.postgresql.org/~petere/ ---(end of broadcast)--- TIP 8: explain analyze is your friend
Re: [HACKERS] [pgsql-hackers-win32] Repleacement for src/port/snprintf.c
I have reviewed this patch, and I already added these changes myself in CVS. Thanks. --- Nicolai Tufar wrote: On Mon, Feb 21, 2005 at 10:53:08PM -0500, Bruce Momjian wrote: Applied. Thanks a lot. The patch attached solves the thread safety problem. Please review it before applying, I am not sure I am doing the right thing. On Tue, 22 Feb 2005 19:57:15 +0100, Kurt Roeckx [EMAIL PROTECTED] wrote: The configure test is a little broken. It needs to quote the $'s. I've rewritten the test a little. This one needs applying too. $'s do get scrambled. Best regards, Nicolai. [ Attachment, skipping... ] ---(end of broadcast)--- TIP 7: don't forget to increase your free space map settings -- Bruce Momjian| http://candle.pha.pa.us pgman@candle.pha.pa.us | (610) 359-1001 + If your life is a hard drive, | 13 Roberts Road + Christ can be your backup.| Newtown Square, Pennsylvania 19073 ---(end of broadcast)--- TIP 6: Have you searched our list archives? http://archives.postgresql.org
Re: [HACKERS] Bumping libpq version number?
Peter Eisentraut wrote: Bruce Momjian wrote: Are we still bumping the libpq major version number for 8.0.2? Yes. I think it is a bad idea because we will require too many client apps to be recompiled, That is not true, as previously discussed. It is not true only if the old libpq stays around, which isn't always the case. -- Bruce Momjian| http://candle.pha.pa.us pgman@candle.pha.pa.us | (610) 359-1001 + If your life is a hard drive, | 13 Roberts Road + Christ can be your backup.| Newtown Square, Pennsylvania 19073 ---(end of broadcast)--- TIP 8: explain analyze is your friend
Re: [HACKERS] Bumping libpq version number?
pgman wrote: Peter Eisentraut wrote: Bruce Momjian wrote: Are we still bumping the libpq major version number for 8.0.2? Yes. I think it is a bad idea because we will require too many client apps to be recompiled, That is not true, as previously discussed. It is not true only if the old libpq stays around, which isn't always the case. In fact, based on the few complaints we have heard about the current situation, I am sure we are going to get many more complaints if we bump up the major version in 8.0.2. -- Bruce Momjian| http://candle.pha.pa.us pgman@candle.pha.pa.us | (610) 359-1001 + If your life is a hard drive, | 13 Roberts Road + Christ can be your backup.| Newtown Square, Pennsylvania 19073 ---(end of broadcast)--- TIP 7: don't forget to increase your free space map settings
Re: [HACKERS] [PATCHES] BUG #1466: syslogger issues
Bruce Momjian pgman@candle.pha.pa.us writes: I would like to apply this patch, and I think it is important enough that it should be backpatched in to 8.0.X. Any objections? I wanted to review the patch before it went in. Will try to get to it soon. regards, tom lane ---(end of broadcast)--- TIP 3: if posting/reading through Usenet, please send an appropriate subscribe-nomail command to [EMAIL PROTECTED] so that your message can get through to the mailing list cleanly
Re: [HACKERS] [PATCHES] BUG #1466: syslogger issues
Tom Lane wrote: Bruce Momjian pgman@candle.pha.pa.us writes: I would like to apply this patch, and I think it is important enough that it should be backpatched in to 8.0.X. Any objections? I wanted to review the patch before it went in. Will try to get to it soon. OK, I will just keep it in the patch queue. -- Bruce Momjian| http://candle.pha.pa.us pgman@candle.pha.pa.us | (610) 359-1001 + If your life is a hard drive, | 13 Roberts Road + Christ can be your backup.| Newtown Square, Pennsylvania 19073 ---(end of broadcast)--- TIP 6: Have you searched our list archives? http://archives.postgresql.org
Re: [HACKERS] Bumping libpq version number?
On Fri, 11 Mar 2005, Bruce Momjian wrote: Peter Eisentraut wrote: Bruce Momjian wrote: Are we still bumping the libpq major version number for 8.0.2? Yes. I think it is a bad idea because we will require too many client apps to be recompiled, That is not true, as previously discussed. It is not true only if the old libpq stays around, which isn't always the case. that's an administrative decision though, not ours ... we have source code going back to 6.x, so it isn't like older versions aren't available ... Marc G. Fournier Hub.Org Networking Services (http://www.hub.org) Email: [EMAIL PROTECTED] Yahoo!: yscrappy ICQ: 7615664 ---(end of broadcast)--- TIP 9: the planner will ignore your desire to choose an index scan if your joining column's datatypes do not match
Re: [HACKERS] Bumping libpq version number?
Bruce Momjian wrote: In fact, based on the few complaints we have heard about the current situation, I am sure we are going to get many more complaints if we bump up the major version in 8.0.2. The lack of complaints is because the packagers that have recognized the problem are refusing to package PostgreSQL 8.0 until the problem is resolved. -- Peter Eisentraut http://developer.postgresql.org/~petere/ ---(end of broadcast)--- TIP 7: don't forget to increase your free space map settings
Re: [HACKERS] Bumping libpq version number?
Bruce Momjian wrote: It is not true only if the old libpq stays around, which isn't always the case. I think that would clearly be a Don't-do-that-then situation. -- Peter Eisentraut http://developer.postgresql.org/~petere/ ---(end of broadcast)--- TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]
Re: [HACKERS] Bumping libpq version number?
Peter Eisentraut wrote: Bruce Momjian wrote: In fact, based on the few complaints we have heard about the current situation, I am sure we are going to get many more complaints if we bump up the major version in 8.0.2. The lack of complaints is because the packagers that have recognized the problem are refusing the package PostgreSQL 8.0 until the problem is resolved. OK, I will bump them. What should I do with 8.1? Make it another major bump? -- Bruce Momjian| http://candle.pha.pa.us pgman@candle.pha.pa.us | (610) 359-1001 + If your life is a hard drive, | 13 Roberts Road + Christ can be your backup.| Newtown Square, Pennsylvania 19073 ---(end of broadcast)--- TIP 9: the planner will ignore your desire to choose an index scan if your joining column's datatypes do not match
Re: [pgsql-hackers-win32] [HACKERS] snprintf causes regression tests to fail
On Thu, 10 Mar 2005 19:21:41 -0500 (EST), Bruce Momjian pgman@candle.pha.pa.us wrote:

The CVS-tip implementation is fundamentally broken and won't work even for our internal uses. I've not wasted time complaining about it because I thought we were going to replace it. If we can't find a usable replacement then we're going to have to put a lot of effort into fixing what's there. On the whole I think the effort would be better spent importing someone else's solution.

Oh, so our existing implementation doesn't even meet our needs. OK.

Which made me wonder why I did not agree with Tom Lane's suggestion to do three passes instead of two. Tom was right, as usual. It happened to be much easier than I expected. The patch is attached. Please apply.

Tom, what do you think? Will it be fine with you?

Best regards,
Nicolai

*** ./src/port/snprintf.c.orig	Sat Mar 12 01:28:49 2005
--- ./src/port/snprintf.c	Sat Mar 12 01:08:30 2005
***************
*** 195,200 ****
--- 195,202 ----
  	int		pointflag;
  	char	func;
  	int		realpos;
+ 	int		longflag;
+ 	int		longlongflag;
  } *fmtpar, **fmtparptr;

  	/* Create enough structures to hold all arguments */
***************
*** 263,274 ****
--- 265,278 ----
  				realpos = position;
  				len = 0;
  				goto nextch;
+ /*
  			case '*':
  				if (pointflag)
  					maxwidth = va_arg(args, int);
  				else
  					len = va_arg(args, int);
  				goto nextch;
+ */
  			case '.':
  				pointflag = 1;
  				goto nextch;
***************
*** 300,315 ****
  #endif
  			case 'u':
  			case 'U':
! 				/* fmtnum(value,base,dosign,ljust,len,zpad,output) */
! 				if (longflag)
! 				{
! 					if (longlongflag)
! 						value = va_arg(args, uint64);
! 					else
! 						value = va_arg(args, unsigned long);
! 				}
! 				else
! 					value = va_arg(args, unsigned int);
  				fmtpar[fmtpos].fmtbegin = fmtbegin;
  				fmtpar[fmtpos].fmtend = format;
  				fmtpar[fmtpos].numvalue = value;
--- 304,311 ----
  #endif
  			case 'u':
  			case 'U':
! 				fmtpar[fmtpos].longflag = longflag;
! 				fmtpar[fmtpos].longlongflag = longlongflag;
  				fmtpar[fmtpos].fmtbegin = fmtbegin;
  				fmtpar[fmtpos].fmtend = format;
  				fmtpar[fmtpos].numvalue = value;
***************
*** 324,339 ****
  				break;
  			case 'o':
  			case 'O':
! 				/* fmtnum(value,base,dosign,ljust,len,zpad,output) */
! 				if (longflag)
! 				{
! 					if (longlongflag)
! 						value = va_arg(args, uint64);
! 					else
! 						value = va_arg(args, unsigned long);
! 				}
! 				else
! 					value = va_arg(args, unsigned int);
  				fmtpar[fmtpos].fmtbegin = fmtbegin;
  				fmtpar[fmtpos].fmtend = format;
  				fmtpar[fmtpos].numvalue = value;
--- 320,327 ----
  				break;
  			case 'o':
  			case 'O':
! 				fmtpar[fmtpos].longflag = longflag;
! 				fmtpar[fmtpos].longlongflag = longlongflag;
  				fmtpar[fmtpos].fmtbegin = fmtbegin;
  				fmtpar[fmtpos].fmtend = format;
  				fmtpar[fmtpos].numvalue = value;
***************
*** 348,364 ****
  				break;
  			case 'd':
  			case 'D':
! 				if (longflag)
! 				{
! 					if (longlongflag)
! 					{
! 						value = va_arg(args, int64);
! 					}
! 					else
! 						value = va_arg(args, long);
! 				}
! 				else
! 					value = va_arg(args, int);
  				fmtpar[fmtpos].fmtbegin = fmtbegin;
  				fmtpar[fmtpos].fmtend = format;
  				fmtpar[fmtpos].numvalue = value;
--- 336,343 ----
! 				fmtpar[fmtpos].longflag = longflag;
! 				fmtpar[fmtpos].longlongflag = longlongflag;
  				fmtpar[fmtpos].fmtbegin = fmtbegin;
  				fmtpar[fmtpos].fmtend = format;
  				fmtpar[fmtpos].numvalue = value;
***************
*** 372,386 ****
  				fmtpos++;
  				break;
  			case 'x':
! 				if (longflag)
! 				{
! 					if (longlongflag)
! 						value = va_arg(args, uint64);
! 					else
! 						value = va_arg(args, unsigned long);
! 				}
! 				else
! 					value = va_arg(args, unsigned int);
  				fmtpar[fmtpos].fmtbegin = fmtbegin;
  				fmtpar[fmtpos].fmtend = format;
  				fmtpar[fmtpos].numvalue = value;
--- 351,358 ----
  				fmtpos++;
  				break;
  			case 'x':
! 				fmtpar[fmtpos].longflag = longflag;
! 				fmtpar[fmtpos].longlongflag = longlongflag;
  				fmtpar[fmtpos].fmtbegin = fmtbegin;
  				fmtpar[fmtpos].fmtend = format;
  				fmtpar[fmtpos].numvalue = value;
***************
*** 394,408 ****
  				fmtpos++;
  				break;
  			case 'X':
! 				if (longflag)
! 				{
! 					if (longlongflag)
! 						value = va_arg(args, uint64);
! 					else
! 						value = va_arg(args, unsigned long);
! 				}
! 				else
! 					value = va_arg(args, unsigned int);
  				fmtpar[fmtpos].fmtbegin = fmtbegin;
  				fmtpar[fmtpos].fmtend = format;
  				fmtpar[fmtpos].numvalue = value;
--- 366,373 ----
  				fmtpos++;
[HACKERS] partial vacuum
Hi all,

I'm thinking about a "partial (or range) vacuum" feature.

As you know, the vacuum process scans and re-organizes a whole table, so a huge cpu load will be generated when vacuuming a large table, and it will take a long time (in some cases, it may take 10 minutes or more).

A huge vacuum process hurts system performance. On the other hand, dead tuples also hurt performance.

So, I imagine that if the vacuum process can be done partially, the huge vacuum load can be split up, and the performance penalty of the vacuum can be reduced (spread out).

"Partial (or range) vacuum" means vacuuming a part of the table. For example, if you have a 10 Gbytes table, you can execute vacuum partially, 10 times, 1 Gbytes each.

The attached patch extends the vacuum syntax and the lazy_scan_heap() function. The backend can process the partial vacuum command as below:

psql$ vacuum table1 (0, 100);

In the above command, "0" means the start block number, and "100" means the end block number of the vacuum scan.

The attached image contains three graphs generated with the pgstatpage() function (also attached).

1.) distribution of freespace of the "tellers" table after pgbench.
2.) after partial vacuum, between block 200 and block 400.
3.) after pgbench running again.

(X-axis: block number, Y-axis: freespace size of a page)

I think partial vacuum and an intelligent pg_autovacuum would bring the postgres backend close to vacuum-less operation.

Is this interesting? Any comments?
--
NAGAYASU Satoshi [EMAIL PROTECTED]
OpenSource Development Center,
NTT DATA Corp.
http://www.nttdata.co.jp

diff -ru postgresql-7.4.6.orig/src/backend/commands/vacuum.c postgresql-7.4.6/src/backend/commands/vacuum.c
--- postgresql-7.4.6.orig/src/backend/commands/vacuum.c	2003-10-03 08:19:44.0 +0900
+++ postgresql-7.4.6/src/backend/commands/vacuum.c	2005-02-28 20:19:55.0 +0900
@@ -743,6 +743,12 @@
 	Oid			toast_relid;
 	bool		result;
 
+	if ( vacstmt->range )
+		elog(NOTICE, "vacuum_rel(): range %ld...%ld",
+			 vacstmt->range->start, vacstmt->range->end);
+	else
+		elog(NOTICE, "vacuum_rel(): no range.");
+
 	/* Begin a transaction for vacuuming this relation */
 	StartTransactionCommand();
 	SetQuerySnapshot();		/* might be needed for functions in
diff -ru postgresql-7.4.6.orig/src/backend/commands/vacuumlazy.c postgresql-7.4.6/src/backend/commands/vacuumlazy.c
--- postgresql-7.4.6.orig/src/backend/commands/vacuumlazy.c	2003-09-25 15:57:59.0 +0900
+++ postgresql-7.4.6/src/backend/commands/vacuumlazy.c	2005-02-28 20:26:08.0 +0900
@@ -91,7 +91,8 @@
 /* non-export function prototypes */
 static void lazy_scan_heap(Relation onerel, LVRelStats *vacrelstats,
-			   Relation *Irel, int nindexes);
+			   Relation *Irel, int nindexes,
+			   BlockNumber startBlock, BlockNumber endBlock);
 static void lazy_vacuum_heap(Relation onerel, LVRelStats *vacrelstats);
 static void lazy_scan_index(Relation indrel, LVRelStats *vacrelstats);
 static void lazy_vacuum_index(Relation indrel, LVRelStats *vacrelstats);
@@ -149,7 +150,12 @@
 	hasindex = (nindexes > 0);
 
 	/* Do the vacuuming */
-	lazy_scan_heap(onerel, vacrelstats, Irel, nindexes);
+	if ( vacstmt->range )
+		lazy_scan_heap(onerel, vacrelstats, Irel, nindexes,
+					   vacstmt->range->start, vacstmt->range->end);
+	else
+		lazy_scan_heap(onerel, vacrelstats, Irel, nindexes,
+					   0, RelationGetNumberOfBlocks(onerel));
 
 	/* Done with indexes */
 	vac_close_indexes(nindexes, Irel);
@@ -184,7 +190,8 @@
  */
 static void
 lazy_scan_heap(Relation onerel, LVRelStats *vacrelstats,
-			   Relation *Irel, int nindexes)
+			   Relation *Irel, int nindexes,
+			   BlockNumber startBlock, BlockNumber endBlock)
 {
 	BlockNumber nblocks,
 				blkno;
@@ -209,13 +216,20 @@
 	empty_pages = 0;
 	num_tuples = tups_vacuumed = nkeep = nunused = 0;
 
-	nblocks = RelationGetNumberOfBlocks(onerel);
+//	nblocks = RelationGetNumberOfBlocks(onerel);
+	if ( endBlock < RelationGetNumberOfBlocks(onerel) )
+		nblocks = endBlock;
+	else
+		nblocks = RelationGetNumberOfBlocks(onerel);
+
 	vacrelstats->rel_pages = nblocks;
 	vacrelstats->nonempty_pages = 0;
 
 	lazy_space_alloc(vacrelstats, nblocks);
 
-	for (blkno = 0; blkno < nblocks; blkno++)
+	elog(NOTICE, "lazy_vacuum_heap: range %d...%d", startBlock, nblocks);
+
+	for (blkno = startBlock; blkno < nblocks; blkno++)
 	{
 		Buffer		buf;
Re: [HACKERS] partial vacuum
Satoshi Nagayasu [EMAIL PROTECTED] writes: Attached patch extends vacuum syntax and lazy_scan_heap() function. Backend can process the partial vacuum command as below: psql$ vacuum table1 (0, 100); In the above command, 0 means start block number, and 100 means end block number of the vacuum scan. I think the major problem with this is the (untenable) assumption that the user is keeping track of the table size accurately. It'd be very likely that portions of the table get missed if someone tries to maintain a table using only partial vacuums specified in this way. I thought about specifying the range using percentages instead of raw block numbers, but that's got equally bad problems of its own. (If the table size changes, then successive vacuums from 0-10 and 10-20% could miss a few blocks in between.) More generally, any sort of partial vacuum operation is going to be inherently inefficient because of excessive index scanning --- if you chop the table into tenths, say, you are probably doing five or so extra index scans to complete the operation, because of scans forced with only partially full vacuum memory. Unless you want to redesign the way index cleanup is done, you won't be able to use this feature with a scan size small enough that it really makes a meaningful reduction in the system load produced by a vacuum. Have you looked at the vacuum cost delay features present in 8.0? On the whole that seems like a better solution for reducing the impact of routine vacuuming than trying to manage partial vacuuming with an approach like this. regards, tom lane ---(end of broadcast)--- TIP 2: you can get off all lists at once with the unregister command (send unregister YourEmailAddressHere to [EMAIL PROTECTED])
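For reference, the cost-based vacuum delay Tom mentions is driven by a handful of postgresql.conf settings in 8.0. A sketch with the shipped defaults (note vacuum_cost_delay defaults to 0, i.e. the feature is disabled until you set a delay):

```
# Vacuum accumulates "cost points" as it touches pages; once the total
# reaches vacuum_cost_limit, it sleeps for vacuum_cost_delay and resets.
vacuum_cost_delay = 10		# milliseconds; 0 disables cost-based delay
vacuum_cost_page_hit = 1	# page found in shared buffers
vacuum_cost_page_miss = 10	# page read from disk
vacuum_cost_page_dirty = 20	# page dirtied by vacuum
vacuum_cost_limit = 200		# accumulated cost that triggers a sleep
```

This spreads a single full-table vacuum out over time instead of splitting it into ranges, which sidesteps the repeated-index-scan problem described above.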
Re: [HACKERS] partial vacuum
Tom Lane wrote:

I think the major problem with this is the (untenable) assumption that the user is keeping track of the table size accurately. It'd be very likely that portions of the table get missed if someone tries to maintain a table using only partial vacuums specified in this way.

I thought about specifying the range using percentages instead of raw block numbers, but that's got equally bad problems of its own. (If the table size changes, then successive vacuums from 0-10 and 10-20% could miss a few blocks in between.)

Yes. If the user wants to use partial vacuum efficiently, partial vacuum will need some supporting tools to keep track of the table size changes. For example, an (intelligent) pg_autovacuum or something like that.

More generally, any sort of partial vacuum operation is going to be inherently inefficient because of excessive index scanning --- if you chop the table into tenths, say, you are probably doing five or so extra index scans to complete the operation, because of scans forced with only partially full vacuum memory. Unless you want to redesign the way index cleanup is done, you won't be able to use this feature with a scan size small enough that it really makes a meaningful reduction in the system load produced by a vacuum.

Yes. It's a difficult point for me. Now, I have no idea how to handle indexes in partial vacuum, but I want to find the answer...

Have you looked at the vacuum cost delay features present in 8.0? On the whole that seems like a better solution for reducing the impact of routine vacuuming than trying to manage partial vacuuming with an approach like this.

I've not tried it yet.

I guess postgres is going to have a background process to reduce the vacuum impact on the system load, as we got the bgwriter for checkpoint handling. Right?

Thanks for the comments.
--
NAGAYASU Satoshi [EMAIL PROTECTED]
OpenSource Development Center,
NTT DATA Corp. http://www.nttdata.co.jp

---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate subscribe-nomail command to [EMAIL PROTECTED] so that your message can get through to the mailing list cleanly
Re: [HACKERS] partial vacuum
Have you looked at the vacuum cost delay features present in 8.0? On the whole that seems like a better solution for reducing the impact of routine vacuuming than trying to manage partial vacuuming with an approach like this. IMO vacuum cost delay seems not to be a solution. To keep a long-running system's performance steady, we need to avoid table/index bloat (I assume the incoming transaction rate is constant). Surely vacuum delay reduces the impact, but the cost is that it takes longer to salvage free space, and the FSM will run out due to incoming transactions, no? -- Tatsuo Ishii ---(end of broadcast)--- TIP 8: explain analyze is your friend
Re: [HACKERS] Raw size
Is there any compression or what? Yes, there is: http://www.postgresql.org/docs/8.0/interactive/storage-toast.html thanks, is there any way to increase the threshold above which the TOAST strategy is selected? By default it is BLCKSZ/4 = about 2000 bytes. Dunno, but you can alter the column and go 'set storage' to control the storage strategy for TOAST on the column... Chris ---(end of broadcast)--- TIP 3: if posting/reading through Usenet, please send an appropriate subscribe-nomail command to [EMAIL PROTECTED] so that your message can get through to the mailing list cleanly
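Chris's 'set storage' suggestion looks like this in practice (the table and column names here are invented for illustration):

```sql
-- Per-column TOAST strategies:
--   PLAIN    : no compression, no out-of-line storage
--   MAIN     : compression allowed, out-of-line only as a last resort
--   EXTERNAL : out-of-line storage allowed, but no compression
--   EXTENDED : both allowed (the default for toastable types)
ALTER TABLE documents ALTER COLUMN body SET STORAGE EXTERNAL;
```

Note this only changes *how* oversized values are stored, not *when* toasting kicks in; the roughly-2KB trigger threshold is a compile-time constant (TOAST_TUPLE_THRESHOLD in the 8.0 sources), so raising it means rebuilding the server.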
[HACKERS] cvsup binaries?
The PostgreSQL docs suggest that ftp.postgresql.org holds binary builds of CVSup: http://developer.postgresql.org/docs/postgres/cvsup.html Is this still the case? (I couldn't see any cvsup binaries, but perhaps they are well-hidden). -Neil ---(end of broadcast)--- TIP 2: you can get off all lists at once with the unregister command (send unregister YourEmailAddressHere to [EMAIL PROTECTED])
Re: [HACKERS] cvsup binaries?
Neil Conway [EMAIL PROTECTED] writes: The PostgreSQL docs suggest that ftp.postgresql.org holds binary builds of CVSup: Is this still the case? (I couldn't see any cvsup binaries, but perhaps they are well-hidden). If they are still there, they're probably exceedingly out of date :-( regards, tom lane ---(end of broadcast)--- TIP 4: Don't 'kill -9' the postmaster