Re: [HACKERS] MultiXacts & WAL
> May be this is needed to support savepoints/subtransactions? Or is it
> something else that I am missing?

It's for two-phase commit. A prepared transaction can hold locks that need
to be recovered.

When a transaction (successfully) enters the prepared state it retains only
its exclusive locks and releases any shared locks (i.e. multixacts)... or,
at least, that's how it should be in principle according to serialization
theory; I haven't yet checked whether this is what PostgreSQL actually does.
Re: [HACKERS] MultiXacts & WAL
On Sat, 17 Jun 2006, paolo romano wrote:
> May be this is needed to support savepoints/subtransactions? Or is it
> something else that i am missing?

It's for two-phase commit. A prepared transaction can hold locks that need
to be recovered.

- Heikki

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
subscribe-nomail command to [EMAIL PROTECTED] so that your message
can get through to the mailing list cleanly
Re: [HACKERS] Preventing DELETE and UPDATE without a WHERE clause?
On 6/16/06, Mark Woodward <[EMAIL PROTECTED]> wrote:
> Chris Campbell <[EMAIL PROTECTED]> writes:
>> I heard an interesting feature request today: preventing the
>> execution of a DELETE or UPDATE query that does not have a WHERE clause.
>
> These syntaxes are required by the SQL spec. Furthermore, it's easy
> to imagine far-more-probable cases in which the system wouldn't detect
> that you'd made a mistake, eg
>
>	DELETE FROM tab WHERE key > 1
>
> where you meant to type
>
>	DELETE FROM tab WHERE key > 1000
>
> I suggest counseling your client to learn how to use BEGIN/ROLLBACK.
> This proposal strikes me as falling squarely within the rule about
> "design a system that even a fool can use, and only a fool will want
> to use it".

Just a theory, but couldn't a trigger be set up that would cause the query
to fail if it touches too many rows? I haven't tried it, but maybe a FOR
EACH STATEMENT trigger AFTER the event can check ROW_COUNT using GET
DIAGNOSTICS?

--
regards,
Jaime Casanova

"Programming today is a race between software engineers striving to
build bigger and better idiot-proof programs and the universe trying to
produce bigger and better idiots. So far, the universe is winning."
                                                       Richard Cook
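For the archives: a statement-level trigger in the PostgreSQL of this era had no direct way to see the triggering statement's row count, but the idea became straightforward once transition tables arrived (PostgreSQL 10+). A sketch under that assumption, with a made-up table and threshold:

```sql
-- Hypothetical guard: abort any DELETE on "accounts" that touches more
-- than 1000 rows. Requires statement-level triggers with transition
-- tables (PostgreSQL 10+), which did not exist when this was written.
CREATE OR REPLACE FUNCTION guard_mass_delete() RETURNS trigger AS $$
DECLARE
    n bigint;
BEGIN
    SELECT count(*) INTO n FROM old_rows;
    IF n > 1000 THEN
        RAISE EXCEPTION 'DELETE would remove % rows; add a WHERE clause', n;
    END IF;
    RETURN NULL;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER accounts_mass_delete_guard
    AFTER DELETE ON accounts
    REFERENCING OLD TABLE AS old_rows
    FOR EACH STATEMENT
    EXECUTE PROCEDURE guard_mass_delete();
```

Because the trigger fires AFTER the statement, the abort happens via the resulting transaction rollback, which is also what Tom's BEGIN/ROLLBACK advice relies on.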
Re: [HACKERS] [PERFORM] Sun Donated a Sun Fire T2000 to the PostgreSQL community
Arjen van der Meijden wrote:
> I can already confirm very good scalability (with our workload) on
> postgresql on that machine. We've been testing a 32thread/16G-version
> and it shows near-linear scaling when enabling 1, 2, 4, 6 and 8 cores
> (with all four threads enabled). The threads are a bit less scalable,
> but still pretty good. Enabling 1, 2 or 4 threads for each core yields
> resp 60 and 130% extra performance.

Wow, what type of workload is it? And did you do much tuning to get
near-linear scalability to 32 threads?

Regards,
-Robert
Re: [HACKERS] further meta-data in pg_stat_activity?
"Andrew Hammond" <[EMAIL PROTECTED]> writes:
> How much work would it be to implement and how valuable would people
> find the following additions to pg_stat_activity?

I won't speak to the "how valuable" bit, but as far as costs go, I think:

> 1) # of transactions committed on this connection since start
> 2) # of transactions rolled back

Trivial, we report these to the stats collector already, they're just
not summed in this particular fashion.

> 3) milliseconds used processing requests
> 4) milliseconds idle in transaction
> 5) milliseconds idle

All moderately expensive, we're talking at least two additional kernel
calls per request to get the information.

> 6) this is the n'th backend spawned since the postmaster started

Cheap on Unix, not so cheap on Windows, usefulness pretty questionable.

> 7) this is the n'th backend for the given client_addr
> 8) this is the n'th backend for the given user

Both *exceedingly* expensive --- where are you going to sum these?
The postmaster does not even have a way to count the second, because it
forks off the subprocess before receiving the connection request packet
which contains the user name.

> 9) timestamp for start of the current transaction (null if idle?)

Don't we do that already?

			regards, tom lane
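For reference, the counters Tom calls "trivial" (items 1 and 2) are already summed per database, just not per connection, in the stats collector's pg_stat_database view:

```sql
-- Per-database (not per-connection) commit/rollback counters that the
-- stats collector already maintains:
SELECT datname, xact_commit, xact_rollback
FROM pg_stat_database;
```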
[HACKERS] further meta-data in pg_stat_activity?
How much work would it be to implement and how valuable would people find
the following additions to pg_stat_activity?

1) # of transactions committed on this connection since start
2) # of transactions rolled back
3) milliseconds used processing requests
4) milliseconds idle in transaction
5) milliseconds idle
6) this is the n'th backend spawned since the postmaster started
7) this is the n'th backend for the given client_addr
8) this is the n'th backend for the given user
9) timestamp for start of the current transaction (null if idle?)

Drew
Re: [HACKERS] postgresql and process titles
On Fri, Jun 16, 2006 at 07:56:30AM -0400, Rod Taylor wrote:
>>> I did have dbt2 pretty close to functional on FreeBSD a year ago but
>>> it's probably gone back into linuxisms since then.
>>
>> :(
>>
>> I won't have the chance to work on this further for another 2 months,
>> but if you have patches I could see about picking up on them when I
>> get back.
>
> Everything has been applied to the dbt2 tree.

Cool!

Kris
Re: [HACKERS] postgresql and process titles
On Thu, Jun 15, 2006 at 11:34:52PM -0400, Rod Taylor wrote:
> On Tue, 2006-06-13 at 14:18 -0400, Kris Kennaway wrote:
>> On Tue, Jun 13, 2006 at 12:29:14PM -0500, Jim C. Nasby wrote:
>>> Unless supersmack has improved substantially, you're unlikely to find
>>> much interest. Last I heard it was a pretty brain-dead benchmark.
>>> DBT2/3 (http://sourceforge.net/projects/osdldbt) is much more
>>> realistic (based on TPC-C and TPC-H).
>>
>> Have you tried to compile this on FreeBSD? It looks like it (dbt1 at
>> least) will need a moderate amount of hacking - there are some Linux
>> assumptions in the source and the configure script makes assumptions
>> about where things are installed that cannot be overridden on the
>> commandline.
>
> I did have dbt2 pretty close to functional on FreeBSD a year ago but
> it's probably gone back into linuxisms since then.

:(

I won't have the chance to work on this further for another 2 months, but
if you have patches I could see about picking up on them when I get back.

Kris
Re: [HACKERS] [PERFORM] Sun Donated a Sun Fire T2000 to the PostgreSQL
> Tatsuo Ishii <[EMAIL PROTECTED]> writes:
>> Interesting. We (some Japanese companies including SRA OSS,
>> Inc. Japan) did some PG scalability testing using a Unisys's big 16
>> (physical) CPU machine and found PG scales up to 8 CPUs. However
>> beyond 8 CPU PG does not scale anymore. The result can be viewed at
>> "OSS iPedia" web site (http://ossipedia.ipa.go.jp). Our conclusion was
>> PG has a serious lock contention problem in the environment by
>> analyzing the oprofile result.
>
> 18% in s_lock is definitely bad :-(. Were you able to determine which
> LWLock(s) are accounting for the contention?

Yes. We were interested in that too. Some people did additional tests to
determine that. I don't have the report handy now; I will report back next
week.

> The test case seems to be spending a remarkable amount of time in LIKE
> comparisons, too. That probably is not a representative condition.

I know. I think the point is that the 18% in s_lock only appears with 12
CPUs or more.
--
Tatsuo Ishii
SRA OSS, Inc. Japan
[HACKERS] Exporting type OID macros in a cleaner fashion
This thread:
http://archives.postgresql.org/pgsql-ports/2006-06/msg5.php
points up that if a client program would like to refer to datatype OIDs by
means of the pg_type.h macros (ie, write "INT4OID" and not "23"), it's
currently forced to include postgres.h, which is not a good idea for a
number of reasons. We intend postgres.h for use by backend-side code;
frontend code ought to use postgres_fe.h, and indeed a plain libpq client
shouldn't be including either. There's way too much pollution of the
client namespace from either file.

I ran into this same problem last week when I wanted to clean up some
hard-coded OIDs in psql's print.c. I did the same thing the Qt people did,
ie, #include postgres.h, but it's a really ugly hack.

I think it'd be a good idea to fix things so that the type OID macros are
available to frontend code without having to break any rules about what to
include. I looked at whether we could sanitize catalog/pg_type.h so it
could be used in a frontend environment, but there are a couple of
obstacles there:

* We'd have to move the extern declarations for catalog/pg_type.c
  somewhere else, and there's not an obvious candidate.

* There are still the BKI macros (CATALOG, DATA, etc) which really should
  not be exported to client code ... DATA in particular seems like a
  pretty obtrusive choice of macro name.

The alternative I'm currently thinking about is to build and install an
auto-generated file comparable to fmgroids.h, containing *only* the type
OID macro #defines extracted from pg_type.h. This would require just a
trivial amount of sed hacking. I'm not entirely clear where to install
such a thing though. The fmgroids.h precedent suggests
server/utils/fmgroids.h, but if this is intended for client-side use it
shouldn't go under /server. I'm tempted to install it as "pgtypeoids.h"
at the top level of the installation include directory ... but then I'm
not clear which source directory ought to generate it.

Thoughts, objections, better ideas?

			regards, tom lane
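The "trivial amount of sed hacking" Tom mentions might look like the following. The pg_type.h excerpt here is a made-up stand-in for the real catalog header, which is much larger and formatted differently:

```shell
# Sketch of extracting only the type OID #defines out of pg_type.h into
# a client-safe header, dropping the BKI macros (CATALOG, DATA) that
# should not reach client code. The excerpt below is hypothetical.
cat > pg_type_excerpt.h <<'EOF'
CATALOG(pg_type,1247) BKI_BOOTSTRAP
DATA(insert OID = 16 ( bool ... ));
#define BOOLOID   16
DATA(insert OID = 23 ( int4 ... ));
#define INT4OID   23
EOF

# Keep only lines defining *OID macros.
sed -n '/^#define [A-Z0-9_]*OID /p' pg_type_excerpt.h > pgtypeoids.h

cat pgtypeoids.h
# prints:
# #define BOOLOID   16
# #define INT4OID   23
```

A libpq client could then include the generated pgtypeoids.h alone, without pulling in postgres.h.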
Re: [HACKERS] [PERFORM] Sun Donated a Sun Fire T2000 to the PostgreSQL
Tatsuo Ishii <[EMAIL PROTECTED]> writes:
> Interesting. We (some Japanese companies including SRA OSS,
> Inc. Japan) did some PG scalability testing using a Unisys's big 16
> (physical) CPU machine and found PG scales up to 8 CPUs. However
> beyond 8 CPU PG does not scale anymore. The result can be viewed at
> "OSS iPedia" web site (http://ossipedia.ipa.go.jp). Our conclusion was
> PG has a serious lock contention problem in the environment by
> analyzing the oprofile result.

18% in s_lock is definitely bad :-(. Were you able to determine which
LWLock(s) are accounting for the contention?

The test case seems to be spending a remarkable amount of time in LIKE
comparisons, too. That probably is not a representative condition.

			regards, tom lane
Re: [HACKERS] Curious bug in buildfarm files-changed links
Andrew Dunstan <[EMAIL PROTECTED]> writes:
> I think the simplest fix would be just to require a / before the
> filename capture:

That sounds like it would work. I was thinking you might need a
minimal-munch instead of maximal-munch quantifier before the pgsql
pattern, but that's not a good idea because someone might well be running
the buildfarm in a path that includes "pgsql". We'll just have to be sure
not to use bare "pgsql" as a subdirectory name anywhere in the CVS tree.

			regards, tom lane
Re: [HACKERS] [PERFORM] Sun Donated a Sun Fire T2000 to the PostgreSQL
> I am thrilled to inform you all that Sun has just donated a fully loaded
> T2000 system to the PostgreSQL community, and it's being set up by Corey
> Shields at OSL (osuosl.org) and should be online probably early next
> week. The system has
>
> * 8 cores, 4 hw threads/core @ 1.2 GHz. Solaris sees the system as
>   having 32 virtual CPUs, and each can be enabled or disabled individually
> * 32 GB of DDR2 SDRAM memory
> * 2 @ 73GB internal SAS drives (1 RPM)
> * 4 Gigabit ethernet ports
>
> For complete specs, visit
> http://www.sun.com/servers/coolthreads/t2000/specifications.jsp
>
> I think this system is well suited for PG scalability testing, among
> others. We did an informal test using an internal OLTP benchmark and
> noticed that PG can scale to around 8 CPUs. Would be really cool if all
> 32 virtual CPUs can be utilized!!!

Interesting. We (some Japanese companies including SRA OSS, Inc. Japan)
did some PG scalability testing using a Unisys's big 16 (physical) CPU
machine and found PG scales up to 8 CPUs. However, beyond 8 CPUs PG does
not scale anymore. The result can be viewed at the "OSS iPedia" web site
(http://ossipedia.ipa.go.jp). Our conclusion, based on analyzing the
oprofile result, was that PG has a serious lock contention problem in
that environment.

You can take a look at the detailed report at:
http://ossipedia.ipa.go.jp/capacity/EV0604210111/
(unfortunately only Japanese content is available at the moment; please
use an automatic translation service)

Evaluation environment was:
	PostgreSQL 8.1.2
	OSDL DBT-1 2.1
	Miracle Linux 4.0
	Unisys ES700 Xeon 2.8GHz CPU x 16, Mem 16GB (HT off)
--
Tatsuo Ishii
SRA OSS, Inc. Japan
Re: [HACKERS] Curious bug in buildfarm files-changed links
Tom Lane wrote:
> Observe the list of "Files changed this run" at
> http://www.pgbuildfarm.org/cgi-bin/show_log.pl?nm=gazelle&dt=2006-06-17%2000:30:01
> The links to the core-system files are OK but the plpgsql files are not
> correct, eg
>	pgsql/src/gram.y 1.93
> where it should be
>	pgsql/src/pl/plpgsql/src/gram.y 1.93
> I'm betting on an overly-greedy regex ...

Good catch. As usual you are spot on. The offending line is this one,
which is parsing CVS status output (if you don't read perl just close
your eyes):

	s!.*Repository revision:.(\d+(\.\d+)+).*(pgsql/.*),v.*!$3 $1!;

I think the simplest fix would be just to require a / before the filename
capture:

	s!.*Repository revision:.(\d+(\.\d+)+).*/(pgsql/.*),v.*!$3 $1!;

cheers

andrew
[HACKERS] Curious bug in buildfarm files-changed links
Observe the list of "Files changed this run" at
http://www.pgbuildfarm.org/cgi-bin/show_log.pl?nm=gazelle&dt=2006-06-17%2000:30:01

The links to the core-system files are OK but the plpgsql files are not
correct, eg
	pgsql/src/gram.y 1.93
where it should be
	pgsql/src/pl/plpgsql/src/gram.y 1.93

I'm betting on an overly-greedy regex ...

			regards, tom lane
Re: [HACKERS] libpq Describe Extension [WAS: Bytea and perl]
Bruce Momjian writes:
> Volkan YAZICI wrote:
>> The problem is, AFAICS, it's not possible to distinguish between a tuple
>> returning query (T, ..., C, Z or T, E) and a description of a portal (T,
>> Z). Therefore, I've created a global flag (parsing_row_desc) which is
>> turned on when we receive a 'T' and turned off if we receive a 'C' or
>> 'E'. It's a kind of ugly method but the only solution I could come up
>> with.

> The problem with this solution is that it is not thread-safe. Perhaps
> you can use a per-PGconn boolean?

The whole thing sounds like brute force to me. Shouldn't you be adding
states to enum PGQueryClass, if you need to track what sort of Describe
you're doing?

			regards, tom lane
Re: [HACKERS] MultiXacts & WAL
paolo romano <[EMAIL PROTECTED]> writes:
> The point i am missing is the need to be able to completely recover
> multixacts offsets and members data. These carry information about
> current transactions holding shared locks on db tuples, which should
> not be essential for recovery purposes.

This might be optimizable if we want to assume that multixacts will never
be used for any purpose except holding locks, but that seems a bit
short-sighted. Is there any actually significant advantage to not logging
this information?

			regards, tom lane
[HACKERS] MultiXacts & WAL
I am working on a possible extension of PostgreSQL MVCC to support very
timely failure masking in the context of three-tier applications, so I am
currently studying PostgreSQL internals... I am wondering what the reasons
are for persisting both the MultiXactIds and the corresponding OFFSETs and
MEMBERs.

In the documentation at the top of multixact.c you can find the following
statement:

"...This allows us to completely rebuild the data entered since the last
checkpoint during XLOG replay..."

I can see the need to persist (not eagerly) MultiXactIds to avoid
wraparound. Essentially, mass storage is used to extend the limited
capacity of the SLRU data structures in shared memory.

The point I am missing is the need to be able to completely recover
multixact offsets and members data. These carry information about current
transactions holding shared locks on db tuples, which should not be
essential for recovery purposes. After a crash you want to recover the
content of your data, not the presence of shared locks on any tuple.
AFAICS, this seems true both for committed/aborted transactions (which,
being concluded, no longer care about any shared locks they may have
held), and for prepared transactions (which only need to recover their
exclusive locks).

I have tried to dig around the comments within the main multixact.c
functions and I have walked through this comment (CreateMultiXactId()):

"...The only way for the MXID to be referenced from any data page is for
heap_lock_tuple() to have put it there, and heap_lock_tuple() generates an
XLOG record that must follow ours..."

But still I cannot see the need to recover complete shared lock info (i.e.
not only the MultiXactIds but also the corresponding registered
TransactionIds that were holding the lock)... May be this is needed to
support savepoints/subtransactions? Or is it something else that I am
missing?

Thanks for your precious help!

Paolo
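For readers following along: a MultiXactId is created when more than one transaction holds a shared lock on the same tuple at once, since the tuple's single xmax field cannot name two lockers. A minimal illustration, assuming a hypothetical accounts table and two concurrent sessions:

```sql
-- Session 1
BEGIN;
SELECT * FROM accounts WHERE id = 1 FOR SHARE;

-- Session 2, concurrently: the tuple's xmax must now represent both
-- lockers, so a MultiXactId standing for the pair of transaction IDs
-- is created and stored in xmax instead of a plain TransactionId.
BEGIN;
SELECT * FROM accounts WHERE id = 1 FOR SHARE;
```

The offsets/members data under discussion is exactly the mapping from that MultiXactId back to the member transaction IDs.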
[HACKERS] Register for an SPI Contributing Membership
All,

As many of you know, PostgreSQL is now fundraising via Software in the
Public Interest. One of the things this means is that PostgreSQL
contributors are eligible for SPI Contributing Membership, which allows
you to vote in SPI elections.

If your name is listed here:
http://www.postgresql.org/developer/bios
... you're automatically eligible. (If your name should be listed there
but isn't, take it up with pgsql-www.)

You can apply here:
https://members.spi-inc.org/newnm.php

Note that applying for membership is a two-step process. You need to get a
regular membership, then apply for a Contributing Membership. The latter
is required for you to vote in SPI elections.

There is an SPI election on July 20th. I'm asking PostgreSQL people to
register, since I'm running for the SPI Board in order to keep a closer
eye on PostgreSQL's money.

--
--Josh

Josh Berkus
PostgreSQL @ Sun
San Francisco
Re: [HACKERS] [PERFORM] Sun Donated a Sun Fire T2000 to the PostgreSQL community
Arjen,

> I can already confirm very good scalability (with our workload) on
> postgresql on that machine. We've been testing a 32thread/16G-version
> and it shows near-linear scaling when enabling 1, 2, 4, 6 and 8 cores
> (with all four threads enabled).

Keen. We're trying to keep the linear scaling going up to 32 cores of
course (which doesn't happen, presently). Would you be interested in
helping us troubleshoot some of the performance issues?

--
--Josh

Josh Berkus
PostgreSQL @ Sun
San Francisco
Re: [HACKERS] [COMMITTERS] pgsql: Add STRICT to PL/pgSQL SELECT INTO, so exceptions are thrown if
Bruce Momjian writes:
> Tom Lane wrote:
>> Actually ... does Oracle have SQLSTATEs assigned to these errors?
>> If so, maybe we should use theirs. I had the idea they were still
>> stuck on non-spec-compatible error numbers, though.

> Anyone?

I did some googling and found one page that says no_data_found is
ORA-01403 and too_many_rows is ORA-01422. Another page said that
ORA-01403 maps to SQLSTATE 02000 (which is exactly what we need to avoid
to be spec-compliant) and they don't give a mapping at all for ORA-01422
:-(. So once again, Oracle is totally useless as a precedent for SQLSTATE
choices.

I'll go with a couple of codes in the P0 class for the moment. If anyone
has a better idea, it won't be hard to change.

			regards, tom lane
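For context, the P0-class codes chosen here are what later releases expose as the no_data_found and too_many_rows conditions, catchable from PL/pgSQL by their Oracle-style names. A sketch against a hypothetical accounts table:

```sql
CREATE OR REPLACE FUNCTION lookup_balance(acct int) RETURNS numeric AS $$
DECLARE
    b numeric;
BEGIN
    -- STRICT makes SELECT INTO raise an error unless exactly one row
    -- is returned, matching Oracle PL/SQL semantics.
    SELECT balance INTO STRICT b FROM accounts WHERE id = acct;
    RETURN b;
EXCEPTION
    WHEN no_data_found THEN
        RAISE EXCEPTION 'no account %', acct;
    WHEN too_many_rows THEN
        RAISE EXCEPTION 'duplicate rows for account %', acct;
END;
$$ LANGUAGE plpgsql;
```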
Re: [HACKERS] [PERFORM] Sun Donated a Sun Fire T2000 to the PostgreSQL community
On 16-6-2006 17:18, Robert Lor wrote:
> I think this system is well suited for PG scalability testing, among
> others. We did an informal test using an internal OLTP benchmark and
> noticed that PG can scale to around 8 CPUs. Would be really cool if all
> 32 virtual CPUs can be utilized!!!

I can already confirm very good scalability (with our workload) on
postgresql on that machine. We've been testing a 32thread/16G-version and
it shows near-linear scaling when enabling 1, 2, 4, 6 and 8 cores (with
all four threads enabled).

The threads are a bit less scalable, but still pretty good. Enabling 2 or
4 threads per core (versus 1) yields respectively 60% and 130% extra
performance.

Best regards,

Arjen
Re: [HACKERS] [COMMITTERS] pgsql: Add STRICT to PL/pgSQL SELECT INTO, so exceptions
Tom Lane wrote:
> Bruce Momjian writes:
>> If we go with that how does the caller check between not found and too
>> many? And if we go with Oracle names, I need different codes to match
>> with the two Oracle names.
>
> I think we should just go with two new codes and use the Oracle names
> for them. One remaining question: shall we assign codes in class 21
> (Cardinality Violation) or class P0 (PL/pgSQL Error)? If you think
> these are likely to be used in other places then class 21 seems
> reasonable, but if we are thinking of them as being Oracle compatibility
> hacks then I'd lean to class P0.

Oracle-only, I would think, but I am no Oracle expert (never used it,
actually (a badge of honor?)).

> Actually ... does Oracle have SQLSTATEs assigned to these errors?
> If so, maybe we should use theirs. I had the idea they were still
> stuck on non-spec-compatible error numbers, though.

Anyone?

--
Bruce Momjian   http://candle.pha.pa.us
EnterpriseDB    http://www.enterprisedb.com

+ If your life is a hard drive, Christ can be your backup. +
Re: [HACKERS] [COMMITTERS] pgsql: Add STRICT to PL/pgSQL SELECT INTO, so exceptions are thrown if
Bruce Momjian writes:
> If we go with that how does the caller check between not found and too
> many? And if we go with Oracle names, I need different codes to match
> with the two Oracle names.

I think we should just go with two new codes and use the Oracle names for
them. One remaining question: shall we assign codes in class 21
(Cardinality Violation) or class P0 (PL/pgSQL Error)? If you think these
are likely to be used in other places then class 21 seems reasonable, but
if we are thinking of them as being Oracle compatibility hacks then I'd
lean to class P0.

Actually ... does Oracle have SQLSTATEs assigned to these errors?
If so, maybe we should use theirs. I had the idea they were still stuck
on non-spec-compatible error numbers, though.

			regards, tom lane
Re: [HACKERS] bug? non working casts for domain
Added to TODO list, with URL.

---------------------------------------------------------------------------

Fabien COELHO wrote:
> Dear PostgreSQL developer.
>
> Although it is allowed to create a cast for a domain, it seems that there
> is no way to trigger it. You can find attached an sql script to
> illustrate the issue with postgresql 8.1.3. The create cast and create
> domain documentations do not seem to discuss this point.
>
> ISTM that it is a pg bug. Indeed, either
>
> (1) the create cast should be rejected if it is not allowed for domains,
>
> or
>
> (2) the function should be triggered by explicit casts to the domain.
>
> Have a nice day,
>
> --
> Fabien.

[ Attachment, skipping... ]

--
Bruce Momjian   http://candle.pha.pa.us
EnterpriseDB    http://www.enterprisedb.com

+ If your life is a hard drive, Christ can be your backup. +
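The attached script was dropped by the archive; a minimal reconstruction of the kind of case Fabien describes (names invented) might look like:

```sql
-- A cast to a domain that, per the report, is never invoked.
CREATE DOMAIN us_zip AS text;

CREATE FUNCTION int_to_zip(int) RETURNS us_zip AS
    $$ SELECT lpad($1::text, 5, '0')::us_zip $$ LANGUAGE sql;

CREATE CAST (int AS us_zip) WITH FUNCTION int_to_zip(int);

-- One might expect '00042' here, but per the report the domain's cast
-- function is not consulted; the cast is resolved through the domain's
-- base type instead.
SELECT 42::us_zip;
```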
Re: [HACKERS] [COMMITTERS] pgsql: Add STRICT to PL/pgSQL SELECT INTO, so exceptions
Tom Lane wrote:
> Bruce Momjian writes:
>> Tom Lane wrote:
>>> Plan B is to invent new errcodes to match the Oracle spellings. If
>>> that's what we want to do, it's not that hard.
>>
>> We could use:
>>	#define ERRCODE_DATA_EXCEPTION MAKE_SQLSTATE('2','2',
>> or
>>	#define ERRCODE_ERROR_IN_ASSIGNMENT MAKE_SQLSTATE('2','2',
>
> Those are both mighty generic (in fact DATA_EXCEPTION is a class code
> not a specific error). If we want to stick to existing errcodes I think
> CARDINALITY_VIOLATION is the only reasonable choice.

If we go with that how does the caller check between not found and too
many? And if we go with Oracle names, I need different codes to match
with the two Oracle names.

--
Bruce Momjian   http://candle.pha.pa.us
EnterpriseDB    http://www.enterprisedb.com

+ If your life is a hard drive, Christ can be your backup. +
Re: [HACKERS] [COMMITTERS] pgsql: Add STRICT to PL/pgSQL SELECT INTO, so exceptions are thrown if
Bruce Momjian writes:
> Tom Lane wrote:
>> Plan B is to invent new errcodes to match the Oracle spellings. If
>> that's what we want to do, it's not that hard.

> We could use:
>	#define ERRCODE_DATA_EXCEPTION MAKE_SQLSTATE('2','2',
> or
>	#define ERRCODE_ERROR_IN_ASSIGNMENT MAKE_SQLSTATE('2','2',

Those are both mighty generic (in fact DATA_EXCEPTION is a class code,
not a specific error). If we want to stick to existing errcodes I think
CARDINALITY_VIOLATION is the only reasonable choice.

			regards, tom lane
Re: [HACKERS] pg_dump -Ft failed on Windows XP
Someday we can move this to /port, but for now, let's get it into CVS.

Your patch has been added to the PostgreSQL unapplied patches list at:
http://momjian.postgresql.org/cgi-bin/pgpatches

It will be applied as soon as one of the PostgreSQL committers reviews and
approves it.

---------------------------------------------------------------------------

Zeugswetter Andreas DCP SD wrote:
>>>> Apparently it won't work at all if TMP isn't set?
>>>
>>> I'm not *too* concerned about that, since TMP is normally set by the OS
>>> itself. There's one set in the "system environment" (to c:\windows\temp
>>> or whatever) and then it's overridden by one set by the OS when it
>>> loads a user profile.
>>
>> OK, then maybe not having it would be equivalent to /tmp-not-writable
>> on Unix, ie, admin error.
>>
>>> Also to the point, what would you fall back to?
>>
>> Current directory maybe?
>
> It tries \ (tested on Win 2000) if the dir argument is NULL and TMP is
> not set. But TMP is usually set.
>
> Attached is a working version not yet adapted to port/.
> - memory leak fixed
> - use _tmpname and _fdopen, not the compatibility tmpname and fdopen
>   (imho only cosmetic)
> - EACCES fixed (Win2000 needs _S_IREAD | _S_IWRITE or fails with EACCES,
>   even as Admin)
> - I suggest adding a prefix pg_temp_ (for leftover temp files after a
>   crash; the name I get is then usually pg_temp_2)
>
> Andreas

Content-Description: pg_dump_tempfile.patch.txt
[ Attachment, skipping... ]

--
Bruce Momjian   http://candle.pha.pa.us
EnterpriseDB    http://www.enterprisedb.com

+ If your life is a hard drive, Christ can be your backup. +
Re: [HACKERS] [COMMITTERS] pgsql: Add STRICT to PL/pgSQL SELECT INTO, so exceptions
Tom Lane wrote:
> Bruce Momjian writes:
>> Tom Lane wrote:
>>> Do we actually need different error codes for too few and too many
>>> rows? It looks to me like the only relevant standard error condition
>>> is CARDINALITY_VIOLATION, so either we throw CARDINALITY_VIOLATION for
>>> both cases or we invent nonstandard codes.
>>
>> We could, and then suggest using ROW_COUNT to determine if there were
>> too few rows, or too many.
>
> SELECT INTO doesn't set ROW_COUNT ... but if we change the code to set
> FOUND before throwing the error, it'd work to tell people to check
> FOUND.
>
> (Thinks a bit...) Actually not, because if the exception catcher isn't
> in the same function as the SELECT INTO, it'll be looking at the wrong
> FOUND variable. ROW_COUNT same problem, even if we were setting it.
>
> Plan B is to invent new errcodes to match the Oracle spellings. If
> that's what we want to do, it's not that hard.

We could use:

	#define ERRCODE_DATA_EXCEPTION MAKE_SQLSTATE('2','2',

or

	#define ERRCODE_ERROR_IN_ASSIGNMENT MAKE_SQLSTATE('2','2',

--
Bruce Momjian   http://candle.pha.pa.us
EnterpriseDB    http://www.enterprisedb.com

+ If your life is a hard drive, Christ can be your backup. +
Re: [HACKERS] [COMMITTERS] pgsql: Add STRICT to PL/pgSQL SELECT INTO, so exceptions are thrown if
Bruce Momjian writes:
> Tom Lane wrote:
>> Do we actually need different error codes for too few and too many rows?
>> It looks to me like the only relevant standard error condition is
>> CARDINALITY_VIOLATION, so either we throw CARDINALITY_VIOLATION for both
>> cases or we invent nonstandard codes.

> We could, and then suggest using ROW_COUNT to determine if there were
> too few rows, or too many.

SELECT INTO doesn't set ROW_COUNT ... but if we change the code to set
FOUND before throwing the error, it'd work to tell people to check FOUND.

(Thinks a bit...) Actually not, because if the exception catcher isn't in
the same function as the SELECT INTO, it'll be looking at the wrong FOUND
variable. ROW_COUNT same problem, even if we were setting it.

Plan B is to invent new errcodes to match the Oracle spellings. If that's
what we want to do, it's not that hard.

			regards, tom lane
Re: [HACKERS] libpq Describe Extension [WAS: Bytea and perl]
Volkan YAZICI wrote: > To mention about the followed implementation, it needed some hack on > pqParseInput3() code to make it understand if a received message is > a response to a Describe ('D') query or to another tuple returning > query. To summarize the problem, there are two possible forms of a 'D' > response: > > 1. Description of a prepared statement: t, T, Z > 2. Description of a portal: T, Z > > The problem is, AFAICS, it's not possible to distinguish between a tuple > returning query (T, ..., C, Z or T, E) and a description of a portal (T, > Z). Therefore, I've created a global flag (parsing_row_desc) which is > turned on when we receive a 'T' and turned off if we receive a 'C' or > 'E'. It's a kind of ugly method but the only solution I could come up > with. The problem with this solution is that it is not thread-safe. Perhaps you can use a per-PGconn boolean? -- Bruce Momjian http://candle.pha.pa.us EnterpriseDB http://www.enterprisedb.com + If your life is a hard drive, Christ can be your backup. + ---(end of broadcast)--- TIP 5: don't forget to increase your free space map settings
Re: [HACKERS] [COMMITTERS] pgsql: Add STRICT to PL/pgSQL SELECT INTO, so exceptions
Tom Lane wrote: > [EMAIL PROTECTED] (Bruce Momjian) writes: > > Add STRICT to PL/pgSQL SELECT INTO, so exceptions are thrown if more or > > less than one row is returned by the SELECT, for Oracle PL/SQL > > compatibility. > > I've got a couple of problems with the error codes used by this patch. > In the first place, you can't arbitrarily assign names to error > conditions that are different from the standard spelling (see > errcodes.sgml for why not: the standard spellings are what are > documented). In the second place, the spec clearly says that class 02 I saw this at the top of plerrcodes.h: * Eventually this header file should be auto-generated from errcodes.h * with some sort of sed hackery, but no time for that now. It's likely * that an exact mapping will not be what's wanted anyhow ... so I figured we were supposed to map them. > is warning conditions, not errors, so using ERRCODE_NO_DATA for an error > is inappropriate. Oh, I see that now: /* Class 02 - No Data --- this is also a warning class per SQL99 */ /* (do not use this class for failure conditions!) */ #define ERRCODE_NO_DATA MAKE_SQLSTATE('0','2', '0','0','0') > Where did you get those names from ... were they picked out of the air, > or were they intended to be Oracle-compatible, or what? Surely we I pulled this from the Oracle documentation that I quoted earlier: > > When you use a SELECT INTO statement without the BULK COLLECT clause, it > > should return only one row. If it returns more than one row, PL/SQL > > raises the predefined exception TOO_MANY_ROWS. > > > > However, if no rows are returned, PL/SQL raises NO_DATA_FOUND unless the > > SELECT statement called a SQL aggregate function such as AVG or SUM. > > (SQL aggregate functions always return a value or a null. So, a SELECT > > INTO statement that calls an aggregate function never raises > > NO_DATA_FOUND.) Are those both errors in Oracle? I assumed so. 
> aren't trying to be Oracle-compatible at that level of detail (else > we've doubtless got a huge number of other cases where we throw a > different error than they do). I thought it was nice to get as close as possible, but using a warning code is clearly bad. > Do we actually need different error codes for too few and too many rows? > It looks to me like the only relevant standard error condition is > CARDINALITY_VIOLATION, so either we throw CARDINALITY_VIOLATION for both > cases or we invent nonstandard codes. We could, and then suggest using ROW_COUNT to determine if there were too few rows, or too many. -- Bruce Momjian http://candle.pha.pa.us EnterpriseDBhttp://www.enterprisedb.com + If your life is a hard drive, Christ can be your backup. + ---(end of broadcast)--- TIP 2: Don't 'kill -9' the postmaster
Re: [HACKERS] [COMMITTERS] pgsql: Add STRICT to PL/pgSQL SELECT INTO, so exceptions are thrown if
[EMAIL PROTECTED] (Bruce Momjian) writes: > Add STRICT to PL/pgSQL SELECT INTO, so exceptions are thrown if more or > less than one row is returned by the SELECT, for Oracle PL/SQL > compatibility. I've got a couple of problems with the error codes used by this patch. In the first place, you can't arbitrarily assign names to error conditions that are different from the standard spelling (see errcodes.sgml for why not: the standard spellings are what are documented). In the second place, the spec clearly says that class 02 is warning conditions, not errors, so using ERRCODE_NO_DATA for an error is inappropriate. Where did you get those names from ... were they picked out of the air, or were they intended to be Oracle-compatible, or what? Surely we aren't trying to be Oracle-compatible at that level of detail (else we've doubtless got a huge number of other cases where we throw a different error than they do). Do we actually need different error codes for too few and too many rows? It looks to me like the only relevant standard error condition is CARDINALITY_VIOLATION, so either we throw CARDINALITY_VIOLATION for both cases or we invent nonstandard codes. regards, tom lane ---(end of broadcast)--- TIP 1: if posting/reading through Usenet, please send an appropriate subscribe-nomail command to [EMAIL PROTECTED] so that your message can get through to the mailing list cleanly
Re: [HACKERS] Proposal for updatable views
URL added to TODO. I assume there has been no more progress on this patch. --- Bernd Helmle wrote: > Hi folks, > > Please find attached a patch that implements SQL92-compatible updatable > views. The patch introduces new semantics into the rule system: implicit > and explicit rules. Implicit rules are created to implement updatable views: > > _INSERT > _NOTHING_INSERT (unconditional DO INSTEAD NOTHING rule) > _DELETE > _NOTHING_DELETE (unconditional DO INSTEAD NOTHING rule) > _UPDATE > _NOTHING_UPDATE (unconditional DO INSTEAD NOTHING rule) > > These rules are marked 'implicit' in pg_rewrite, and the rewriter is taught to > handle them differently, depending on whether they are created with a rule > condition (a view's CHECK OPTION) or not. The CHECK OPTION itself is > implemented with a new system function and a conditional rule that > evaluates the view's WHERE condition (pg_view_update_error()). > > The supported syntax is > > CREATE VIEW foo AS SELECT ... [WITH [LOCAL | CASCADED] CHECK OPTION]; > > The LOCAL and CASCADED keywords are optional when a CHECK OPTION is > specified, the default is CASCADED (this syntax creates a shift/reduce > conflict in the grammar file i don't know how to fix). > > If a user wants his own rules with CREATE RULE to be created, the implicit > rule gets dropped, depending on what action the user selects. > > The patch introduces support for pg_dump as well. > > Please note that the patch isn't complete yet, but it seems it's necessary > to discuss its implementation on -hackers now. > > Bernd [ Attachment, skipping... ] > > ---(end of broadcast)--- > TIP 4: Have you searched our list archives? > >http://archives.postgresql.org -- Bruce Momjian http://candle.pha.pa.us EnterpriseDB http://www.enterprisedb.com + If your life is a hard drive, Christ can be your backup. + ---(end of broadcast)--- TIP 4: Have you searched our list archives? http://archives.postgresql.org
Re: [HACKERS] Preventing DELETE and UPDATE without a WHERE clause?
> Chris Campbell <[EMAIL PROTECTED]> writes: >> I heard an interesting feature request today: preventing the >> execution of a DELETE or UPDATE query that does not have a WHERE clause. > > These syntaxes are required by the SQL spec. Furthermore, it's easy > to imagine far-more-probable cases in which the system wouldn't detect > that you'd made a mistake, eg > > DELETE FROM tab WHERE key > 1 > > where you meant to type > > DELETE FROM tab WHERE key > 1000 > > I suggest counseling your client to learn how to use BEGIN/ROLLBACK. > This proposal strikes me as falling squarely within the rule about > "design a system that even a fool can use, and only a fool will want > to use it". > Just a theory, couldn't a trigger be set up that would cause the query to tank if it touches too many rows? ---(end of broadcast)--- TIP 4: Have you searched our list archives? http://archives.postgresql.org
Re: [HACKERS] Test request for Stats collector performance improvement
Bruce Momjian wrote: > Stefan Kaltenbrunner wrote: >> Bruce Momjian wrote: >>> OK, based on reports I have seen, generally stats_query_string adds 50% >>> to the total runtime of a "SELECT 1" query, and the patch reduces the >>> overhead to 25%. >> that is actually not true for both of the platforms (a slow OpenBSD >> 3.9/x86 and a very fast Linux/x86_64) I tested on. Both of them show >> virtually no improvement with the patch and even worse, it causes >> considerable (negative) variance on at least the Linux box. > > I see the results I suggested on OpenBSD that you reported. > >> OpenBSD 3.9-current/x86: >> >> without stats: >> 0m6.79s real 0m1.56s user 0m1.12s system >> >> -HEAD + stats: >> 0m10.44s real 0m2.26s user 0m1.22s system >> >> -HEAD + stats + patch: >> 0m10.68s real 0m2.16s user 0m1.36s system yep those are very stable even over a large number of runs > > and I got similar results reported from a Debian: > > Linux 2.6.16 on a single processor HT 2.8Ghz Pentium compiled > with gcc 4.0.4. > > > > real0m3.306s > > > real0m4.905s > > > real0m4.448s > > I am unclear on the cause for the widely varying results you saw in > Debian. > I can reproduce the widely varying results on a number of x86 and x86_64 based Linux boxes here (Debian, Fedora and CentOS) though I cannot reproduce it on a Fedora core 5/ppc box. All the x86 boxes are SMP - while the ppc one is not - that might have some influence on the results. Stefan ---(end of broadcast)--- TIP 5: don't forget to increase your free space map settings
Re: [HACKERS] Sun Donated a Sun Fire T2000 to the PostgreSQL community
Folks, > I am thrilled to inform you all that Sun has just donated a fully loaded > T2000 system to the PostgreSQL community, and it's being setup by Corey > Shields at OSL (osuosl.org) and should be online probably early next > week. The system has So this system will be hosted by Open Source Lab in Oregon. It's going to be "donated" to Software In the Public Interest, who will own it for the PostgreSQL fund. We'll want to figure out a scheduling system to schedule performance and compatibility testing on this machine; I'm not sure exactly how that will work. Suggestions welcome. As a warning, Gavin Sherry and I have a bunch of pending tests already to run. First thing as soon as I have a login, of course, is to set up a Buildfarm instance. -- --Josh Josh Berkus PostgreSQL @ Sun San Francisco ---(end of broadcast)--- TIP 9: In versions below 8.0, the planner will ignore your desire to choose an index scan if your joining column's datatypes do not match
Re: [HACKERS] Ranges for well-ordered types
On Wed, Jun 14, 2006 at 15:47:16 +0900, Michael Glaesemann <[EMAIL PROTECTED]> wrote: > > On Jun 13, 2006, at 13:25 , Bruno Wolff III wrote: > > >Date ranges are really closed open as well (as finite sets of > >isolated points > >are both open and closed). The only oddity would be that the date > >used to > >indicate the open end of the range might not be what the user expects. > > I think it's definitely a matter of interpretation. [2006-01-01, > 2006-12-31] and [2006-01-01, 2007-01-01) both include the same days. > Who's to say which is the "real" representation? For all practical They are both real. In part my point was that the reason the closed, closed form works well for overlap checking is because the sets are also closed, open and behave like that as well. (Though the user visible names are different.) > purposes (i.e., what can be represented within the database) > [2006-01-01 00:00:00+0, 2006-12-31 23:59:59] and [2006-01-01 > 00:00:00, 2007-01-01 00:00:00+0] represent the same timestamp(0) with > time zone ranges as well. While one might idealize time to be > continuous, as far as I know there isn't a way to represent time that > way in a computer, at the very least, not in PostgreSQL. Which is a good reason to use the closed-open definition. Then you don't have to worry about whether postgres has been built with integer timestamps or the details of the floating point hardware your database is running on. > And for the very reason that it might not be what the user expects, > if there's a way to convert between closed-open and closed-closed as > appropriate, I think it makes it much more user-friendly to do so. For > example, the closed-closed representation is equivalent to what > BETWEEN does. It would be very nice to be able to provide something > equivalent with ranges. I don't think it is unreasonable to have a different external representation for date ranges and timestamp ranges. It isn't consistent, but I think it is more readily understandable.
Internally, it will probably be better to use closed-open for both to keep the code consistent, to allow for reuse and better understandability at that level. > As for the successor function itself: Any "exact" datatype, such as > timestamp (at least with --enable-integer-datetimes), date, integer, > or numeric, has some built-in precision anyway and a successor > function follows quite directly from that precision. I don't see that > as problematic or even very difficult. The successor function for timestamp when not using integer datetimes is going to depend on the underlying hardware. I think that will be a problem unless you are planning to force the use of integer datetimes. I don't think the project is ready for that yet, though in the long run I think it is a better way to go. ---(end of broadcast)--- TIP 1: if posting/reading through Usenet, please send an appropriate subscribe-nomail command to [EMAIL PROTECTED] so that your message can get through to the mailing list cleanly
Re: [HACKERS] Test request for Stats collector performance improvement
Stefan Kaltenbrunner wrote: > Bruce Momjian wrote: > > OK, based on reports I have seen, generally stats_query_string adds 50% > > to the total runtime of a "SELECT 1" query, and the patch reduces the > > overhead to 25%. > > that is actually not true for both of the platforms (a slow OpenBSD > 3.9/x86 and a very fast Linux/x86_64) I tested on. Both of them show > virtually no improvement with the patch and even worse, it causes > considerable (negative) variance on at least the Linux box. I see the results I suggested on OpenBSD that you reported. > OpenBSD 3.9-current/x86: > > without stats: > 0m6.79s real 0m1.56s user 0m1.12s system > > -HEAD + stats: > 0m10.44s real 0m2.26s user 0m1.22s system > > -HEAD + stats + patch: > 0m10.68s real 0m2.16s user 0m1.36s system and I got similar results reported from a Debian: Linux 2.6.16 on a single processor HT 2.8Ghz Pentium compiled with gcc 4.0.4. > > real0m3.306s > > real0m4.905s > > real0m4.448s I am unclear on the cause for the widely varying results you saw in Debian. -- Bruce Momjian http://candle.pha.pa.us EnterpriseDB http://www.enterprisedb.com + If your life is a hard drive, Christ can be your backup. + ---(end of broadcast)--- TIP 4: Have you searched our list archives? http://archives.postgresql.org
Re: [HACKERS] Test request for Stats collector performance improvement
Bruce Momjian wrote: > OK, based on reports I have seen, generally stats_query_string adds 50% > to the total runtime of a "SELECT 1" query, and the patch reduces the > overhead to 25%. that is actually not true for both of the platforms (a slow OpenBSD 3.9/x86 and a very fast Linux/x86_64) I tested on. Both of them show virtually no improvement with the patch and even worse, it causes considerable (negative) variance on at least the Linux box. > > However, that 25% is still much too large. Consider that "SELECT 1" has > to travel from psql to the server, go through the > parser/optimizer/executor, and then return, it is clearly wrong that the > stats_query_string performance hit should be measurable. > > I am actually surprised that so few people in the community are > concerned about this. While we have lots of people studying large > queries, these small queries should also get attention from a > performance perspective. > > I have created a new test that also turns off writing of the stats file. > This will not pass regression tests, but it will show the stats write > overhead. will try to run those too in a few. Stefan ---(end of broadcast)--- TIP 5: don't forget to increase your free space map settings
Re: [HACKERS] Test request for Stats collector performance improvement
OK, based on reports I have seen, generally stats_query_string adds 50% to the total runtime of a "SELECT 1" query, and the patch reduces the overhead to 25%. However, that 25% is still much too large. Consider that "SELECT 1" has to travel from psql to the server, go through the parser/optimizer/executor, and then return, it is clearly wrong that the stats_query_string performance hit should be measurable. I am actually surprised that so few people in the community are concerned about this. While we have lots of people studying large queries, these small queries should also get attention from a performance perspective. I have created a new test that also turns off writing of the stats file. This will not pass regression tests, but it will show the stats write overhead. Updated test to be run: --- 1) Run this script and record the time reported: ftp://candle.pha.pa.us/pub/postgresql/mypatches/stat.script It should take only a few seconds. 2) Modify postgresql.conf: stats_command_string = on and reload the server. Do "SELECT * FROM pg_stat_activity;" to verify the command string is enabled. You should see your query in the "current query" column. 3) Rerun the stat.script again and record the time. 4) Apply this patch to CVS HEAD: ftp://candle.pha.pa.us/pub/postgresql/mypatches/stat.nobuffer 5) Run the stat.script again and record the time. 6) Revert the patch and apply this patch to CVS HEAD: ftp://candle.pha.pa.us/pub/postgresql/mypatches/stat.nobuffer_nowrite 7) Run the stat.script again and record the time. 8) Report the four results and your platform via email to [EMAIL PROTECTED] Label times: stats_command_string = off stats_command_string = on stat.nobuffer patch stat.nobuffer_nowrite patch --- Qingqing Zhou wrote: > > "Bruce Momjian" wrote > > > > Any idea why there is such a variance in the result? The second run > > looks quite slow. > > > > No luck so far. 
It is quite repeatable on my machine -- running times which > show a long execution time: 2, 11, 14, 21 ... But when I do strace, the > weirdness disappeared totally. Have we seen any strange things like this > before? > > Regards, > Qingqing > > > > ---(end of broadcast)--- > TIP 4: Have you searched our list archives? > >http://archives.postgresql.org > -- Bruce Momjian http://candle.pha.pa.us EnterpriseDB http://www.enterprisedb.com + If your life is a hard drive, Christ can be your backup. + ---(end of broadcast)--- TIP 9: In versions below 8.0, the planner will ignore your desire to choose an index scan if your joining column's datatypes do not match
[HACKERS] Sun Donated a Sun Fire T2000 to the PostgreSQL community
I am thrilled to inform you all that Sun has just donated a fully loaded T2000 system to the PostgreSQL community, and it's being setup by Corey Shields at OSL (osuosl.org) and should be online probably early next week. The system has * 8 cores, 4 hw threads/core @ 1.2 GHz. Solaris sees the system as having 32 virtual CPUs, and each can be enabled or disabled individually * 32 GB of DDR2 SDRAM memory * 2 @ 73GB internal SAS drives (1 RPM) * 4 Gigabit ethernet ports For complete spec, visit http://www.sun.com/servers/coolthreads/t2000/specifications.jsp I think this system is well suited for PG scalability testing, among others. We did an informal test using an internal OLTP benchmark and noticed that PG can scale to around 8 CPUs. Would be really cool if all 32 virtual CPUs can be utilized!!! Anyways, if you need to access the system for testing purposes, please contact Josh Berkus. Regards, Robert Lor Sun Microsystems, Inc. 01-510-574-7189 ---(end of broadcast)--- TIP 4: Have you searched our list archives? http://archives.postgresql.org
Re: [HACKERS] ident auth vs. encrypting ident daemons
Tom Lane wrote: * I'm inclined to make the Red Hat RPMs default to ident on socket and md5 on localhost ... any comments about that? I typically use something like this on RH platforms and friends for the system db cluster: local all postgres ident sameuser local all all md5 host all all 127.0.0.1/32 md5 Not using ident on tcp connections would be a good thing. I think an extra initdb switch is probably warranted. cheers andrew ---(end of broadcast)--- TIP 3: Have you checked our extensive FAQ? http://www.postgresql.org/docs/faq
Re: [HACKERS] Re-thing PG_MODULE_MAGIC
Martijn van Oosterhout writes: > However, thinking about it, what they're doing can't possibly work. To > override malloc/free, you need to load the library *before* the C > library. Having the postmaster do it after startup is way too late. Yeah, I was wondering about that too. At the very least you'd expect some odd behaviors with memory malloc'd before the load vs memory malloc'd after. Much more likely, though, it just fails to do anything at all. regards, tom lane ---(end of broadcast)--- TIP 9: In versions below 8.0, the planner will ignore your desire to choose an index scan if your joining column's datatypes do not match
Re: [HACKERS] Re-thing PG_MODULE_MAGIC
On Fri, Jun 16, 2006 at 02:51:41PM +0100, Simon Riggs wrote: > On Wed, 2006-06-14 at 20:07 -0400, Tom Lane wrote: > > Josh Berkus writes: > > > I just noticed (the hard way) that in 8.2CVS, the PG_MODULE_MAGIC header > > > is > > > now *required* for all loadable modules. This includes non-pg modules, > > > such as Solaris' libumem (performance improvement for malloc). > > > > What is libumem and why are you trying to load it as a dynamic module? > > http://www.usenix.org/event/usenix01/full_papers/bonwick/bonwick_html/index.html So it's a library that replaces malloc() and free() with new versions. Does it actually help postgres, given postgres has its own memory allocator already? In any case, it would be fairly straightforward to make a dummy module to wrap libumem. Create a file with just the module magic and link it against libumem. However, thinking about it, what they're doing can't possibly work. To override malloc/free, you need to load the library *before* the C library. Having the postmaster do it after startup is way too late. Have a nice day, -- Martijn van Oosterhout http://svana.org/kleptog/ > From each according to his ability. To each according to his ability to > litigate. signature.asc Description: Digital signature
Re: [HACKERS] Re-thing PG_MODULE_MAGIC
On Wed, 2006-06-14 at 20:07 -0400, Tom Lane wrote: > Josh Berkus writes: > > I just noticed (the hard way) that in 8.2CVS, the PG_MODULE_MAGIC header is > > now *required* for all loadable modules. This includes non-pg modules, > > such as Solaris' libumem (performance improvement for malloc). > > What is libumem and why are you trying to load it as a dynamic module? http://www.usenix.org/event/usenix01/full_papers/bonwick/bonwick_html/index.html -- Simon Riggs EnterpriseDB http://www.enterprisedb.com ---(end of broadcast)--- TIP 1: if posting/reading through Usenet, please send an appropriate subscribe-nomail command to [EMAIL PROTECTED] so that your message can get through to the mailing list cleanly
Re: [HACKERS] Re-thing PG_MODULE_MAGIC
Martijn van Oosterhout wrote: -- Start of PGP signed section. > On Fri, Jun 16, 2006 at 12:01:53AM -0400, Bruce Momjian wrote: > > OK, new text: > > > > The magic block required in all PostgreSQL-supported libraries > > is checked to guarantee compatibility. For this reason, non-PostgreSQL > > libraries cannot be loaded in this way. > > Heh. "guarantee"? We only check four things and it's possible to change > the system in all sorts of subtle ways (--integer-datetimes) that won't > be picked up. Ofcourse, for 99% of modules this is not a problem but > I'm not sure if the word "guarantee" is appropriate. > > Maybe just "checked for compatibility", or "determine" or "ascertain". OK, new text in new paragraph: Every PostgreSQL-supported library has a magic block that is checked to guarantee compatibility. For this reason, non-PostgreSQL libraries cannot be loaded in this way. -- Bruce Momjian http://candle.pha.pa.us EnterpriseDBhttp://www.enterprisedb.com + If your life is a hard drive, Christ can be your backup. + ---(end of broadcast)--- TIP 6: explain analyze is your friend
Re: [HACKERS] postgresql and process titles
> > I did have dbt2 pretty close to functional on FreeBSD a year ago but > > it's probably gone back into linuxisms since then. > > :( > > I won't have the chance to work on this further for another 2 months, > but if you have patches I could see about picking up on them when I > get back. Everything has been applied to the dbt2 tree. -- ---(end of broadcast)--- TIP 5: don't forget to increase your free space map settings
Re: [HACKERS] Postgres Crashes on Win2K
Hi The O/S is Win 2K Server SP4 and PG version is 8.0. Here's the log file from the latest incident: 2006-06-16 09:40:01 LOG: 0: server process (PID 4744) was terminated by signal 125 2006-06-16 09:40:01 LOCATION: LogChildExit, postmaster.c:2335 2006-06-16 09:40:01 LOG: 0: terminating any other active server processes 2006-06-16 09:40:01 LOCATION: HandleChildCrash, postmaster.c:2228 2006-06-16 09:40:02 WARNING: 57P02: terminating connection because of crash of another server process 2006-06-16 09:40:02 DETAIL: The postmaster has commanded this server process to roll back the current transaction and exit, because another server process exited abnormally and possibly corrupted shared memory. 2006-06-16 09:40:02 HINT: In a moment you should be able to reconnect to the database and repeat your command. 2006-06-16 09:40:02 LOCATION: quickdie, postgres.c:1890 2006-06-16 09:40:03 LOG: 0: all server processes terminated; reinitializing 2006-06-16 09:40:03 LOCATION: reaper, postmaster.c:2127 2006-06-16 09:40:04 LOG: 0: database system was interrupted at 2006-06-15 20:03:42 GMT Daylight Time 2006-06-16 09:40:04 LOCATION: StartupXLOG, xlog.c:4054 2006-06-16 09:40:04 LOG: 0: checkpoint record is at 0/3C5B5350 2006-06-16 09:40:04 LOCATION: StartupXLOG, xlog.c:4123 2006-06-16 09:40:04 LOG: 0: redo record is at 0/3C5B5350; undo record is at 0/0; shutdown FALSE 2006-06-16 09:40:04 LOCATION: StartupXLOG, xlog.c:4151 2006-06-16 09:40:04 LOG: 0: next transaction ID: 2991949; next OID: 182344 2006-06-16 09:40:04 LOCATION: StartupXLOG, xlog.c:4154 2006-06-16 09:40:04 LOG: 0: database system was not properly shut down; automatic recovery in progress 2006-06-16 09:40:04 LOCATION: StartupXLOG, xlog.c:4210 2006-06-16 09:40:04 LOG: 0: record with zero length at 0/3C5B5390 2006-06-16 09:40:04 LOCATION: ReadRecord, xlog.c:2487 2006-06-16 09:40:04 LOG: 0: redo is not required 2006-06-16 09:40:04 LOCATION: StartupXLOG, xlog.c:4312 2006-06-16 09:40:04 LOG: 0: database system is ready 2006-06-16 
09:40:04 LOCATION: StartupXLOG, xlog.c:4517 Thanks Ch.
Re: [HACKERS] Re-thing PG_MODULE_MAGIC
On Fri, Jun 16, 2006 at 12:01:53AM -0400, Bruce Momjian wrote: > OK, new text: > > The magic block required in all PostgreSQL-supported libraries > is checked to guarantee compatibility. For this reason, non-PostgreSQL > libraries cannot be loaded in this way. Heh. "guarantee"? We only check four things and it's possible to change the system in all sorts of subtle ways (--integer-datetimes) that won't be picked up. Of course, for 99% of modules this is not a problem but I'm not sure if the word "guarantee" is appropriate. Maybe just "checked for compatibility", or "determine" or "ascertain". Have a nice day, -- Martijn van Oosterhout http://svana.org/kleptog/ > From each according to his ability. To each according to his ability to > litigate. signature.asc Description: Digital signature