Re: [GENERAL] need for in-place upgrades
Centuries ago, Nostradamus foresaw when [EMAIL PROTECTED] (Ron Johnson) would write:
> On Thu, 2003-09-18 at 16:29, Andrew Sullivan wrote:
> > On Sat, Sep 13, 2003 at 10:52:45AM -0500, Ron Johnson wrote:
> > > So instead of 1TB of 15K fiber channel disks (and the requisite
> > > controllers, shelves, RAID overhead, etc), we'd need *two* TB of
> > > 15K fiber channel disks (and the requisite controllers, shelves,
> > > RAID overhead, etc) just for the 1 time per year when we'd
> > > upgrade PostgreSQL?
> > Nope. You also need it for the time when your vendor sells
> > controllers or chips or whatever with known flaws, and you end up
> > having hardware that falls over 8 or 9 times in a row.

This of course never happens in real life; expensive hardware is _always_ UTTERLY reliable. And the hardware vendors all have the same high standards as, well, certain database vendors we might think of. After all, Oracle and MySQL AB would surely never mislead their customers about the merits of their database products any more than HP, Sun, or IBM would about the possibility of their hardware having tiny flaws.

And I would never mislead anyone, either. I'm sure I got a full 8 hours sleep last night. I'm sure of it...
--
cbbrowne,@,cbbrowne.com
http://www3.sympatico.ca/cbbrowne/finances.html
XML combines all the inefficiency of text-based formats with most of the unreadability of binary formats :-) -- Oren Tirosh

---(end of broadcast)---
TIP 6: Have you searched our list archives? http://archives.postgresql.org
[GENERAL] Triggers/perform oddity between 7.2 and 7.3
Hi gurus,

Does anyone know how to solve this? I was playing around with triggers on my own machine, which runs 7.3, and there it works okay, but when trying it on 7.2 I get an error about duplicate keys. Upgrading the other machines to 7.3 is not possible yet. Any help on getting it to work with 7.2 would be appreciated.

snip

create table a (
    n varchar,
    x int,
    y int,
    z int
);

create table b (
    n varchar,
    x_sum int,
    y_sum int,
    z_sum int,
    primary key (n)
);

create function plpgsql_call_handler() returns language_handler
    as '/usr/lib/postgresql/lib/plpgsql' language c;
create language plpgsql handler plpgsql_call_handler;

create function a_trigger() returns trigger as '
declare
    n_rows integer;
begin
    if (tg_op = ''INSERT'') then
        perform n from b where n = new.n;  -- Problem is here,
        if not found then                  -- I think..
            insert into b (n, x_sum, y_sum, z_sum)
                values (new.n, new.x, new.y, new.z);
        else
            update b set
                x_sum = x_sum + new.x,
                y_sum = y_sum + new.y,
                z_sum = z_sum + new.z
            where n = new.n;
        end if;
        return new;
    end if;

    if (tg_op = ''DELETE'') then
        perform n from a where n = old.n;
        get diagnostics n_rows = ROW_COUNT;
        if n_rows = 1 then
            delete from b where n = old.n;
        else
            update b set
                x_sum = x_sum - old.x,
                y_sum = y_sum - old.y,
                z_sum = z_sum - old.z
            where n = old.n;
        end if;
        return old;
    end if;

    if (tg_op = ''UPDATE'') then
        update b set
            x_sum = x_sum - old.x + new.x,
            y_sum = y_sum - old.y + new.y,
            z_sum = z_sum - old.z + new.z
        where n = new.n;
        return new;
    end if;
end
' language 'plpgsql';

create trigger a_trigger before insert or update or delete on a
    for each row execute procedure a_trigger();

/snap

In 7.2.1 I get an error:

trigger_test=# insert into a values ('a', 1, 2, 3);
INSERT 49555 1
trigger_test=# insert into a values ('a', 1, 2, 3);
NOTICE:  Error occurred while executing PL/pgSQL function a_trigger
NOTICE:  line 6 at SQL statement
ERROR:  Cannot insert a duplicate key into unique index b_pkey
trigger_test=# select version();
                         version
---------------------------------------------------------
 PostgreSQL 7.2.1 on i686-pc-linux-gnu, compiled by GCC 2.95.4
(1 row)

In 7.3.3 it works fine:

trigger_test=# insert into a values ('a', 1, 2, 3);
INSERT 456365 1
trigger_test=# insert into a values ('a', 1, 2, 3);
INSERT 456366 1
trigger_test=# select version();
                         version
---------------------------------------------------------
 PostgreSQL 7.3.3 on i386-pc-linux-gnu, compiled by GCC gcc (GCC) 3.3 (Debian)
(1 row)

Thanks.
--
Riddler
Headcrusher - Plasma Fire
Re: [GENERAL] Serialization, Locking...implement processing Queue with a
I searched for a long time for an answer on how to correctly use the database's locking features to create a job distribution system. I also hate programs that poll the database every few seconds. Then I noticed that PostgreSQL has a notify feature that informs clients when a table changes. Beautiful!! Why not use it? It also works with a DB server behind a firewall, because the connection is initiated from the client.

I think the correct method to use is the one described by Tom Lane in this mail:
http://archives.postgresql.org/pgsql-general/2003-05/msg00351.php

Before that mail, I used this method:
http://groups.google.com/groups?hl=en&lr=&ie=UTF-8&threadm=002b01c2d6ca%24acceff60%240100a8c0%40kodunet.ee&rnum=3&prev=/groups%3Fhl%3Den%26lr%3D%26ie%3DISO-8859-1%26q%3D%2522select%2Bfor%2Bupdate%2522%2B%2B%252B%2522limit%2B1%2522%26meta%3Dgroup%253Dcomp.databases.postgresql.*

Every concurrent process listens for a notify on a working table. A new job arrives in a writer table and is copied by a trigger into the working table, as described by Tom Lane and others. Clients are also informed of the table change with a notify, so you don't have to poll the database. At that point every client selects a new record and simply tries to update it. I noticed that after an update I can check

    if (atoi(PQcmdTuples(res)) != 0)

and if it returns a positive value I can process the record.

Now, what is the difference between this method:

My poor method:
1) select id from readertable where owner = 0 limit 1
2) update readertable set owner = $me where (id = $id and owner = 0);
3) if (atoi(PQcmdTuples(res)) != 0) then process the record

and the method described by Tom Lane?

--- Tom Lane's method ---
1. When idle, you try to reserve a job like so:

    BEGIN;
    SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;
    SELECT job_id, ... FROM job_table
        WHERE processor_id = 0 LIMIT 1 FOR UPDATE;

The SELECT has three possible outcomes:

2a: One row is returned. You do

    UPDATE job_table SET processor_id = $me WHERE job_id = $jobid;
    COMMIT;

and then go about executing the job. When done, delete the row from job_table and try to get another one.

2b: No row is returned: no jobs are pending. Commit your transaction, sleep for an appropriate delay period, and try again.

2c: You get a "can't serialize" failure.
---

Can multiple concurrent updates on a table generate a table lock, so that clients must wait for a long time? Making only one update is the correct solution, but who can explain this? And what exactly does SELECT FOR UPDATE do?

Suppose now that many jobs are submitted at the same time. With the second method, are different jobs selected by different clients with only one select? Why doesn't PostgreSQL support UPDATE table ... LIMIT 1?
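The heart of the "poor method" is that the conditional UPDATE itself is atomic: whichever client's `UPDATE ... WHERE owner = 0` commits first wins the row, and every other client sees an affected-row count of zero (what `PQcmdTuples()` reports in libpq). A minimal sketch of that claim step, using Python's built-in sqlite3 in place of a live PostgreSQL connection (table and column names follow the "poor method" above; this illustrates the rowcount check only, not the SERIALIZABLE variant):

```python
import sqlite3

# Stand-in for a PostgreSQL connection; the claim logic is the same SQL.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE readertable (id INTEGER PRIMARY KEY,"
    " owner INTEGER NOT NULL DEFAULT 0)")
conn.executemany(
    "INSERT INTO readertable (id, owner) VALUES (?, 0)", [(1,), (2,)])

def claim_job(conn, me):
    """Try to reserve one pending job; return its id, or None if none was won."""
    row = conn.execute(
        "SELECT id FROM readertable WHERE owner = 0 ORDER BY id LIMIT 1"
    ).fetchone()
    if row is None:
        return None  # no jobs pending
    cur = conn.execute(
        "UPDATE readertable SET owner = ? WHERE id = ? AND owner = 0",
        (me, row[0]))
    # cur.rowcount plays the role of atoi(PQcmdTuples(res)): 1 means we won
    # the row, 0 means another worker claimed it between our SELECT and UPDATE.
    return row[0] if cur.rowcount == 1 else None
```

Two workers racing for the same row cannot both see rowcount 1, because the second UPDATE no longer matches `owner = 0`; that is why the single conditional UPDATE needs no explicit lock.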
Re: [GENERAL] need for in-place upgrades
[EMAIL PROTECTED] (Marc G. Fournier) writes:
> On Thu, 18 Sep 2003, Andrew Sullivan wrote:
> > On Sat, Sep 13, 2003 at 10:27:59PM -0300, Marc G. Fournier wrote:
> > > I thought we were talking about upgrades here?
> > You do upgrades without being able to roll back?
> Hadn't thought of it that way ... but, what would prompt someone to upgrade, then use something like erserver to roll back? All I can think of is that the upgrade caused a lot of problems with the application itself, but in a case like that, would you have the time to be able to 're-replicate' back to the old version?

Suppose we have two DBs:

  db_a - Old version
  db_b - New version

Start by replicating db_a to db_b. The approach would presumably be that at the time of the upgrade, you shut off the applications hitting db_a (injecting changes into the source), and let the final set of changes flow through to db_b. That brings db_a and db_b to having the same set of data.

Then reverse the flow, so that db_b becomes master, flowing changes to db_a. Restart the applications, configuring them to hit db_b. db_a should then be just a little bit behind db_b, and be a recovery plan in case the new version played out badly.

That's surely not what you'd _expect_; the point of the exercise was for the upgrade to be an improvement. But if something Truly Evil happened, you might have to. And when people are talking about risk management, and ask what you do if Evil Occurs, this is the way the answer works.

It ought to be pretty cheap, performance-wise, to do things this way; certainly not _more_ expensive than the replication was to keep db_b up to date.
--
(reverse (concatenate 'string gro.mca @ enworbbc))
http://www.ntlug.org/~cbbrowne/oses.html
Rules of the Evil Overlord #149. Ropes supporting various fixtures will not be tied next to open windows or staircases, and chandeliers will be hung way at the top of the ceiling. http://www.eviloverlord.com/
Re: [GENERAL] add constraints to views
Andreas Fromm [EMAIL PROTECTED] writes:
> No, I mean that the view behaves like a table with the same columns as the table, but restricted to records on which "tag" is set. Inserting a record into this special table requires "field1" to be set. In other words: a record of the table is a record of the view if "tag" is set. If "tag" is set, then "field1" also has to have a value. If "tag" is not set, it may have a value for "field1", but will not show up in the view. Of course I could achieve this with triggers, but I thought it might be possible to do via constraints on the view.

Constraints on a view are meaningless --- it has no real rows to constrain. Put the constraints on the underlying table.

regards, tom lane
Re: [GENERAL] State of Beta 2
Joshua D. Drake [EMAIL PROTECTED] writes:
> The reality of pg_dump is not a good one. It is buggy and not very reliable.

I think everyone acknowledges that we have more work to do on pg_dump. But we have to do that work anyway. Spreading ourselves thinner by creating a whole new batch of code for in-place upgrade isn't going to improve the situation. The thing I like about the pg_upgrade approach is that it leverages a lot of code we already have and will need to continue to maintain in any case.

Also, to be blunt: if pg_dump still has problems after all the years we've put into it, what makes you think that in-place upgrade will magically work reliably?

> This I am hoping changes in 7.4 as we moved to a pure C implementation.

Eh? AFAIR, pg_dump has always been in C.

regards, tom lane
Re: [GENERAL] This mail list and its policies
[EMAIL PROTECTED] writes:
> Also, I DO run spamassassin here, and it is finding only perhaps 10-20% of the copies of the most recent worm. I think it sends out copies that are sufficiently different from each other that it bypasses all the checks, including a Bayesian filter.

Hmm. I've had no trouble filtering the actual worm (I filter using a pattern that looks for the first few bytes of a base64-encoded Windows executable file). The only copies that were getting as far as my spam inbox were ones that had had the executable file removed by various helpful filtering programs.

> I hope there is a special corner of hell reserved for spammers and an even worse corner reserved for virus/worm writers.

And antivirus writers whose work increases the noise level instead of reducing it. They should know better than to bounce back complaint messages to the From: line when they have recognized a worm that is known to forge From:.

regards, tom lane
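The pattern being described is easy to derive: every Windows executable begins with the two magic bytes "MZ", so the base64-encoded body of an attached .exe always starts with the same few characters. Tom didn't publish his exact pattern; the helper names below are hypothetical and this Python sketch just shows the idea:

```python
import base64

# A Windows executable starts with the magic bytes "MZ"; in the common case
# the first four bytes are MZ\x90\x00, whose base64 encoding begins "TVqQ".
EXE_PREFIX = base64.b64encode(b"MZ\x90\x00").decode()[:4]  # "TVqQ"

def carries_exe_attachment(message_body: str) -> bool:
    """Heuristic: does any line look like the start of a base64-encoded .exe?"""
    return any(line.strip().startswith(EXE_PREFIX)
               for line in message_body.splitlines())
```

A real filter would also want to check for the base64 content-transfer-encoding header, and perhaps match a couple of prefix variants (the byte after "MZ" differs between some executables), but even this single literal catches the common case.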
Re: [GENERAL] 7.3.2 client connecting to 7.2.1 server problem
Jeff [EMAIL PROTECTED] writes:
> I have the following packages installed on my system:
>   libpq3: 7.3.2r1-5
>   python-pygresql: 7.3.2r1-5
>   postgresql: 7.2.1-2woody2
>   postgresql-client: 7.2.1-2woody2
> In my application, which is written in Python, I get a "server closed the connection unexpectedly" exception after executing a few SQL commands.

Hmm. What do you see in the postmaster log when this happens?

BTW, you *really* should update to 7.2.4. There are nasty bugs in 7.2.1. There's not enough info here to guess whether you are hitting a known bug, though.

regards, tom lane
Re: [GENERAL] This mail list and its policies
On Sat, Sep 20, 2003 at 11:46:11 -0500, [EMAIL PROTECTED] wrote:
> I hope there is a special corner of hell reserved for spammers and an even worse corner reserved for virus/worm writers.

Don't you mean for people who use Lookout to read email?
Re: [GENERAL] Triggers/perform oddity between 7.2 and 7.3
Andreas Forsgren [EMAIL PROTECTED] writes:
> Does anyone know how to solve this? I was playing around with triggers on my own machine which runs 7.3 and there it works okay, but when trying it on 7.2 I get an error about duplicate keys.

[checks code] In 7.2, PERFORM didn't set FOUND; in 7.3 it does. To make your code work with 7.2, the easiest thing would be to change the PERFORM into a SELECT INTO.

regards, tom lane
Re: [GENERAL] This mail list and its policies
On Thursday, Sept 18, Bruno Wolff said:
> One option for you is to use the list address in the from header when posting to the list. That will hide your address and not break replies. Most likely the list checks the envelope sender address to see whether or not the message needs moderator approval. So you should be able to have your messages go through right away if you keep the envelope sender address the same as your subscription address.

Do you know of any eMail client that can be configured to do such a thing? If so, I'm instantly in love ... Or even better, an SMTP daemon that can be configured to do it in rewriting rules. I used to use Sendmail, but am now using Postfix. Suggestions welcome!
--
Dean Gibson
Re: [GENERAL] 7.3.2 client connecting to 7.2.1 server problem
On Wed, 2003-09-17 at 21:55, Jeff wrote:
> I have the following packages installed on my system:
>   libpq3: 7.3.2r1-5
>   python-pygresql: 7.3.2r1-5
>   postgresql: 7.2.1-2woody2
>   postgresql-client: 7.2.1-2woody2

[This is clearly a Debian system.] You seem to have done a partial upgrade to testing; the effect will be that you have two versions of the libpq library, and python-pygresql will not be using the same one as pgsql. It looks as though I made the dependencies too loose in that testing release.

> In my application, which is written in Python, I get a "server closed the connection unexpectedly" exception after executing a few SQL commands. I print each query before I execute it (debugging), so I tried executing the queries through psql with no problems. Is there a known incompatibility between the versions of client/server that I'm using? Or a known bug in pygresql?

I suggest you upgrade to the version from unstable (7.3.4-5). Testing is held up and isn't getting timely upgrades; I would not, myself, consider it as good as unstable until we get very close to a new Debian release. Unstable has versioned Python packages.

If you want woody versions of the postgresql packages, you can get them with this apt line:

  deb http://people.debian.org/~elphick/debian main

--
Oliver Elphick  [EMAIL PROTECTED]
Isle of Wight, UK  http://www.lfix.co.uk/oliver
GPG: 1024D/3E1D0C1C: CA12 09E0 E8D5 8870 5839 932A 614D 4C34 3E1D 0C1C
But my God shall supply all your need according to his riches in glory by Christ Jesus. Philippians 4:19
Re: [GENERAL] Question about connection java and Postgresql in Linux
On 20/09/2003 15:23 DN wrote:
> I am trying to use Red Hat Linux 9, J2SDK 1.4.2 and PostgreSQL 7.3.2 to do my setup. But I cannot access the database to set/get data through the Java program. Please give me some suggestions. Here are my settings and problems: http://hk.geocities.com/daisy_nhy/java_postgresql2.html Thanks

The PostgreSQL .jar file must go in common/lib, _not_ share/lib. Two other things:

1) Have you enabled TCP/IP on the postmaster by setting

     tcpip_socket = true

   in /var/lib/pgsql/data/postgresql.conf?

2) Have you edited /var/lib/pgsql/data/pg_hba.conf to un-comment the line

     # host all all 127.0.0.1 255.255.255.255 trust

Because if you haven't done both of these, you won't be able to connect from Java. And before anybody jumps in, I'll just point out that the host line is far too permissive to be used in production, but if you're just starting out experimenting with PostgreSQL and Java, it's one less thing to have to worry about for now.

HTH
--
Paul Thomas
Thomas Micro Systems Limited | Software Solutions for the Smaller Business
Computer Consultants | http://www.thomas-micro-systems-ltd.co.uk
Re: [GENERAL] This mail list and its policies
On Sat, Sep 20, 2003 at 12:53:12 -0700, Dean Gibson (DB Administrator) [EMAIL PROTECTED] wrote:
> On Thursday, Sept 18, Bruno Wolff said:
> > One option for you is to use the list address in the from header when posting to the list. That will hide your address and not break replies. Most likely the list checks the envelope sender address to see whether or not the message needs moderator approval. So you should be able to have your messages go through right away if you keep the envelope sender address the same as your subscription address.
> Do you know of any eMail client that can be configured to do such a thing? If so, I'm instantly in love ...

You should be able to use send-hook in mutt to do this. You need to set up a default send-hook and one that checks for sending to each postgres list you are subscribed to. You can have it set the from address to the list for the various list cases, and to your normal address the rest of the time. Probably the envelope sender address will come from your login name being appended to the host name without you doing anything more.

> Or even better, an SMTP daemon that can be configured to do it in rewriting rules. I used to use Sendmail, but am now using Postfix. Suggestions welcome!

That really isn't the right place to do it. Sendmail needed to do that because it was written in an era where email commonly had to move between different email networks and the messages needed to be reformatted as they moved from network to network. For the vast majority of cases today, that doesn't need to happen. That is why recent MTAs don't mess with headers very much.
Re: [GENERAL] OT: HEADS-UP: viral storm out there
On Fri, 2003-09-19 at 17:16, Christopher Browne wrote:
> [EMAIL PROTECTED] (Nikola Milutinovic) writes:
> > This is off topic and is a cross-post, so I'll be brief. There is a very nasty virus out there and I urge everybody to get their AV in order. The virus is known as: W32.Gibe-F or W32.Swen-A. CERT has a report on it (if the URL resolves :-().
> > http://www.cert.org/current/current_activity.html#swena
> I have been receiving _thousands_ of these today, and others are reporting similar. It's as bad for us not running W32 as it is for those that are...

Except they can get infected.
--
- Ron Johnson, Jr. [EMAIL PROTECTED] Jefferson, LA USA
Causation does NOT equal correlation
Re: [GENERAL] PostgreSQL versus MySQL
On Thu, 2003-09-18 at 17:15, Christopher Browne wrote:
> [EMAIL PROTECTED] (Sean Chittenden) writes:
> > [snip] A coworker made the snide remark that anyone that doesn't totally get it is likely to really wonder about the notion of roasting a dolphin over a spit. Roasting dolphins over the fire isn't exactly politically correct, and the only people that could get it as being funny are those that are in the advocacy group.
> When I saw the image, I had to check at mysql.com to ensure that the dolphin actually *is* the MySQL mascot. [snip] (Note, I say this all as one of the politically-incorrect people that find jokes involving furry creatures being blown up to be outrageously funny at some weird visceral level.)

http://www.geekswithguns.com/
--
- Ron Johnson, Jr. [EMAIL PROTECTED] Jefferson, LA USA
You can either have software quality or you can have pointer arithmetic, but you cannot have both at the same time. -- Bertrand Meyer
Catalog vs. user table format (was Re: [GENERAL] State of Beta 2)
On Sat, 2003-09-20 at 11:17, Tom Lane wrote:
> Marc G. Fournier [EMAIL PROTECTED] writes:
> > No, I'm not suggesting no catalog changes ... wait, I might be wording this wrong ... there are two changes that right now require a dump/reload: changes to the catalogs and changes to the data structures, no? Or are these effectively inter-related?
> Oh, what you're saying is no changes in user table format. Yeah, we

Whew, we're finally on the same page! So, some definitions we can agree on?

  catalog change: a CREATE or ALTER of a pg_* table.
  on-disk structure, a.k.a. user table format: the way that the tables/fields are actually stored on disk.

So, a catalog change should *not* require a dump/restore, but an ODS/UTF change should. Agreed?
--
- Ron Johnson, Jr. [EMAIL PROTECTED] Jefferson, LA USA
they love our milk and honey, but preach about another way of living -- Merle Haggard, The Fighting Side Of Me
Re: [GENERAL] This mail list and its policies
On Sat, 2003-09-20 at 11:04, Joshua D. Drake wrote:
> Hello,
> Just run IMAP. That way all the mail stays on the server. Your system will just grab the headers and you can delete as required. Also you could install something like spamassassin on the server, if your ISP will let you.

Get BigISP to let me run an IMAP daemon on their servers?

> Gaetano Mendola wrote:
> > Ron Johnson wrote:
> > > On Thu, 2003-09-18 at 23:25, Tom Lane wrote:
> > > > expect [EMAIL PROTECTED] writes: [snip]
> > > > Get a life (or at least a spam-blocker).
> > > Even with spam blockers, the spam/virus still must be downloaded from the server, and if the person is on dial-up, that can be *most* painful: In the 24 hour period from yesterday noon to today noon, I received 209 MS Update viruses, each of which is 153KB.
> > With a dial-up it is better anyway to download only the headers and delete it if it is spam.

--
- Ron Johnson, Jr. [EMAIL PROTECTED] Jefferson, LA USA
Basically, I got on the plane with a bomb. Basically, I tried to ignite it. Basically, yeah, I intended to damage the plane. -- RICHARD REID, tried to blow up American Airlines Flight 63
Re: Catalog vs. user table format (was Re: [GENERAL] State of Beta
On Sat, 20 Sep 2003, Ron Johnson wrote:
> On Sat, 2003-09-20 at 11:17, Tom Lane wrote:
> > Marc G. Fournier [EMAIL PROTECTED] writes:
> > > No, I'm not suggesting no catalog changes ... wait, I might be wording this wrong ... there are two changes that right now require a dump/reload: changes to the catalogs and changes to the data structures, no? Or are these effectively inter-related?
> > Oh, what you're saying is no changes in user table format. Yeah, we
> Whew, we're finally on the same page! So, some definitions we can agree on?
>   catalog change: a CREATE or ALTER of a pg_* table.
>   on-disk structure, a.k.a. user table format: the way that the tables/fields are actually stored on disk.
> So, a catalog change should *not* require a dump/restore, but an ODS/UTF change should.

As long as pg_upgrade is updated/tested for this, yes, that is what the thought is ... but that still requires someone(s) to step up and work on/maintain pg_upgrade for this to happen ... all we are agreeing to right now is to implement a policy whereby maintaining pg_upgrade is *possible*, not one where maintaining pg_upgrade is *done* ...
Re: [GENERAL] need for in-place upgrades
On Fri, 2003-09-19 at 06:37, Christopher Browne wrote:
> [EMAIL PROTECTED] (Ron Johnson) wrote:
> > On Thu, 2003-09-18 at 16:29, Andrew Sullivan wrote:
> > > On Sat, Sep 13, 2003 at 10:52:45AM -0500, Ron Johnson wrote:
> > > > So instead of 1TB of 15K fiber channel disks (and the requisite controllers, shelves, RAID overhead, etc), we'd need *two* TB of 15K fiber channel disks (and the requisite controllers, shelves, RAID overhead, etc) just for the 1 time per year when we'd upgrade PostgreSQL?
> > > Nope. You also need it for the time when your vendor sells controllers or chips or whatever with known flaws, and you end up having hardware that falls over 8 or 9 times in a row.
> This of course never happens in real life; expensive hardware is _always_ UTTERLY reliable. And the hardware vendors all have the same high standards as, well, certain database vendors we might think of. After all, Oracle and MySQL AB would surely never mislead their customers about the merits of their database products any more than HP, Sun, or IBM would about the possibility of their hardware having tiny flaws.

Well, I use Rdb, so I wouldn't know about that! (But then, it's an Oracle product, and runs on HPaq h/w...)

> And I would /never/ claim to have lost sleep as a result of flakey hardware. Particularly not when it's a HA fibrechannel array. I'm /sure/ that has never happened to anyone. [The irony here should be causing people to say "ow!"]

Sure, I've seen expensive h/w flake out. It was the 8 or 9 times in a row that confused me.
--
- Ron Johnson, Jr. [EMAIL PROTECTED] Jefferson, LA USA
The difference between drunken sailors and Congressmen is that drunken sailors spend their own money.
Re: [GENERAL] about the pstate node
=?gb2312?q?=B2=A9=D7X=20=B5=D4?= [EMAIL PROTECTED] writes:
> I am trying to analyse the source code in /backend/parser, and I am really puzzled about the node ParseState, especially the meaning of p_namespace and p_joinlist!

p_namespace is a list of the FROM items (RangeTblRef's or JoinExpr's) whose fields can validly be referenced from the current point in the query. We alter it while scanning the ON clause of a JOIN, for example, because the ON clause is only allowed to refer to fields from the JOIN's input tables.

p_joinlist has the same structure but is a list of all the FROM items that the planner must join to execute the query. Usually the final states of p_namespace and p_joinlist are the same, but there are special cases where items get entered into only one list or the other (mostly for NEW and OLD in rules, IIRC).

regards, tom lane
Re: PG + PHP, was Re: [GENERAL] Zend survey result about dbms...
Tom Lane wrote:
> Richard Huxton [EMAIL PROTECTED] writes:
> > One thing I think would be useful is another pseudo-var in PG, something like APP_SESSION_VAR which can be set and then used in PG queries. Tom - if I offered to produce a patch for something like this - either a var or a function pair (get_app_session_var(), set_app_session_var(varchar)) - would it be likely to be accepted?
> It'd depend a lot on the details of what you propose, I think. True variables seem like they'd be rather invasive, not to mention possibly error-prone (is "foo" a variable or a column reference?). However you could do something pretty self-contained in the form of a couple of functions. I'd suggest they support more than just one variable, btw. How about set_session_variable(varname, varvalue) and get_session_variable(varname)? I should think we'd at least accept that as a contrib module; whether it would become mainstream would probably depend on the level of interest.

In the past, one hack has been to use setenv() and getenv() in the backend to implement these functions. What about a contrib module that:

1) At set_session_variable(), does a getenv() on a key (PG_APP_SESSION_VAR) to see if memory has already been allocated for a private variable map. If so, adds the variable to the map; else allocates a new map and sets its address using setenv().

2) At get_session_variable(), does a getenv() on the key (PG_APP_SESSION_VAR) to see if a map exists. If so, looks in the map for the value of the name specified and returns it.

3) At reset_session_variables(), does a getenv() on the key (PG_APP_SESSION_VAR) to see if a map exists. If so, empties it.

This way, no change to any internal Postgres code is required, the memory allocated to the session-specific variables gets released at backend closing, etc. Would something like that be acceptable?

Mike Mascari [EMAIL PROTECTED]
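For what it's worth, the proposed interface is simple enough to sketch outside the backend. Here is a Python stand-in for the intended semantics, purely illustrative: the dict plays the role of the per-backend private map, and the real implementation would be C inside a contrib module:

```python
# Stand-in for the per-backend variable map; in the backend this would be
# a statically allocated structure living until the backend process exits.
_session_vars = {}

def set_session_variable(name, value):
    """Store a session-scoped variable."""
    _session_vars[name] = value

def get_session_variable(name):
    """Read a session-scoped variable; unset reads back as None (like SQL NULL)."""
    return _session_vars.get(name)

def reset_session_variables():
    """Empty the map, as at the start of a fresh session."""
    _session_vars.clear()
```

Because the map is private to one process, two concurrent sessions never see each other's variables; that isolation is exactly what a per-backend static structure gives you for free.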
Re: PG + PHP, was Re: [GENERAL] Zend survey result about dbms...
Mike Mascari [EMAIL PROTECTED] writes:
> In the past, one hack has been to use setenv() and getenv() in the backend to implement these functions. What about a contrib module that:

Uh, what exactly does it buy you to involve an environment variable in this process? I think it just adds fragility. (For example, exposing setenv to the user creates the risk that he'll overwrite something of importance, like PATH.)

regards, tom lane
Re: PG + PHP, was Re: [GENERAL] Zend survey result about dbms...
Tom Lane wrote:
> Mike Mascari [EMAIL PROTECTED] writes:
> > In the past, one hack has been to use setenv() and getenv() in the backend to implement these functions. What about a contrib module that:
> Uh, what exactly does it buy you to involve an environment variable in this process? I think it just adds fragility. (For example, exposing setenv to the user creates the risk that he'll overwrite something of importance, like PATH.)

Actually, I meant that setenv() and getenv() would only be used to store the memory address of a privately manipulated variable map. I did not mean that it should actually be used to store and retrieve the variables themselves. If there is already a way to palloc() memory using a key that lives for the lifetime of a backend, then that's obviously the way to go. I was proceeding under the assumption that there wasn't.

Mike Mascari [EMAIL PROTECTED]
Re: PG + PHP, was Re: [GENERAL] Zend survey result about dbms...
Mike Mascari [EMAIL PROTECTED] writes:
> Actually, I meant that setenv() and getenv() would only be used to store the memory address of a privately manipulated variable map. I did not mean that it should actually be used to store and retrieve the variables themselves. If there is already a way to palloc() memory using a key that lives for the lifetime of a backend, then that's obviously the way to go.

A plain old static pointer variable would do fine ... look for instance at the remote-connections table in contrib/dblink/dblink.c.

regards, tom lane
Re: PG + PHP, was Re: [GENERAL] Zend survey result about dbms...
Mike Mascari wrote:
> Tom Lane wrote:
> > Uh, what exactly does it buy you to involve an environment variable in this process? I think it just adds fragility. (For example, exposing setenv to the user creates the risk that he'll overwrite something of importance, like PATH.)
> Actually, I meant that setenv() and getenv() would only be used to store the memory address of a privately manipulated variable map. I did not mean that it should actually be used to store and retrieve the variables themselves. If there is already a way to palloc() memory using a key that lives for the lifetime of a backend, then that's obviously the way to go. I was proceeding under the assumption that there wasn't.

It's been a long time since I've used global variables. Sorry...

Mike Mascari [EMAIL PROTECTED]
Re: [GENERAL] Can't Build 7.3.4 on OS X
Hunter Hillegas [EMAIL PROTECTED] writes:
> I cannot build the latest release on OS X Jaguar. Running GCC 3.3 from Apple:

It seems -traditional-cpp has become nontraditional in 3.3. Or possibly Apple changed their system header files in a way that broke that preprocessor. What's certain is that gcc 3.3 rejects some of the Darwin header files when using -traditional-cpp. This is not our fault ;-).

It seems to work okay if you change -traditional-cpp to -no-cpp-precomp in src/template/darwin (you will need to rerun configure afterwards). Experimenting, I find that Apple's gcc 3.1 and 3.3 both build PG fine with that switch. Their latest update of gcc 2.95 no longer seems to work at all (it generates invalid assembler code for xlog.c). I tried removing the cpp option entirely, but that blew up in other places, and I have no interest in figuring out why just now.

Is anyone on the list still running OS X 10.1, or anyway still using a version of the OS X developer tools older than the Dec 2002 release? It would be good to check whether -no-cpp-precomp creates any problems on any release that anyone still cares about. For the moment, I've made src/template/darwin unconditionally use -no-cpp-precomp, but we could probably hack it to use -traditional-cpp if there's still any interest in Darwin compiler versions that don't recognize -no-cpp-precomp.

regards, tom lane