[GENERAL] transaction confusion
I have a function that deletes all the values in a table and then inserts the data again. If this function is run in two sessions at the same time, it populates the table twice, giving me a unique-value error because one of the fields is supposed to be unique. I understand this: the second session does not see the first session's delete, because the first transaction is still running, so it deletes the same data the first one deleted and does not know that the new data exists when it starts to delete. I need a table-level lock to prevent this from happening.

I have a second function that takes three times as long to run and also includes a delete and an insert statement, as well as a lot of other work. The weird thing is that if this function is run in two separate sessions at the same time, the data is not duplicated. There is no lock, I don't have a unique index on this table, and when the process finishes there is the same number of records whether I run it once or twice. This is how I would like it to run, but I have to know why it works; otherwise I'll be worried that it will bite me in a couple of months because something surfaces that I didn't look at before.

I have tested it in pgAdmin. The first statements of the function are the delete and the insert. When I run just the delete and insert at the same time in two sessions, it puts in double the records. When I run the whole function at the same time in two sessions, it puts in the records only once. It's as if the second function is waiting for the first function to completely finish before starting. The function that duplicates (and causes the error) is just a straight delete and insert. The one that doesn't also includes a FOR loop over a SELECT, with an UPDATE statement (on the same table it is selecting from) inside the loop, AFTER the delete and insert statements. Does this make any sense?

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?
       http://www.postgresql.org/docs/faq
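The table-level lock the poster asks for can be taken explicitly at the top of the function. This is only a sketch (not from the thread), using the table and column names that appear later in this thread:

```sql
CREATE OR REPLACE FUNCTION populaterescheduleparts() RETURNS void AS
$BODY$
begin
    -- SHARE ROW EXCLUSIVE conflicts with itself, so a second session
    -- running this function blocks here until the first session's
    -- transaction commits; its DELETE then sees the fresh rows.
    LOCK TABLE reschedulepoparts IN SHARE ROW EXCLUSIVE MODE;

    DELETE FROM reschedulepoparts;
    INSERT INTO reschedulepoparts (popartid, priority, rescqty)
    SELECT a.popartid, a.priority, sum(b.rescqty)
    FROM reschedulepoparts_2 a
    JOIN reschedulepoparts_1 b
      ON a.popartid = b.popartid AND a.priority = b.priority
    GROUP BY a.popartid, a.priority;
    return;
end;
$BODY$ LANGUAGE 'plpgsql' VOLATILE;
```

Any self-conflicting lock mode (EXCLUSIVE would also work) serializes the delete/insert pair; the lock is released automatically at transaction end.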
Re: [GENERAL] PostgreSQL installed in non-standard Directory
On Sat, Sep 16, 2006 at 03:53:04PM +0200, Michelle Konzack wrote:
> Since I have no Internet access @home anymore, I want to install my PostgreSQL in one of the partitions I have created. To do this, I want to compile it static, so that it does not need any system libraries.
[snip]
> So, how can I compile PostgreSQL 8.1 static?

Do you really want this? Compiling statically is not straightforward, even assuming all the libraries actually exist in static form. Would it not be easier to install the system in a chroot? Then you can use all the normal system libs and standard install packages.

Have a nice day,
-- Martijn van Oosterhout kleptog@svana.org http://svana.org/kleptog/
From each according to his ability. To each according to his ability to litigate.
Re: [GENERAL] transaction confusion
On Sun, Sep 17, 2006 at 10:32:12AM +0200, Sim Zacks wrote:
> I have a function that deletes all the values in a table and then inserts the data again. If this function is run in 2 sessions at the same time then it populates it twice, giving me a unique value error, because one of the fields is supposed to be unique.
[snip]
> Does this make any sense?

Not overly much, to me anyway. Could you provide some cut-and-paste output from a psql session so we can see the actual commands you're typing? It's possible the actual deletes and inserts are coded in a way that causes a problem.

Hope this helps,
-- Martijn van Oosterhout kleptog@svana.org http://svana.org/kleptog/
From each according to his ability. To each according to his ability to litigate.
Re: [GENERAL] transaction confusion
This function, when run in 2 separate sessions at the same time, causes a duplicate key error because popartid is a primary key. This makes sense to me: when the function is run in the second session, it doesn't know about the transaction currently running in the first session. Therefore, when it does its delete, it obviously doesn't delete that data. When the first one finishes, it puts the data into the table. When the second one finishes, it attempts to put its data into the table but finds records already there, which violates the primary key and causes an error.

CREATE OR REPLACE FUNCTION populaterescheduleparts() RETURNS void AS
$BODY$
begin
    delete from reschedulepoparts;
    insert into reschedulepoparts (popartid, priority, rescqty)
    SELECT a.popartid, a.priority, sum(b.rescqty)
    FROM reschedulepoparts_2 a
    JOIN reschedulepoparts_1 b ON a.popartid = b.popartid AND a.priority = b.priority
    group by a.popartid, a.priority;
    return;
end;
$BODY$ LANGUAGE 'plpgsql' VOLATILE;

The next one is a bit longer. I would expect that if it was run in 2 separate sessions, 1) it would generate the same error as the first one generated, because it also populates the same table, and 2) there would be twice the number of records in the stat_allocated_components table. However, no matter how many tests I run, this function always exits with the correct number of rows in the table and without error.
CREATE OR REPLACE FUNCTION populate_allocated_components() RETURNS void AS
$BODY$
declare
    statrec stat_allocated_components;
    crs refcursor;
    statpos stat_allocated_components;
    v_partid int;
    v_lfbused int8; v_lfused int8; v_lused int8;
    v_polf int8; v_pol int8; v_polfb int8;
    v_availableLF int8; v_availableL int8; v_availableLFB int8;
    v_balancel int8; v_balancelf int8;
    leftovers int8;
    futurel int8; futurelf int8; futurelfb int8;
    beforeupdate int8;
    v_firstneedl date; v_firstneedlf date;
    sumqty int8;
    rowresc record;
    leadfound bool;
    tqty int8;
Begin
    update systemsettings set lastranallocated = now();
    delete from stat_allocated_components;
    delete from reschedulepoparts;

    insert into stat_allocated_components (partid, quantity, assembliesbatchid, assemblyname,
        popartid, duedate, stock, leadfree, l, lf, lfb, lbp, leadstateid)
    SELECT a.partid, a.quantity, c.assembliesbatchid, d.assemblyname, 0,
        CASE WHEN (c.entrydate + '49 days'::interval) > c.lastmodified
             THEN c.lastmodified
             ELSE c.entrydate + '49 days'::interval
        END::timestamp with time zone AS duedate,
        case when d.leadfree then coalesce(g.totallf, 0) else coalesce(g.totallead, 0) end AS stock,
        d.leadfree, coalesce(g.l, 0), coalesce(g.lf, 0), coalesce(g.lfb, 0), coalesce(g.lbp, 0),
        case when d.leadfree then 1 else 2 end
    FROM assemblies d
    JOIN assembliesbatch c ON d.assemblyid = c.assemblyid
    JOIN allocatedassemblies a ON a.assembliesbatchid = c.assembliesbatchid
    left join stockperownerandleadstate g on g.partid = a.partid and g.ownerid = 1
    WHERE c.assembliesbatchstatusid in (1, 2, 4, 7) and a.quantity > 0;

    insert into stat_allocated_components (partid, quantity, assembliesbatchid, assemblyname,
        popartid, duedate, l, lf, lfb, lbp, leadstateid, postatusid)
    SELECT a.partid, a.quantity, a.popartid AS assembliesbatchid, a.poref AS assemblyname,
        a.popartid AS popartid, a.expecteddate AS duedate,
        coalesce(g.l, 0), coalesce(g.lf, 0), coalesce(g.lfb, 0), coalesce(g.lbp, 0),
        a.leadstateid, postatusid
    FROM expectedpodelivery a
    left join stockperownerandleadstate g on g.partid = a.partid and g.ownerid = 1;

    v_partid := -1;
    for statrec in select * from stat_allocated_components
                   order by partid, duedate, assembliesbatchid LOOP
        if v_partid <> statrec.partid then
            v_partid := statrec.partid;
            v_lfbused := 0; v_lfused := 0; v_lused := 0;
            v_polf := 0; v_pol := 0; v_polfb := 0;
            v_balancel := 0; v_balancelf := 0;
            v_firstneedl := null; v_firstneedlf := null;
            tqty := 0;
        end if;
        if statrec.quantity > 0 then
            v_balancel := statrec.stock - (v_lused + v_lfbused) + v_pol + v_polfb
                + case when not statrec.leadfree then statrec.quantity else 0 end;
            v_balancelf := statrec.stock - (v_lfused + v_lfbused) + v_polf + v_polfb
                + case when statrec.leadfree then statrec.quantity else 0 end;
            if v_balancel < 0 and v_firstneedl is null then
                v_firstneedl := statrec.duedate;
            end if;
            if v_balancelf < 0 and v_firstneedlf is null then
Re: [GENERAL] remove duplicate rows
You forgot to mention that all the functions/views that utilized that table now also point to the original table under its changed name, because PostgreSQL doesn't store the table name, it stores the table oid.

Berend Tober wrote:
> A. Kretschmer wrote:
>> am Wed, dem 13.09.2006, um 15:46:58 -0700 mailte Junkone folgendes:
>>> Hi, I have a bad situation: I did not have a primary key, so I have a table like this:
>>>
>>>   colname1 | colname2
>>>   ---------+---------
>>>          1 | apple
>>>          1 | apple
>>>          2 | orange
>>>          2 | orange
>>>
>>> It is a very large table. How do I remove the duplicates quickly and without much change?
>>
>> begin;
>> alter table foo rename to tmp;
>> create table foo as select distinct * from tmp;
>> commit;
>
> A couple of potential problems here. First, you forgot to drop table tmp. But maybe that is a good thing, because although the OP hasn't told us anything else useful about the situation, and he has clearly contrived a simplistic facsimile of his real problem, to be useful the table most likely either has foreign key references and/or is the primary key for other tables' foreign keys. Your suggestion will break whatever application this database supports, because all the foreign keys will point to table tmp rather than foo afterwards. Similarly, there is the problem of any indexes on the table that would be lost. But I suppose one can make the point that your suggestion is a great solution, given the contrived example and insufficient problem understanding presented by the OP -- I really think he needs more help than he realizes.
>
> Regards, Berend Tober
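A variation that keeps the original table in place (and therefore its oid, so dependent views, foreign keys, and indexes stay attached) is to deduplicate through a temporary table. A sketch against the OP's hypothetical foo table:

```sql
BEGIN;
-- Snapshot the distinct rows first.
CREATE TEMP TABLE foo_dedup AS SELECT DISTINCT * FROM foo;

-- Empty the original table and refill it; foo itself is never
-- dropped or renamed, so dependent objects keep working.
DELETE FROM foo;
INSERT INTO foo SELECT * FROM foo_dedup;

DROP TABLE foo_dedup;
COMMIT;
```

On a very large table this still rewrites everything, and concurrent writers should be kept out (for instance with an explicit LOCK TABLE) while it runs; adding a primary key afterwards prevents the duplicates from coming back.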
Re: [GENERAL] transaction confusion
Sim Zacks [EMAIL PROTECTED] writes:
> This function, when run in 2 separate sessions at the same time, causes a duplicate key error because popartid is a primary key. ... The next one is a bit longer. I would expect that if it was run in 2 separate sessions that 1) it would generate the same error as the first one generated,

I would expect that too, assuming that it actually inserts the same set of rows into the table as the first one does (how sure are you of that?). I wonder whether you are testing them both under identical conditions. In particular, is the initial DELETE really doing anything, or is the table usually empty to start with anyway? If there is something to delete, then that ought to serialize the two sessions, leading to no error (because the second guy in will wait to see if the first guy commits his deletion).

[ eyeballs second function some more... ] Actually, the second function is guaranteed to be serialized by that initial update of systemsettings --- I assume that's a one-row table? The second guy in will be unable to get past that until the first guy commits, and then he'll see the first guy's updates and there will be no error.

I think the reason you are seeing failures in the first function is that the initial DELETE is a no-op, so it doesn't serialize anything, and then there is conflict when the two INSERTs proceed in parallel.

regards, tom lane
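The serialization Tom describes can be sketched as a two-session timeline (the table names are the poster's; the interleaving shown is an illustration, not output from the thread):

```sql
-- Session A                              -- Session B
BEGIN;
UPDATE systemsettings
   SET lastranallocated = now();
                                          BEGIN;
                                          UPDATE systemsettings
                                             SET lastranallocated = now();
                                          -- blocks here: A holds the row lock
                                          -- on the single systemsettings row
DELETE FROM stat_allocated_components;
INSERT INTO stat_allocated_components
    SELECT ...;   -- repopulate as before
COMMIT;
                                          -- B's UPDATE now proceeds; B's
                                          -- subsequent DELETE sees A's
                                          -- committed rows, so no duplicates.
```

Any statement that both sessions run against the same row produces this effect, which is why the function with the leading UPDATE never conflicts, while a DELETE against an already-empty table locks nothing and serializes nothing.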
[GENERAL] Windows Vista Starter
Has anybody had success installing PostgreSQL on Windows Vista (Starter)?
Re: [GENERAL] PostgreSQL slammed by PHP creator
> I've never used a hammer to put in a screw.

So I guess you're one of those ivory-tower theory-purist academic types, at least when it comes to home repairs. As a more practical person myself, let me just say that sometimes a 3lb hammer is exactly the right tool to get a screw set in quickly ;-)

-- Scott Ribe [EMAIL PROTECTED] http://www.killerbytes.com/ (303) 722-0567 voice
Re: [GENERAL] [HACKERS] One of our own begins a new life
On Fri, 2006-09-15 at 09:38 -0700, Joshua D. Drake wrote:
> Hello, Yeah, this is a cross post and it is slightly off topic but IMHO this is important. Tomorrow one of our own, Devrim Gunduz is becoming a man. He is sucking it up, and committing to the cvs repo of project marriage. May the patch reviewers be kind to him! Congratz Devrim, have a good honeymoon and we look forward to having you back in a couple of weeks!

Hi Joshua,

I have done a quick post-commit review of his recent patch; it looked (very) good to me :) I'd also like to raise the point of how good he is at pretending to be a belly dancer.

-- Enver
Re: [GENERAL] PostgreSQL slammed by PHP creator
On 09/17/06 16:34, Scott Ribe wrote:
>> I've never used a hammer to put in a screw.
> So I guess you're one of those ivory-tower theory-purist academic types, at least when it comes to home repairs. As a more practical person myself, let me just say that sometimes a 3lb hammer is exactly the right tool to get a screw set in quickly ;-)

Power, baby, power.
http://www.blackanddecker.com//ProductImages/PC_Graphics/PHOTOS/BDK/POWER_TOOLS/TOOLS/LARGE/3/LI3000_1.jpg
http://www.blackanddecker.com/ProductGuide/Product-Details.aspx?ProductID=10952

-- Ron Johnson, Jr. Jefferson LA USA
Is common sense really valid? For example, it is common sense to white-power racists that whites are superior to blacks, and that those with brown skins are mud people. However, that common sense is obviously wrong.
Re: [GENERAL] One of our own begins a new life
On 9/15/06, Joshua D. Drake [EMAIL PROTECTED] wrote:
> Hello, Yeah, this is a cross post and it is slightly off topic but IMHO this is important. Tomorrow one of our own, Devrim Gunduz is becoming a man. He is sucking it up, and committing to the cvs repo of project marriage. May the patches reviewers be kind to him! Congratz Devrim, have a good honey moon and we look forward to having you back in a couple of weeks!

The kind of cross post off topic post that's always welcome! :)

t.n.a.
Re: [GENERAL] PostgreSQL slammed by PHP creator
Scott Ribe wrote:
>> I've never used a hammer to put in a screw.
> So I guess you're one of those ivory-tower theory-purist academic types, at least when it comes to home repairs. As a more practical person myself, let me just say that sometimes a 3lb hammer is exactly the right tool to get a screw set in quickly ;-)

I would love to argue with this, but I just can't. I have more than once used the following for a major repair:

1. Duct tape (admit it, you all have)
2. A paperclip (got my car started)
3. A hammer (it hurts the wrist less than a screwdriver)
4. Golf tees (used in place of screws to keep my dishwasher in place)
5. Tar sealant (can pretty much seal any leak)

Joshua D. Drake

-- === The PostgreSQL Company: Command Prompt, Inc. ===
Sales/Support: +1.503.667.4564 || 24x7/Emergency: +1.800.492.2240
Providing the most comprehensive PostgreSQL solutions since 1997
http://www.commandprompt.com/
Re: [GENERAL] PostgreSQL slammed by PHP creator
A case in point: when it comes to driving a screw into a material (like wood) in which the screw creates the mating helix itself when first driven in, you might find it helpful to hammer a small portion of the screw into the receiving part, so the screw gets an initial hold. It is much easier then to drive it fully in. Note that you do not need a heavy hammer; in fact, using a heavy hammer in such cases might even be counter-productive, since you would lose the sense of precision/accuracy required. You would want to hammer only as much of the screw into the wood as is needed for the screw to get an initial hold in the receiving material, and no more -- if you want to take advantage of the greater holding power of a mating helix.

P.

Scott Ribe wrote:
>> I've never used a hammer to put in a screw.
> So I guess you're one of those ivory-tower theory-purist academic types, at least when it comes to home repairs. As a more practical person myself, let me just say that sometimes a 3lb hammer is exactly the right tool to get a screw set in quickly ;-)
Re: [GENERAL] PostgreSQL slammed by PHP creator
>> As a comment on his points though, isn't it better to use the right tool for the job rather than try to force one tool to do everything? I've never used a hammer to put in a screw.
> You haven't? All right-minded people know that PostgreSQL is manifestly superior and that people who want to use MySQL are scurvy dogs.

I fear you've jumped the gun, sir - Talk Like A Pirate Day isn't until tomorrow :-).

Tim

-- Tim Allen [EMAIL PROTECTED] Proximity Pty Ltd http://www.proximity.com.au/
Re: [GENERAL] PostgreSQL slammed by PHP creator
On 09/17/06 20:07, Tim Allen wrote:
>>> As a comment on his points though, isn't it better to use the right tool for the job rather than try to force one tool to do everything? I've never used a hammer to put in a screw.
>> You haven't? All right-minded people know that PostgreSQL is manifestly superior and that people who want to use MySQL are scurvy dogs.
> I fear you've jumped the gun, sir - Talk Like A Pirate Day isn't until tomorrow :-).

Practice, matey, practice. And a wee tankard o' grog. Or 10.

-- Ron Johnson, Jr. Jefferson LA USA
Re: [GENERAL] [pgsql-advocacy] One of our own begins a new life
On 9/15/06, Joshua D. Drake [EMAIL PROTECTED] wrote:
> Tomorrow one of our own, Devrim Gunduz is becoming a man. He is sucking it up, and committing to the cvs repo of project marriage.

Congratulations Devrim!

-- Jonah H. Harris, Software Architect | phone: 732.331.1300
EnterpriseDB Corporation | fax: 732.331.1301
33 Wood Ave S, 2nd Floor | [EMAIL PROTECTED]
Iselin, New Jersey 08830 | http://www.enterprisedb.com/
[GENERAL] FileMakerPro to postgreSQL
I have to update a database system and need some advice. Most of the database work I've done up to now has been database-backed websites, using Perl and/or PHP to connect to mSQL, MySQL, or PostgreSQL. One job that I inherited a few years ago was actually two separate databases: one an in-office FileMaker Pro client/server system, and the other their website. They entered the data separately into each. At the time they didn't want to make any changes to the FileMaker Pro system already in place, so I wrote a bridge between the two. Basically the FMP database FTPs the data overnight to the web server, and a Perl script checks and ingests the data into the website's MySQL database.

Over the years their business has grown, they've been adding to their FMP database system, and I've been adding features to the website. Now I've got the opportunity to convert to a single database. (Of course, I favor PostgreSQL or I wouldn't be writing this here!) I found FMPmigrator, a program that appears to do the conversion from FMP to any popular database (including PostgreSQL), which looks quite promising. The largest table is only now approaching 100,000 rows; it's not a very big database, but it does contain JPEGs.

My biggest concern is the existing FileMaker Pro clients. How much work would it be to connect the existing FMP clients to the PostgreSQL database using ODBC? Is this a viable temporary step, or could it be permanent? If they decide to go forward with this I'll talk to the FMPmigrator people, but I thought I'd just float this out on the list and see what other ways exist that I haven't thought of. Anybody know of a great book for me to learn more about database client/server architecture? Besides Perl and PHP I also know C and BASIC, although I haven't used those in a while, and I've fooled around with Java, so I won't totally rule out writing new client software. Also, I've read here that using Access might be a good quick choice.
But if I could use the existing FileMaker Pro clients, that might be fastest, eh?

brew

==========================
Strange Brew ([EMAIL PROTECTED])
Check out my Stock Option Covered Call website http://www.callpix.com
and my Musician's Online Database Exchange http://www.TheMode.com
==========================
Re: [GENERAL] FileMakerPro to postgreSQL
[EMAIL PROTECTED] writes:
> ... My biggest concern is the existing FileMakerPro clients. How much work would it be to connect the existing FMP clients to the postgreSQL database using ODBC? Is this a viable temporary step or could it be permanent?

If they can speak ODBC, then in theory you can easily connect them to PG that way. There's a fair-size gap between theory and practice, however --- for example, the clients might be depending on some nonstandard SQL syntax. The only way to really check this out is to test the clients against PG and see what comes up. See the pgsql-odbc list archives for some examples of gotchas.

regards, tom lane
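Testing the theory-vs-practice gap usually starts with a DSN definition. A sketch of a unixODBC odbc.ini entry for the PostgreSQL ODBC driver; the DSN name, host, database, and driver path here are all hypothetical, and the exact driver filename varies by platform and version:

```
; /etc/odbc.ini (sketch)
[fmp_pg]
Description = PostgreSQL target for FileMaker Pro clients
Driver      = /usr/lib/libodbcpsql.so
Servername  = db.example.com
Port        = 5432
Database    = fmpdata
Username    = fmp
```

Running unixODBC's `isql -v fmp_pg fmp <password>` then exercises the connection and lets you try the clients' typical queries by hand before pointing the real FMP clients at it.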
Re: [GENERAL] [HACKERS] One of our own begins a new life
On Mon, Sep 18, 2006 at 12:29:56AM +0300, Enver ALTIN wrote: On Fri, 2006-09-15 at 09:38 -0700, Joshua D. Drake wrote: Hello, Hi Joshua, Yeah, this is a cross post and it is slightly off topic but IMHO this is important. Tomorrow one of our own, Devrim Gunduz is becoming a man. He is sucking it up, and committing to the cvs repo of project marriage. May the patches reviewers be kind to him! Congratz Devrim, have a good honey moon and we look forward to having you back in a couple of weeks! I have done a quick post-commit review of his recent patch, it looked (very) good to me :) I'd also like to raise the point of how good he's on pretending to be a belly dancer. Just be careful... this is one patch we don't want to be reviewing too closely. ;P Congrats Devrim! -- Jim Nasby[EMAIL PROTECTED] EnterpriseDB http://enterprisedb.com 512.569.9461 (cell) ---(end of broadcast)--- TIP 5: don't forget to increase your free space map settings
Re: [GENERAL] FileMakerPro to postgreSQL
Yes, FileMaker can use ODBC, but not as a live data source the way you would use Access as a front end. You can use ODBC to import the data into a FileMaker database, display/edit it there, and then export it back through ODBC. If they use FileMaker to enter all the data (none is entered from the web site), then you can set up a script to export the data through ODBC to update the website, either one record at a time after it is entered, or as a batch one or more times a day. If information is entered into the website, then you will also need a script to import the data from the website database.

On 18/9/2006 11:58, Tom Lane [EMAIL PROTECTED] wrote:
> [EMAIL PROTECTED] writes: ... My biggest concern is the existing FileMakerPro clients. How much work would it be to connect the existing FMP clients to the postgreSQL database using ODBC? Is this a viable temporary step or could it be permanent?
>
> If they can speak ODBC, then in theory you can easily connect them to PG that way. There's a fair-size gap between theory and practice, however --- for example, the clients might be depending on some nonstandard SQL syntax. The only way to really check this out is to test the clients against PG and see what comes up. See the pgsql-odbc list archives for some examples of gotchas.

-- Shane Ambler [EMAIL PROTECTED]
Get Sheeky @ http://Sheeky.Biz
[GENERAL] What is the Best Postgresql Load Balancing Solution available ?
Hi, I was searching for a load balancing solution for Postgres. I found some ready-to-use software such as PGCluster, Slony, pgpool, and others. It would really be nice if someone knows which one is the best, taking into consideration that I have an already-running application that I need to load balance. I would also like to know if someone has used Red Hat Cluster and GFS for implementing a PostgreSQL load balancing solution. I have already used that for implementing a mail load balancing solution, with 2 sendmail servers running on 2 different machines but sharing the same storage. Thanks for any help.

Najib Abi Fadel
System Administrator
Saint-Joseph University
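For a sense of what adopting one of these looks like, here is a minimal sketch of a pgpool-II configuration for spreading read-only queries across two replicated backends. The host names are hypothetical, and the parameter names are those used in pgpool-II's sample configuration; check the version you install, as names have changed between releases:

```
# pgpool.conf (excerpt) -- hypothetical two-node setup
listen_addresses = '*'
port = 9999

# Replicate writes to all backends; spread read-only SELECTs among them.
replication_mode = true
load_balance_mode = true

backend_hostname0 = 'db1.example.com'
backend_port0     = 5432
backend_weight0   = 1

backend_hostname1 = 'db2.example.com'
backend_port1     = 5432
backend_weight1   = 1
```

The application then connects to pgpool on port 9999 instead of directly to PostgreSQL, which is what makes this attractive for an already-running application: no application code changes, only a connection string change.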
[GENERAL] CLUSTERing on Insert
As I'm waiting for a CLUSTER operation to finish, it occurs to me that in a lot of cases the performance benefit of having one's data stored on disk in index order can outweigh the overhead involved in inserting data on disk in index order. Just an idea I thought I'd throw out. :)

Also, the CLUSTER operation is about as straightforward as one can get: it basically reads each row, one by one, in index order into a new table, reindexes, then renames the new table to preserve references. I've been thinking about how to speed up the copy process. Perhaps taking contiguous blocks of data and moving them into place would save some I/O time. Locking the table is another problem: would it be impossible to perform the CLUSTER within the context of a READ COMMITTED transaction, and then pick up the leftover rows and put them at the end of the file? The existing code makes some assumptions that the table was not altered; there would be no more such assumptions.

I'm sure I'm not the first person to scratch his head thinking about CLUSTER. Maybe I just don't really understand the limitations that prevent these things from being implemented. But what else is there to do at 1AM on a Sunday night waiting for a 500MB table to CLUSTER? :)

CG
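The rewrite CLUSTER performs can be approximated by hand, which makes the cost visible (table, column, and index names here are hypothetical; unlike real CLUSTER, this loses existing indexes, constraints, and dependent views, so it's a thought experiment rather than a drop-in replacement):

```sql
BEGIN;
-- Rewrite the table in index order, as CLUSTER does internally.
CREATE TABLE big_table_sorted AS
    SELECT * FROM big_table ORDER BY sort_col;

DROP TABLE big_table;
ALTER TABLE big_table_sorted RENAME TO big_table;

-- Recreate the index the table is now physically ordered by.
CREATE INDEX big_table_sort_idx ON big_table (sort_col);
COMMIT;
```

Real CLUSTER holds an exclusive lock on the table for the entire rewrite, which is exactly the lock-duration problem the READ COMMITTED idea above is trying to avoid.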