[SQL] transaction processing after error in statement
Dear PostgreSQL Gurus,

I have been using PostgreSQL for quite a while and have always relied on its handling of transactions and concurrency. But recently I discovered a behaviour that deviates quite a lot from how I understand transactions and from how things are handled by other databases: HP Allbase, Oracle 8 and .mdb files (Access). Here's the description:

Whenever an error occurs within a transaction, PostgreSQL puts the whole transaction in an *ABORT* state, so that there is no difference at all between COMMITing or ROLLBACKing it. Even commands successfully carried out before the error occurred are rolled back, even if I COMMIT the transaction, and no error message whatsoever is shown. Example:

begin;
insert into table1 values (1, 'hello');   --> success!
select no from table1;
ERROR:  Attribute 'no' not found
commit;                                   --> success!

Why should the insert statement fail just because there was a typo in the following select statement? It was already carried out successfully, albeit only visible to the current transaction. I found this behaviour to be the same across all 7.x versions of PostgreSQL. Unfortunately, I haven't been able to find an explanation of why PostgreSQL behaves like this and why all the other RDBMSs I tried behave differently. In this case the others make more sense to me.

Additionally, I have discovered that phantom reads occur in PostgreSQL even if isolation level serializable is used. Also not so nice!

Sincerely,

Holger

-- 
Holger Jakobs * D-51469 Bergisch Gladbach
Telefon +49-2202-59991 * Mobilfon +49-177-792-2466
[SQL] help me...
Hello everyone, good day! Could anyone help me translate these Microsoft Access queries to PostgreSQL? I'm having difficulty. Please...

Query1:
SELECT items.description, Sum(supplieditems.qty) AS SumOfqty
FROM items INNER JOIN supplieditems ON items.itemno = supplieditems.itemno
GROUP BY items.description;

Query2:
SELECT [items].[description], Sum([customer].[qty]) AS SumOfqty
FROM (items INNER JOIN OtherItem ON [items].[itemno]=[OtherItem].[Itemno]) INNER JOIN customer ON [OtherItem].[Itemno]=[customer].[itemcode]
GROUP BY [items].[description];

Query3:
SELECT [Query1].[SumOfqty], [Query2].[SumOfqty], [Query1]![SumOfqty]-[Query2]![SumOfqty] AS remain
FROM Query1, Query2;

I have translated Query1 and Query2 to PostgreSQL, but I don't know how to implement Query3.
[SQL] Getting the row_count value outside of a function
I can get the row_count value inside a function, but I can't get it outside of a function. How can I do this?

Regards,
Enio
Re: [SQL] DateDiff in PostgreSQL
Hi,

You can use the age() function to find the difference between dates, and use the extract() function to get the years, months, days, etc.

Yasir

On Fri, 7 Nov 2003, George A.J wrote:

> Hi,
>
> I am converting an MS SQL Server database to PostgreSQL, using PostgreSQL version 7.3.2.
>
> Is there any function like DateDiff() in MS SQL Server, i.e. a function that returns the difference of two dates (timestamps) in days, months or years?
>
> The - operator for timestamps returns the interval in days only.
>
> Please help
>
> jinujose
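For example (the dates here are invented for illustration; age() returns an interval, extract() pulls fields out of it):

-- difference as an interval
SELECT age(timestamp '2003-11-07', timestamp '2001-03-02');
-- result: 2 years 8 mons 5 days

-- individual fields of that interval
SELECT extract(year  FROM age(timestamp '2003-11-07', timestamp '2001-03-02')) AS years,
       extract(month FROM age(timestamp '2003-11-07', timestamp '2001-03-02')) AS months,
       extract(day   FROM age(timestamp '2003-11-07', timestamp '2001-03-02')) AS days;

-- a plain difference in whole days: subtract dates rather than timestamps
SELECT date '2003-11-07' - date '2001-03-02' AS days_between;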
[SQL] transaction management in plpgsql functions
It seems that transaction management statements (begin work ... commit work) are not allowed in plpgsql functions. Is that true? If true, what happens if I put a DML statement into a function? Is it automatically committed every time the function executes? Is there no way to roll back the changes?

Kind regards,

-- 
Cris Carampa (spamto:[EMAIL PROTECTED])
"Poor fanatical communists, no-globals, afflicted with the anti-Microsoft syndrome" (Linux users according to an ICOD poster)
[SQL] SQL Help
Please help. I am trying to JOIN two tables together with the LEFT OUTER JOIN statement, but with a twist:

1. table 1 id and table 2 projectid are the join fields.
2. Table 1 has the detail info for several occurrences in table 2. For each id in table 1 (project detail), there are many entries of projectid in table 2 (update log on the project).
3. I need a query that gets each project detail (table 1) with ONLY the LAST (most recent) update date and comment (table 2) for that project.

The result report will look something like this, repeated per project:

PROJECT NAME (table 1)  PROJECT DATE (table 1)  PROJECT OWNER (table 1)  LAST UPDATE (table 2)  LAST UPDATE COMMENT (table 2)

I have tried the SELECT TOP 1 function and the DISTINCT function; I am out of ideas.

Thanx
LuisFe
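One common way to express "only the most recent log row per project" in PostgreSQL is DISTINCT ON. The table and column names in this sketch (project, project_log, update_date, ...) are assumptions for illustration, not taken from the original post:

SELECT p.name        AS project_name,
       p.start_date  AS project_date,
       p.owner       AS project_owner,
       l.update_date AS last_update,
       l.comment     AS last_update_comment
FROM   project p
LEFT OUTER JOIN (
         -- DISTINCT ON keeps the first row per projectid in the given sort order,
         -- i.e. the most recent update for each project
         SELECT DISTINCT ON (projectid)
                projectid, update_date, comment
         FROM   project_log
         ORDER  BY projectid, update_date DESC
       ) AS l ON l.projectid = p.id;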
Re: [SQL] [HACKERS] Schema boggle...
Hi Marc,

I was actually leaning towards schemas as a method to partition the data out when I was beginning to plan AMS 2. Your suggestion of schemas confirmed this as a good approach for what we were looking for, so I started to code the initial layout for the schema in April/May, though at the time I did not have the full picture in my head on dealing with some sections of the database.

To address Josh's comments, though: monolithic tables have their uses, and I don't and won't contest that. However, we've been using a monolithic structure in our current AMS distribution, and in a lot of areas it has been quite a pain to try to bring some uniformity into the structure. First off, I have an ever-changing platform, and one of the issues I had was that development under the monolithic structure became problematic and it was very difficult to merge new elements in. Granted, the SQL schema was not all that great and a number of areas could have used improvement, but overall it was more trouble than it was worth.

By breaking our client content out into a schema structure, I've simplified the management of my data immensely. I can easily delete old/unused data without having to worry about updating scripts that have been written to clean out structures. I can give clients greater control over their own data without worrying about some "security" issue that I might have missed in my programming, so there's an added level of safety in an already tight structure. I've "objectified" the content into an easily changeable, update-friendly system of tables, views and functions, and I've reduced a fair amount of overhead by limiting the number of tables: with our original monolithic dataset we had approximately 90+ tables handling all of the content that we use on a regular basis, and with this new structure I've reduced that to a total of 30 tables, 8 of which are repeated across the schemas.

The way we are using the schemas also allows me to build in some assumptions that were tricky to code under the monolithic structure. With the schema system, the simplicity of the table set allowed me to simulate those assumptions without having to rewrite code each time I have an update, giving me a highly dynamic dataset. I'm also more confident working on a 3D level versus 2D: the schema levels introduce a third dimension into my structure and simplify visualizing how I want things to work. Within that third-dimensional structure, it made sense to be able to write an SQL statement that would search the schema set for patterns matching the one given in the query, similar to the way one can search for a pattern in a column's data. But Tom has given me an idea that will let me search multiple schemas within a dynamic plpgsql function that figures out all my patterned schemas and executes the resulting query as he suggested; I just need to learn plpgsql programming so that I can work out all of the details (Tom, I might bug you about that once in a while).

At 09:45 PM 11/5/03 -0400, Marc G. Fournier wrote:

Actually, the use of schemas was my idea, to speed up some dreadfully slow queries dealing with traffic stats from a table that was growing painfully monolithic ... the idea is/was that it would be easier to backup/remove all data pertaining to a specific client if they decided to close their account ...
On Wed, 5 Nov 2003, Josh Berkus wrote:

> Chris,
>
> > I work with Marc Fournier, for those who don't know, and have been
> > working extensively with the schemas feature for Hub.Org's new Account
> > Management package. Each client's data is stored in a separate schema
> > set aside just for them (though they will never have direct access to
> > it); it helps us "balance" out the data. Each schema has the same set of
> > tables, in which different data is stored, depending on the client.
>
> While this is an interesting use of schemas, it is not, IMHO, a good way to
> partition client data. Far better to have monolithic tables with a
> "client_id" column and then construct *views* which are available in each
> schema only to that client. Otherwise, as you've discovered, top-level
> management becomes a royal pain.
>
> I'd be happy to discuss this further on PGSQL-SQL, which is really the
> appropriate mailing list.
>
> --
> -Josh Berkus
> Aglio Database Solutions
> San Francisco
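A rough sketch of the kind of dynamic plpgsql function Chris describes above: loop over the schemas whose names match a pattern and run the same query against each one with EXECUTE. The client naming convention and the items table are assumptions for illustration, not Hub.Org's actual layout (7.x-style quoting, since dollar quoting did not exist yet):

CREATE OR REPLACE FUNCTION count_client_items() RETURNS integer AS '
DECLARE
    s     record;
    r     record;
    total integer := 0;
BEGIN
    -- find every schema following the (hypothetical) client naming pattern
    FOR s IN SELECT nspname FROM pg_namespace WHERE nspname LIKE ''client%'' LOOP
        -- run the same per-schema query, qualified with the schema name
        FOR r IN EXECUTE ''SELECT count(*) AS n FROM ''
                         || quote_ident(s.nspname) || ''.items'' LOOP
            total := total + r.n;
        END LOOP;
    END LOOP;
    RETURN total;
END;
' LANGUAGE 'plpgsql';

-- example call:
SELECT count_client_items();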
Re: [SQL] transaction management in plpgsql functions
After takin a swig o' Arrakan spice grog, Cris Carampa <[EMAIL PROTECTED]> belched out...:

> It seems that transaction management statements (begin work ... commit
> work) are not allowed in plpgsql functions. Is it true? If true,
> what happens if I put a DML statement into a function? Is it
> automatically committed every time the function executes? Is there no
> way to roll back the changes?

The "problem" with using BEGIN/COMMIT in plpgsql is fundamentally that those functions have to be _started_ in the context of a transaction, so by the time they get started, there is already a transaction in progress. If and when support for nested transactions gets into place, you would presumably be able to have nested transactions inside functions.

What happens may be a little different from what you think; things are not COMMITted when the function executes, but rather when the COMMIT takes place /on the transaction in which the function runs/. Thus...

BEGIN;
INSERT INTO T1 VALUES (4, 5);
INSERT INTO T2 VALUES (6, 7, NOW());
SELECT FUNNY_FUNCTION(4, 5, 6, 7, NOW());
DELETE FROM T1;
DELETE FROM T2;
COMMIT;

All of the changes commit as of the COMMIT statement at the end, and not before. If you had DDL creating table T3 in FUNNY_FUNCTION, then T3 would not become visible to other users until the COMMIT, although the current transaction could readily add/modify records before the COMMIT.

-- 
(format nil "[EMAIL PROTECTED]" "cbbrowne" "cbbrowne.com")
http://cbbrowne.com/info/sap.html
If you're sending someone some Styrofoam, what do you pack it in?
Re: [SQL] help me...
Hi,

I'm usually wrong, but you can create a view for Query1 and Query2. That is, do:

create view Query1 as [your query for query1];
create view Query2 as [your query for query2];

Then what you are doing should work. Are views offered in Access?

Regards,
Yasir

On Sat, 8 Nov 2003, ron_tabada wrote:

> Hello everyone, Good day! Could anyone help me translate this query in Microsoft Access to Postgresql. I'm having a difficulty. Pls...
>
> Query1:
> SELECT items.description, Sum(supplieditems.qty) AS SumOfqty
> FROM items INNER JOIN supplieditems ON items.itemno = supplieditems.itemno
> GROUP BY items.description;
>
> Query2:
> SELECT [items].[description], Sum([customer].[qty]) AS SumOfqty
> FROM (items INNER JOIN OtherItem ON [items].[itemno]=[OtherItem].[Itemno]) INNER JOIN customer ON [OtherItem].[Itemno]=[customer].[itemcode]
> GROUP BY [items].[description];
>
> Query3:
> SELECT [Query1].[SumOfqty], [Query2].[SumOfqty], [Query1]![SumOfqty]-[Query2]![SumOfqty] AS remain
> FROM Query1, Query2;
>
> I have translated Query1 and Query2 in POSTGRESQL but I don't know how to implement Query3.
Re: [SQL] transaction management in plpgsql functions
http://www.postgresql.org/docs/7.2/interactive/plpgsql-structure.html

"It is important not to confuse the use of BEGIN/END for grouping statements in PL/pgSQL with the database commands for transaction control. PL/pgSQL's BEGIN/END are only for grouping; they do not start or end a transaction. Functions and trigger procedures are always executed within a transaction established by an outer query --- they cannot start or commit transactions, since PostgreSQL does not have nested transactions."

Makes sense if you think about it.

On Thu, 6 Nov 2003, Cris Carampa wrote:

> It seems that transaction management statements (begin work ... commit
> work) are not allowed in plpgsql functions. Is it true? If true, what
> happens if I put a DML statement into a function? Is it automatically
> committed every time the function executes? Is there no way to roll back
> the changes?
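A trivial sketch to illustrate the quoted paragraph (the notes table and log_note function are invented): the BEGIN/END pair only delimits the plpgsql block, and the INSERT still stands or falls with the caller's transaction.

CREATE OR REPLACE FUNCTION log_note(text) RETURNS integer AS '
BEGIN                                    -- plpgsql block delimiter, not a transaction start
    INSERT INTO notes (body) VALUES ($1);
    RETURN 1;
END;                                     -- closes the block; no COMMIT happens here
' LANGUAGE 'plpgsql';

BEGIN;                          -- the outer transaction, started by the caller
SELECT log_note('first note');
ROLLBACK;                       -- undoes the INSERT made inside the function as well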
Re: [SQL] transaction processing after error in statement
On Fri, 7 Nov 2003 [EMAIL PROTECTED] wrote:

> Whenever an error occurs within the transaction, PostgreSQL puts the
> whole transaction in an *ABORT* state, so that there is no difference at
> all between COMMITing or ROLLBACKing it. Even commands successfully
> carried out before the error occurred are rolled back, even if I COMMIT
> the transaction, and no error message whatsoever is shown.

In PostgreSQL all errors are currently considered unrecoverable, and all statements in a transaction must commit or roll back together as a single unit. In the future, an implementation of nested transactions or savepoints would presumably relax this limitation, so that only a failed subtransaction, or the statements rolled back to a previous savepoint, would be undone rather than the whole transaction.

> Additionally, I have discovered that phantom reads occur in PostgreSQL
> even if isolation mode serializable is used. Also not so nice!

You're going to have to give more information for anyone to even begin to understand the case you're seeing.
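For reference, the savepoint support alluded to above did eventually arrive in PostgreSQL 8.0; on a version that has it, the original example could be rescued along these lines:

BEGIN;
INSERT INTO table1 VALUES (1, 'hello');
SAVEPOINT before_select;
SELECT no FROM table1;                 -- fails, but only dooms the subtransaction
ROLLBACK TO SAVEPOINT before_select;   -- discard the failed statement only
COMMIT;                                -- the INSERT is kept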
Re: [SQL] help me...
The world rejoiced as [EMAIL PROTECTED] (ron_tabada) wrote:

> Hello everyone, Good day! Could anyone help me translate this query
> in Microsoft Access to Postgresql. I'm having a difficulty. Pls...
>
> Query1:
> SELECT items.description, Sum(supplieditems.qty) AS SumOfqty
> FROM items INNER JOIN supplieditems ON items.itemno = supplieditems.itemno
> GROUP BY items.description;
>
> Query2:
> SELECT [items].[description], Sum([customer].[qty]) AS SumOfqty
> FROM (items INNER JOIN OtherItem ON [items].[itemno]=[OtherItem].[Itemno]) INNER JOIN customer ON [OtherItem].[Itemno]=[customer].[itemcode]
> GROUP BY [items].[description];
>
> Query3:
> SELECT [Query1].[SumOfqty], [Query2].[SumOfqty], [Query1]![SumOfqty]-[Query2]![SumOfqty] AS remain
> FROM Query1, Query2;
>
> I have translated Query1 and Query2 in POSTGRESQL but I don't know
> how to implement Query3.

Apparently you have discovered the nearest equivalent to "VIEWs" in Access. I can suggest two ways:

1. Define "query1" and "query2" as PostgreSQL views, as with...

create view query1 as
  SELECT items.description, Sum(supplieditems.qty) AS SumOfqty
  FROM items INNER JOIN supplieditems ON items.itemno = supplieditems.itemno
  GROUP BY items.description;

create view query2 as [omitted details].

Query 3 should work perfectly well when it has the two views to work with.

2. Subselects...

SELECT Query1.SumOfqty, Query2.SumOfqty, Query1.SumOfqty - Query2.SumOfqty AS remain
FROM (select stuff for query 1) as query1,
     (select stuff for query 2) as query2;

Approach #1 seems more appropriate, as it uses the views to keep the queries all simple.

-- 
let name="cbbrowne" and tld="acm.org" in String.concat "@" [name;tld];;
http://www.ntlug.org/~cbbrowne/nonrdbms.html
Incrementally extended heuristic algorithms tend inexorably toward the incomprehensible.
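For completeness, assuming query2 is created along the same lines as query1, the Access Query3 then translates almost verbatim (unquoted identifiers fold to lower case in PostgreSQL, so the mixed-case names still match):

SELECT query1.sumofqty,
       query2.sumofqty,
       query1.sumofqty - query2.sumofqty AS remain
FROM   query1, query2;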