[SQL] SET AUTOCOMMIT TO OFF
Please, could someone point me to the right list or tell me how to do a "SET AUTOCOMMIT TO OFF" within libpq. The case is pretty urgent, as we have to move our applications to Linux and thought it would be best to use PG 7.4. We used to have PG 7.3 under HP-UX, and we are thinking of a temporary downgrade back to 7.3 under Linux if we cannot solve the SET AUTOCOMMIT TO OFF issue. A simple "No, it's no longer available" or "Yes, it goes like ..." from someone among the core developers would be greatly appreciated.

Following up on what I was asking the other day: since server-side autocommit off was abolished in 7.4, I am looking for a session-wide replacement, meaning that as long as a connection is open, after every COMMIT the next SQL command triggers an implicit BEGIN.

The Release Notes on 7.4 state:

    The server-side autocommit setting was removed and reimplemented in
    client applications and languages. Server-side autocommit was causing
    too many problems with languages and applications that wanted to
    control their own autocommit behavior, so autocommit was removed from
    the server and added to individual client APIs as appropriate.

So basically, I cannot find the autocommit-off switch within the libpq interface.

TIA
Regards, Christoph

BTW: In ./postgresql-7.4.5/doc/html/ecpg-commands.html there is still a line saying "This mode can be explicitly turned off using EXEC SQL SET AUTOCOMMIT TO OFF."
Re: [SQL] SET AUTOCOMMIT TO OFF
Christoph Haller <[EMAIL PROTECTED]> writes:
> Please, could someone point me to the right list
> or tell me how to do a "SET AUTOCOMMIT TO OFF" within libpq.

libpq does not have any support for that.

			regards, tom lane
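What still works is doing by hand what the old server-side setting did for you: issue an explicit BEGIN before each unit of work and commit only when you are done, so nothing is committed until you say so. A minimal sketch; the accounts table is a made-up example, only there so the snippet runs on its own:

    -- Hypothetical example table:
    CREATE TEMP TABLE accounts (id integer, balance numeric);
    INSERT INTO accounts VALUES (1, 500);
    INSERT INTO accounts VALUES (2, 0);

    -- Explicit transaction block: nothing below is committed until COMMIT.
    BEGIN;
    UPDATE accounts SET balance = balance - 100 WHERE id = 1;
    UPDATE accounts SET balance = balance + 100 WHERE id = 2;
    COMMIT;
    -- To mimic the old autocommit-off behaviour, the client issues the
    -- next BEGIN itself after every COMMIT.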
Re: [SQL] inserting values into types
On Wed, Dec 01, 2004 at 06:07:34PM +0800, Andrew Thorley wrote:
> > Did you type the SQL statements and/or error messages instead
> > of cutting and pasting?
>
> yes i C&P'ed the SQL code & error code.

But did you copy the error message associated with the SQL code you copied, or did you copy some other error message? I ask because the function name in the error doesn't match the function name in the INSERT statement. Here's what I get when I execute the statements in your message:

test=> CREATE TYPE qwerty_UDT AS (abc INT);
CREATE TYPE
test=> CREATE TABLE t (col1 qwerty_UDT);
CREATE TABLE
test=> INSERT INTO t (col1) Values (qwerty_UDT(123));
ERROR:  function qwerty_udt(integer) does not exist

You said the error you got was:

ERROR:  function test_x(integer) does not exist

The error is the same but the detail differs: your error refers to test_x(integer) instead of qwerty_udt(integer). So where is test_x coming from?

-- 
Michael Fuhr
http://www.fuhr.org/~mfuhr/
[SQL] Found Large Files.. what objects are they?
I found a bunch of large files (more than 1 GB each) in one of the database directories. The files look like this:

69233123
69233123.1
69233123.2
69233123.3
69233123.4
...and so on.

These large files really delay the dumping process. Does anyone know what they could be and how to delete the related object? How do I find a table by OID?

Thanks
Yudie
[SQL] Query is slower
Hi,

I have a query using the "like" operator (select * from contract where name like 'JOHN%'), but the table has about 500 hundred records. The table has an index (create index ixcontract_name on contract (name)), but the query is very slow because it is not using the index. How do I get the query to use the index?

Sandro

-- 
Sandro Joel Eller
Senior Analyst/Programmer
www.tecsoft.com.br
[EMAIL PROTECTED]
Re: [SQL] inserting values into types
Hi Mike,

Sorry for the confusion, the test_x is my mistake: it's from another type I created which is executing the same code. The error I get from:

CREATE TYPE qwerty_UDT AS (abc INT);
CREATE TABLE t (col1 qwerty_UDT);
INSERT INTO t (col1) VALUES (qwerty_UDT(123));

is:

ERROR:  function qwerty_udt(integer) does not exist
HINT:  No function matches the given name and argument types. You may need to add explicit type casts.

----- Original Message -----
From: "Michael Fuhr" <[EMAIL PROTECTED]>
To: "Andrew Thorley" <[EMAIL PROTECTED]>
Subject: Re: [SQL] inserting values into types
Date: Wed, 1 Dec 2004 10:53:01 -0700

> On Wed, Dec 01, 2004 at 06:07:34PM +0800, Andrew Thorley wrote:
> > > Did you type the SQL statements and/or error messages instead
> > > of cutting and pasting?
> >
> > yes i C&P'ed the SQL code & error code.
>
> But did you copy the error message associated with the SQL code you
> copied, or did you copy some other error message?  I ask because
> the function name in the error doesn't match the function name in
> the INSERT statement.  Here's what I get when I execute the statements
> in your message:
>
> test=> CREATE TYPE qwerty_UDT AS (abc INT);
> CREATE TYPE
> test=> CREATE TABLE t (col1 qwerty_UDT);
> CREATE TABLE
> test=> INSERT INTO t (col1) Values (qwerty_UDT(123));
> ERROR:  function qwerty_udt(integer) does not exist
>
> You said the error you got was:
>
> ERROR:  function test_x(integer) does not exist
>
> The error is the same but the detail differs: your error refers
> to test_x(integer) instead of qwerty_udt(integer).  So where is
> test_x coming from?
>
> -- 
> Michael Fuhr
> http://www.fuhr.org/~mfuhr/
Re: [SQL] inserting values into types
"Andrew Thorley" <[EMAIL PROTECTED]> writes: > CREATE TYPE qwerty_UDT AS (abc INT); > CREATE TABLE t (col1 qwerty_UDT); > INSERT INTO t (col1) VALUES (qwerty_UDT(123)); > ERROR: function qwerty_udt(integer) does not exist Just say INSERT INTO t (col1) VALUES (ROW(123)); Note this will not work at all on pre-8.0 Postgres. regards, tom lane ---(end of broadcast)--- TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]
Re: [SQL] Query is slower
On Wed, 1 Dec 2004, Sandro Joel Eller wrote:

> I have a query using the "like" operator (select * from contract where
> name like 'JOHN%'), but the table has about 500 hundred records. The
> table has an index (create index ixcontract_name on contract (name)),
> but the query is very slow because it is not using the index. How do I
> get the query to use the index?

Well, there are two possibilities. The more likely one is that the database is not in "C" locale, so it won't use the index described (it would need an index with a _pattern_ops operator class). If this is the case, the two options are making the pattern_ops index or re-initdb with "C" locale.

The other, less likely, option is that it's expecting a large percentage of rows to match, which you can check with EXPLAIN.
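For reference, the pattern_ops route looks roughly like this (the index name is made up; use text_pattern_ops instead if the column is of type text rather than varchar):

    CREATE INDEX ixcontract_name_pattern
        ON contract (name varchar_pattern_ops);
    ANALYZE contract;

    -- Check that the planner now picks the index for the prefix search:
    EXPLAIN SELECT * FROM contract WHERE name LIKE 'JOHN%';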
Re: [SQL] Found Large Files.. what objects are they?
Centuries ago, Nostradamus foresaw when [EMAIL PROTECTED] ("Yudie") would write:
> I found bunch of large files (more than 1 gb) in one of database directory.
> The files looks like this:
> 69233123
> 69233123.1
> 69233123.2
> 69233123.3
> 69233123.4
> ...and so on.
>
> These large files very delay the dumping process.
> Anyone know what it could be & how to delete the object related?
> How to find a table by oid?

The phenomenon you are seeing occurs when there is a table with a great deal of data. Look for the table via the query:

  select * from pg_class where oid = 69233123;

Presumably these files are for a table that contains multiple GB of data. Or they could be for an index on a very large table. If the data in the table/index is useless to you, you might consider dropping the table/index.

-- 
let name="cbbrowne" and tld="gmail.com" in name ^ "@" ^ tld;;
http://www3.sympatico.ca/cbbrowne/wp.html
DO IT -- it's easier to get forgiveness than permission.
Re: [SQL] inserting values into types
> > CREATE TYPE qwerty_UDT AS (abc INT);
> > CREATE TABLE t (col1 qwerty_UDT);
> > INSERT INTO t (col1) VALUES (qwerty_UDT(123));
>
> ERROR:  function qwerty_udt(integer) does not exist
> HINT:  No function matches the given name and argument types. You may
> need to add explicit type casts.

Well, doesn't the error message say it? "function ... does not exist". The question would be why are you doing "qwerty_UDT(123)" in the first place? It seems you'd want to be casting? PostgreSQL, in any case, thinks you want to call a function qwerty_UDT(integer) which it can't find.

Karsten
-- 
GPG key ID E4071346 @ wwwkeys.pgp.net
E167 67FD A291 2BEA 73BD  4537 78B9 A9F9 E407 1346
[SQL] order by problem
Dear sir,

When I run a query with ORDER BY customer_name I am getting the following result (for example):

AA
A
B
AC

I want it to be:

A
B
AA
AB

How do I achieve this? Thanks in advance.

-Simon Moses
http://www.geocities.com/ks_moses
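The desired output above is a little ambiguous, so two hedged guesses. If the intent is to sort shorter names before longer ones, and alphabetically within the same length, ordering by length first does it (table name below is just an assumption):

    SELECT customer_name
    FROM customer
    ORDER BY length(customer_name), customer_name;

If the real problem is the database locale (a non-"C" locale can ignore spaces and case when comparing strings), the usual 7.4-era answer is to re-initdb with the "C" locale so ORDER BY compares byte by byte.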
[SQL] find the "missing" rows
I would like to find the "missing" rows between two sets without using a subselect (or views).

This query finds the rows that are in t1 but not in t2 (see the script below for table definitions):

QUERY 1:

select *
from t1
left join t2 on t1.i = t2.i
where t2.i is null

The above query is across two tables. I'm having trouble when all the data is in the same table. Here is my attempt, but it does not work. I've thought about doing it with views, but I really would like to try without views or a subselect...

QUERY 2:

Select a.i, b.i
from t as a
left join t as b on a.i = b.i
where a.n = 'a' and b.n = 'b' and b.i is null

Is there some clever trick to get a query working in similar fashion to QUERY 1 when all the data is in the same table (as in table "t")?

Definitions for Query 1:

create table t1 (i int);
insert into t1 values(1);
insert into t1 values(2);
insert into t1 values(3);
insert into t1 values(4);
insert into t1 values(5);

create table t2 (i int);
insert into t2 values(1);
insert into t2 values(2);
insert into t2 values(3);
insert into t2 values(5);

Definitions for Query 2:

create table t (n varchar(10), i int);
insert into t values('a',1);
insert into t values('a',2);
insert into t values('a',3);
insert into t values('a',4);
insert into t values('a',5);
insert into t values('b',1);
insert into t values('b',2);
insert into t values('b',3);
insert into t values('b',5);
Re: [SQL] grouping a many to many relation set
Richard Huxton wrote:
> Johan Henselmans wrote:
>> Hi, I am having a problem grouping a many to many relationship with
>> payments and receipts, where a payment can be for multiple receipts,
>> and a receipt can have multiple payments. I got a list of records that
>> are involved in such relations, but now I don't know how to group them
>> so that all payments and receipts belonging to the same group are
>> properly grouped. Here's the list:
>>
>>  bankbookdetid | receiptid
>> ---------------+-----------
>>            147 |        25
>>            157 |        25
>>            157 |       622
>>            321 |       100
>>            332 |       101
>>            332 |       100
>>  ...
>
> I think what's missing here is the explicit statement of which group
> these belong in. Without a value to sort/group by, there's nothing for
> your queries to "get a grip on".
>
> So - add a "group_id" column to the bank-book and receipt tables. Create
> a sequence to generate group id's on demand. Then you'll want a set of
> triggers that keeps the group details up to date. Of course, groups can
> shift as you add more records - particularly in the case of two groups
> merging when you add a "linking" row.
>
> Maybe someone smarter than me can come up with a non-procedural
> solution. Personally, I've got a nagging feeling that this sort of
> "connectedness" problem is NP, so scaling could be a problem for you.
>
> -- 
>   Richard Huxton
>   Archonet Ltd

Thanks for the reply. Adding a group_id column would defeat the whole purpose of the relational model: I do not want to add a grouping beforehand. The grouping should take place according to certain criteria, in this case: group all the records that have at least one of two attributes in common. I am surprised that I haven't found any reference to such an n:m grouping while googling. All I found were descriptions of the problem one can get.

Johan.
Re: [SQL] find the "missing" rows
"Kevin B." <[EMAIL PROTECTED]> writes: > Select a.i, b.i > from t as a > left join t as b on a.i = b.i > where a.n = 'a' and b.n = 'b' and b.i is null This can't succeed since the b.n = 'b' condition is guaranteed to fail when b.* is nulled out ... regards, tom lane ---(end of broadcast)--- TIP 4: Don't 'kill -9' the postmaster
Re: [SQL] Found Large Files.. what objects are they?
On Wed, Dec 01, 2004 at 12:07:20PM -0600, Yudie wrote:
> I found bunch of large files (more than 1 gb) in one of database directory.
> The files looks like this:
> 69233123
> 69233123.1
> 69233123.2
> 69233123.3
> 69233123.4
> ...and so on.
>
> These large files very delay the dumping process.
> Anyone know what it could be & how to delete the object related?

When a file exceeds 1GB, PostgreSQL divides it into segments named N, N.1, N.2, etc.

> How to find a table by oid?

contrib/oid2name should be helpful. You could also query the system catalogs: the files' parent directory should be the database's OID, so for a file named .../12345/69233123 you could identify the database with the following query:

SELECT datname FROM pg_database WHERE oid = 12345;

Connect to that database and find out which table or index uses the files:

SELECT relname FROM pg_class WHERE relfilenode = 69233123;

Once you've done that, figure out why the files are so large: Could it simply be that the table stores a lot of data? Have you been running VACUUM? If not, and if you've made a lot of updates or deletes, then you'll have a lot of dead tuples; contrib/pgstattuple can be useful for checking on that. If that's the case, then VACUUM FULL and/or REINDEX should recover the dead space, but be aware that they'll acquire exclusive locks on the objects they're working with and they might take a long time (hours) to run. Also, after a REINDEX a table's indexes will probably be stored in different files; you can find out the new file names by querying pg_class and looking at the relfilenode field.

Another possibility would be to dump, drop, recreate, and restore the table and see if the file sizes shrink. Again, the file names will probably change, so query pg_class to see what the new ones are.

-- 
Michael Fuhr
http://www.fuhr.org/~mfuhr/
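A rough way to see, from inside the database, which relations are taking the space. Note that relpages is only an estimate that VACUUM/ANALYZE keeps up to date, and the arithmetic assumes the default 8 kB block size:

    SELECT relname, relkind, relfilenode,
           relpages * 8 / 1024 AS approx_mb
    FROM pg_class
    ORDER BY relpages DESC
    LIMIT 10;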
[SQL] invalid 'having' clause
Hi,

Just a quick question out of curiosity, I was just wondering if this is supposed to be valid SQL:

select count(*) as cnt
from sometable
group by somecolumn
having cnt > 1

This isn't valid in pg (7.4.6), but this is:

select count(*)
from sometable
group by somecolumn
having count(*) > 1

Does anyone have any idea?

I can't remember what other databases do, and I'm not so familiar with the standards, but I just thought I'd mention it anyway.

regards
Iain
Re: [SQL] grouping a many to many relation set
On Wed, Dec 01, 2004 at 06:57:54AM +0100, Johan Henselmans wrote:
> Richard Huxton wrote:
>
> > I think what's missing here is the explicit statement of which group
> > these belong in. Without a value to sort/group by, there's nothing for
> > your queries to "get a grip on".
> >
> > So - add a "group_id" column to the bank-book and receipt tables. Create
> > a sequence to generate group id's on demand.
>
> Thanks for the reply. Adding a group_id column would defeat the whole
> purpose of the relational model. I do not want to add a grouping
> beforehand.

How is an application going to know which records belong to which groups without a group ID? Or is a group ID acceptable as long as it's not part of the data, but rather generated by the query or function that does the grouping?

> The grouping should take place according to certain criteria, in
> this case: group all the records that have at least one of two
> attributes in common.

What about chains like this:

 bankbookdetid | receiptid
---------------+-----------
           100 |         1
           100 |         2
           101 |         2
           101 |         3
           102 |         3
           102 |         4

Should 1 be grouped with 2, 3, and 4, since 1 has an attribute in common with 2, 2 has an attribute in common with 3, and 3 has an attribute in common with 4? Or doesn't your model permit this situation?

-- 
Michael Fuhr
http://www.fuhr.org/~mfuhr/
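For what it's worth, one way to compute such groups on demand, without storing a group_id in the base tables, is to label every row and then repeatedly merge labels across shared attributes until nothing changes - essentially connected components by label propagation, which also handles the chain case above. A sketch, assuming the pairs live in a hypothetical table payment_receipt(bankbookdetid, receiptid):

    -- Start by labelling every row with its receiptid.
    CREATE TEMP TABLE grp AS
        SELECT bankbookdetid, receiptid, receiptid AS group_id
        FROM payment_receipt;

    -- Pull each label down to the smallest label sharing a bankbookdetid ...
    UPDATE grp SET group_id = s.min_gid
    FROM (SELECT bankbookdetid, min(group_id) AS min_gid
          FROM grp GROUP BY bankbookdetid) s
    WHERE grp.bankbookdetid = s.bankbookdetid
      AND grp.group_id > s.min_gid;

    -- ... and the smallest label sharing a receiptid.
    UPDATE grp SET group_id = s.min_gid
    FROM (SELECT receiptid, min(group_id) AS min_gid
          FROM grp GROUP BY receiptid) s
    WHERE grp.receiptid = s.receiptid
      AND grp.group_id > s.min_gid;

    -- Re-run the two UPDATEs until both report "UPDATE 0"; at that point
    -- group_id identifies the connected groups.
    SELECT group_id, bankbookdetid, receiptid FROM grp ORDER BY group_id;

The number of passes needed grows with the length of the longest chain, so as Richard suspected this does not come for free on large data sets.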
Re: [SQL] Found Large Files.. what objects are they?
"Yudie" <[EMAIL PROTECTED]> writes: > Anyone know what it could be & how to delete the object related? How to = > find a table by oid? Look at pg_class.relfilenode, not OID. Or try contrib/oid2name. regards, tom lane ---(end of broadcast)--- TIP 4: Don't 'kill -9' the postmaster
Re: [SQL] invalid 'having' clause
"Iain" <[EMAIL PROTECTED]> writes: > Just a quick question out of curiosity, I was just wondering if this is > supposed to be valid sql: > select count(*) as cnt > from sometable > group by somecolumn > having cnt > 1 No. The HAVING clause logically executes before the SELECT output list does, so it makes no sense for it to refer to the output list entries. regards, tom lane ---(end of broadcast)--- TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]
Re: [SQL] Query is slower
On Wed, Dec 01, 2004 at 05:26:59PM -0200, Sandro Joel Eller wrote:
> I have a query using the "like" operator (select * from contract where
> name like 'JOHN%'), but the table has about 500 hundred records. The
> table has an index (create index ixcontract_name on contract (name)),
> but the query is very slow because it is not using the index. How do I
> get the query to use the index?

How do you know the query doesn't use an index and that using an index would be faster? Did you run EXPLAIN ANALYZE? If so, then please post the output. It might also be useful to see the difference between a sequential scan and an index scan. Try this:

EXPLAIN ANALYZE SELECT * FROM contract WHERE name LIKE 'JOHN%';

SET enable_seqscan TO off;
EXPLAIN ANALYZE SELECT * FROM contract WHERE name LIKE 'JOHN%';

If the second query still does a sequential scan then you might have a data type mismatch. Or, if the search string begins with % or _, then the query won't be able to use an index.

-- 
Michael Fuhr
http://www.fuhr.org/~mfuhr/
Re: [SQL] SET AUTOCOMMIT TO OFF
On Wednesday 01 December 2004 18:42, Tom Lane wrote:
> Christoph Haller <[EMAIL PROTECTED]> writes:
> > Please, could someone point me to the right list
> > or tell me how to do a "SET AUTOCOMMIT TO OFF" within libpq.
>
> libpq does not have any support for that.

Does this mean libpq calls are always uncommitted or committed?