Re: [SQL] Limiting database size
>> Yeah. It's what MS SQL Server does, though. As I said, I think the whole concept of limiting database size in MB is fundamentally flawed. I mean, what's the database supposed to do when it runs out of space? Display a window that says: "Management has determined that this database shall not exceed XXX MB. If your business function requires more disk space - take it up with your manager."? Limiting database size is not a problem for developers, unless they are unable or unwilling to explain to their management that disks are a whole lot less expensive than terminating a database application that depends on them. But then, M$ never takes any responsibility for the amount of disk space it wastes. terry ---(end of broadcast)--- TIP 5: Have you checked our extensive FAQ? http://www.postgresql.org/users-lounge/docs/faq.html
[SQL] query optimization question
The query below is slow because both the lots table and the deficiency_table table have thousands of records. Can anyone tell me how to do the second subselect (lot_count) by some method of a join instead of a sub - subselect OR any other method I can use to optimize this query to make it faster? The objective of the query is: Tell me for each project, the total number of deficiencies in the project, and the total number of lots with 1 or more deficiencies in the project. SELECT project_id, marketing_name, (SELECT count(lots.lot_id) AS lot_count FROM deficiency_table AS dt, lots WHERE dt.lot_id = lots.lot_id AND lots.division_id = proj.division_id AND lots.project_id = proj.project_id ) AS def_count, (SELECT count(lots.lot_id) AS lot_counter FROM lots WHERE lots.division_id = proj.division_id AND lots.project_id = proj.project_id AND EXISTS (SELECT 1 FROM deficiency_table AS dt WHERE dt.lot_id = lots.lot_id) ) AS lot_count FROMprojects AS proj WHERE proj.division_id = '#variables.local_division_id#' AND NOT EXISTS (SELECT 1 FROM menu_group_projects WHERE menu_code = 'WA' AND division_id = proj.division_id AND project_id = proj.project_id AND status = 'I') ORDER BY proj.project_id Thanks in advance Terry Fielder Network Engineer Great Gulf Homes / Ashton Woods Homes [EMAIL PROTECTED] ---(end of broadcast)--- TIP 3: if posting/reading through Usenet, please send an appropriate subscribe-nomail command to [EMAIL PROTECTED] so that your message can get through to the mailing list cleanly
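For reference, the same report can usually be produced in a single pass with a GROUP BY instead of the two correlated subselects. The sketch below is untested and simply reuses the table and column names from the query above; COUNT(dt.lot_id) counts deficiency rows, while COUNT(DISTINCT dt.lot_id) counts the lots that have at least one deficiency:

SELECT proj.project_id,
       proj.marketing_name,
       COUNT(dt.lot_id)          AS def_count,   -- total deficiencies in the project
       COUNT(DISTINCT dt.lot_id) AS lot_count    -- lots with 1 or more deficiencies
FROM projects AS proj
     LEFT JOIN lots ON lots.division_id = proj.division_id
                   AND lots.project_id  = proj.project_id
     LEFT JOIN deficiency_table AS dt ON dt.lot_id = lots.lot_id
WHERE proj.division_id = '#variables.local_division_id#'
  AND NOT EXISTS (SELECT 1 FROM menu_group_projects
                  WHERE menu_code   = 'WA'
                    AND division_id = proj.division_id
                    AND project_id  = proj.project_id
                    AND status      = 'I')
GROUP BY proj.project_id, proj.marketing_name
ORDER BY proj.project_id;

The LEFT JOINs keep projects with no lots (or no deficiencies) in the result with counts of zero, matching what the original correlated subselects return.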
FW: [SQL] query optimization question
on_id = proj.division_id AND lots.project_id = proj.project_id AND dt.deficiency_status_id = ds.deficiency_status_id AND ds.is_outstanding AND dt.assigned_supplier_id = '101690' ) AS lot_count FROMprojects AS proj WHERE proj.division_id = 'GGH' AND NOT EXISTS (SELECT 1 FROM menu_group_projects WHERE menu_code = 'WA' AND division_id = proj.division_id AND project_id = proj.project_id AND status = 'I') ORDER BY proj.project_id If anyone can see a way to do a group by to do this, then I will be happy to hear about it, because currently the resultset has to do a separate (sequential or index) scan of the deficiencies table. The only way I can see to do a group by would be to break out the aging categories into separate queries, but that wins me nothing because each query then does its own scan... The expected simplified output of this query looks like this: Project <30 30-60 >=60lot total <30 30-60 >=60def total X 1 2 1 4 5 10 5 20 (if X had 4 lots, each of 5 deficiencies) Y 1 1 0 2 3 3 0 6 (each has eg 3 deficiencies in project Y) Terry Fielder Network Engineer Great Gulf Homes / Ashton Woods Homes [EMAIL PROTECTED] > -Original Message- > From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]] > Sent: Wednesday, November 06, 2002 4:54 AM > To: [EMAIL PROTECTED] > Subject: Re: [SQL] query optimization question > > > > Dear Terry, > When I was reading the objective of your query, I expected at > least one > GROUP BY clause within. I do not intend to be offensive - not at all, > but your query very much looks like you're lacking in basic SQL > knowledge > (did you receive any other reply?). > The clause > WHERE ... > AND division_id = proj.division_id AND project_id = > proj.project_id ... > > is leading to a JOIN of your projects table to itself. > I'm pretty sure that's the main reason why the query is slow. > As I understand your database table design, there are relations about > divisions, projects, lots, and deficiencies of lots. And you are > running a master database for all of them. > I've tried to write two queries (see below) to retrieve the > information > you want (BTW I think your first subquery counts the total number of > lots within the project but not the total number of deficiencies). > Both queries may still run slow because three tables have to be joined > (Please try them within the 'psql' interactive terminal first). > Also, they may not work at all (I could not verify them as I did not > know about your CREATE TABLE statements and did not have data to put > in). > I'm willing to help, so if it's not working this information would be > very useful to me. I am no SQL guru, so I cannot see any way to put > these > two into one. But this looks like an interesting task, maybe we should > put this topic to the list again as soon as we make the > single ones run. > > Probably, you'll need to create several indexes to speed up. 
> > -- for each project, the total number of deficiencies > SELECT p.project_id, p.marketing_name, COUNT(d.lot_id) AS def_count > FROM projects AS p, > lots AS l LEFT JOIN deficiency_table AS d > ON ( d.lot_id = l.lot_id ) > WHEREl.division_id = p.division_id > AND p.division_id = '#variables.local_division_id#' > GROUP BY p.project_id, p.marketing_name ; > > -- for each project, the total number of lots with 1 or more > deficiencies > SELECT p.project_id, p.marketing_name, COUNT(l.lot_id) AS > def_lot_count > FROM projects AS p, > lots AS l LEFT JOIN deficiency_table AS d > ON ( d.lot_id = l.lot_id ) > WHEREl.division_id = p.division_id > AND p.division_id = '#variables.local_division_id#' > GROUP BY p.project_id, p.marketing_name HAVING COUNT(d.lot_id) > 0 ; > > Once again, no offence intended, but I recommend to read a book on SQL > soon. > > Regards, Christoph > > > > > The query below is slow because both the lots table and the > deficiency_table > > table have thousands of records. Can anyone tell me how to do the > second > > subselect (lot_count) by some method of a join instead of a sub - > subselect > > OR any other method I can use to optimize this query to make it > faster? > > > > The objective of the query is: Tell me for each project, the total > number > > of deficiencies in the project, and the total number of > lots with 1 or > more > > def
Re: FW: [SQL] query optimization question
Actually, come to think of it, just the implementation of re-querying a temporary table could alone significantly improve performance, because the temp table would: a) have fewer records to scan on the subselects b) not require any joins Thanks! Terry Fielder Network Engineer Great Gulf Homes / Ashton Woods Homes [EMAIL PROTECTED] > -Original Message- > From: [EMAIL PROTECTED] > [mailto:pgsql-sql-owner@;postgresql.org]On Behalf Of Stephan Szabo > Sent: Wednesday, November 06, 2002 11:22 AM > To: [EMAIL PROTECTED] > Cc: Postgresql Sql Group (E-mail) > Subject: Re: FW: [SQL] query optimization question > > > On Wed, 6 Nov 2002 [EMAIL PROTECTED] wrote: > > > However, for the total deficiencies I am then splitting up > the total into > > aging groups, eg <30, 30-60, 60-90, and >90 days old. The > query for that > > looks like the below. But before I paste it in, I would > like to optimize > > it, if I could do so with a group by clause I most > certainly would, but I > > don't see how I can BECAUSE OF THE AGING BREAKDOWN: > > Well, as a first step, I'd suggest using an age function as already > suggested and a temporary table to hold the grouped by values > temporarily > and then doing the subselects against that. > > Maybe something like (untested): > create temp table defs as > select agefunc(dt.days_old_start_date) as ageval, > count(lots.lot_id) as lots from > deficiency_table as dt, lots, deficiency_status as ds > where dt.lot_id = lots.lot_id > and lots.dividion_id=proj.division_id > and lots.project_id=proj.project_id > and dt.deficiency_status_id=ds.deficiency_status_id > and ts.is_outstanding > and dt.assigned_supplier_id='101690' > group by ageval; > > -- same general thing for other repeated queries > > select project_id, marketing_name, > (select sum(lots) from defs) as def_count, > (select lots from defs where ageval=0) as def_count_less_30, > (select lots from defs where ageval=1) as def_count_30_60, > ... > > Since you want 0's instead of nulls, you'd probably need to do > a coalesce for the subselects, and this will go through the > probably 5 or so line temp table rather than the presumably large > other table. > > I haven't spent much thought trying to force it down into a > single query, but that seems a bit harder. > > > ---(end of > broadcast)--- > TIP 6: Have you searched our list archives? > > http://archives.postgresql.org > ---(end of broadcast)--- TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]
Re: [SQL] query optimization question
That looks really promising as a possibility, however I think you intended to add a group by clause. Terry Fielder Network Engineer Great Gulf Homes / Ashton Woods Homes [EMAIL PROTECTED] > -Original Message- > From: Masaru Sugawara [mailto:rk73@;sea.plala.or.jp] > Sent: Wednesday, November 06, 2002 11:44 AM > To: [EMAIL PROTECTED] > Cc: [EMAIL PROTECTED] > Subject: Re: [SQL] query optimization question > > > On Wed, 6 Nov 2002 09:01:49 -0500 > <[EMAIL PROTECTED]> wrote: > > > If anyone can see a way to do a group by to do this, then I > will be happy to > > hear about it, because currently the resultset has to do a separate > > (sequential or index) scan of the deficiencies table. The > only way I can > > see to do a group by would be to break out the aging categories into > > separate queries, but that wins me nothing because each > query then does its > > own scan... > > > > The expected simplified output of this query looks like this: > > Project <30 30-60 >=60lot total <30 > 30-60 >=60def total > > X 1 2 1 4 5 10 5 > 20(if X had 4 lots, each of 5 deficiencies) > > Y 1 1 0 2 3 3 0 > 6 (each has eg 3 deficiencies in project Y) > > > > > The following query may be one of the ways, but I cannot > confirm whether > it goes well or not. > > > SELECT > project_id, > marketing_name, > COUNT(lots.lot_id) AS def_count, > COUNT(CASE WHEN dt.days_old_start_date < {d '2002-10-07'} > THEN lots.lot_id ELSE NULL END) AS > def_count_less_30, > COUNT(CASE WHEN dt.days_old_start_date >= {d '2002-10-07'} > AND dt.days_old_start_date < {d > '2002-09-07'} > THEN lots.lot_id ELSE NULL END) AS > def_count_30_60, > COUNT(CASE WHEN dt.days_old_start_date >= {d '2002-09-07'} > AND dt.days_old_start_date < {d > '2002-08-08'} > THEN lots.lot_id ELSE NULL END) AS > def_count_60_90, > COUNT(CASE WHEN dt.days_old_start_date >= {d '2002-08-08'} > THEN lots.lot_id ELSE NULL END) AS > def_count_greater_90, > COUNT(DISTINCT(CASE WHEN >dt.days_old_start_date < > {d '2002-10-07'} > THEN lots.lot_id ELSE NULL END )) > AS lot_count_less_30, >COUNT(DISTINCT(CASE WHEN >dt.days_old_start_date >= > {d '2002-10-07'} >AND dt.days_old_start_date < {d > '2002-09-07'} > THEN lots.lot_id ELSE NULL END )) > AS lot_count_30_60, >COUNT(DISTINCT(CASE WHEN >dt.days_old_start_date >= > {d '2002-09-07'} >AND dt.days_old_start_date < {d > '2002-08-08'} > THEN lots.lot_id ELSE NULL END )) > AS lot_count_60_90, >COUNT(DISTINCT(CASE WHEN > dt.days_old_start_date >= > {d '2002-08-08'} > THEN lots.lot_id ELSE NULL END )) > AS lot_count_greater_90, >COUNT(DISTINCT lots.lot_id) AS lot_count > FROM >(SELECT * FROM deficiency_table >WHERE assigned_supplier_id = '101690') AS dt, >(SELECT * FROM deficiency_status, >WHERE ds.is_outstanding) AS ds, >(SELECT * FROM projects >WHERE proj.division_id = 'GGH') AS proj >lots > WHERE >dt.lot_id = lots.lot_id >AND lots.division_id = proj.division_id >AND lots.project_id = proj.project_id >AND dt.deficiency_status_id = ds.deficiency_status_id >AND NOT EXISTS > (SELECT 1 FROM menu_group_projects > WHERE menu_code = 'WA' > AND division_id = proj.division_id > AND project_id = proj.project_id > AND status = 'I') > ORDER BY proj.project_id > > > > > Regards, > Masaru Sugawara > ---(end of broadcast)--- TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]
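One detail worth flagging in the sketch above: the {d '...'} literals hard-code the report date, and in the 30-60 and 60-90 buckets the two bounds appear inverted (a date cannot be both >= '2002-10-07' and < '2002-09-07'). If days_old_start_date is the date the deficiency was opened, the buckets can be written against the current date instead. These untested expressions would drop into the SELECT list in place of the corresponding ones above:

COUNT(CASE WHEN dt.days_old_start_date >= CURRENT_DATE - 30
           THEN lots.lot_id ELSE NULL END) AS def_count_less_30,
COUNT(CASE WHEN dt.days_old_start_date <  CURRENT_DATE - 30
            AND dt.days_old_start_date >= CURRENT_DATE - 60
           THEN lots.lot_id ELSE NULL END) AS def_count_30_60,
COUNT(CASE WHEN dt.days_old_start_date <  CURRENT_DATE - 60
           THEN lots.lot_id ELSE NULL END) AS def_count_greater_60,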
Re: [SQL] query optimization question
Actually, the ORDER BY *must* be replaced by GROUP BY since it is an aggregate query. I have implemented it, and the results are startling, I get the same value repeated for all projects i.e.: AS1 AS1-AJAX/SALEM SIDE 3 0 6 7 30 0 216 240 AU3 AU3-RIVERIDGE/AURORA 3 0 6 7 30 0 216 240 AV1 AVALON 3 0 6 7 30 0 216 240 AW1 AW1-AJAX/WESTNEY SIDE 3 0 6 7 30 0 216 240 AWM AW MORTGAGE 3 0 6 7 30 0 216 240 AX1 AX1-ROSE PETAL VALLEY DEV INC 3 0 6 7 30 0 216 240 And this appears to be the correct data row for a row that is the first (alphabetically) project that has non zero data in it. This is the final query, can anyone see anything wrong with it?: SELECT projects.project_id, projects.marketing_name, COUNT(lots.lot_id) AS def_count, COUNT(CASE WHEN dt.days_old_start_date < {d '2002-10-08'} THEN lots.lot_id ELSE NULL END ) AS def_count_less_30, COUNT(CASE WHEN dt.days_old_start_date >= {d '2002-10-08'} AND dt.days_old_start_date < {d '2002-09-08'} THEN lots.lot_id ELSE NULL END ) AS def_count_30_60, COUNT(CASE WHEN dt.days_old_start_date >= {d '2002-09-08'} THEN lots.lot_id ELSE NULL END ) AS def_count_greater_60, COUNT(DISTINCT(CASE WHEN dt.days_old_start_date < {d '2002-10-08'} AND dt.deficiency_status_id = ds.deficiency_status_id THEN lots.lot_id ELSE NULL END) ) AS lot_count_less_30, COUNT(DISTINCT(CASE WHEN dt.days_old_start_date >= {d '2002-10-08'} AND dt.days_old_start_date < {d '2002-09-08'} THEN lots.lot_id ELSE NULL END) ) AS lot_count_30_60, COUNT(DISTINCT(CASE WHEN dt.days_old_start_date >= {d '2002-09-08'} THEN lots.lot_id ELSE NULL END) ) AS lot_count_greater_60, COUNT(DISTINCT lots.lot_id) AS lot_count FROM (SELECT * FROM deficiency_table) AS dt, (SELECT * FROM deficiency_status WHERE is_outstanding) AS ds, (SELECT * FROM projects WHERE division_id = 'GGH') AS proj, (SELECT * FROM lots) AS lots WHERE proj.division_id = 'GGH' AND lots.division_id = proj.division_id AND lots.project_id = proj.project_id AND dt.lot_id = lots.lot_id AND dt.deficiency_status_id = ds.deficiency_status_id AND ds.is_outstanding AND lots.project_id = 'EM16' AND NOT EXISTS (SELECT 1 FROM menu_group_projects WHERE menu_code = 'WA' AND division_id = proj.division_id AND project_id = proj.project_id AND status = 'I') GROUP BY projects.project_id, projects.marketing_name Terry Fielder Network Engineer Great Gulf Homes / Ashton Woods Homes [EMAIL PROTECTED] > -Original Message- > From: [EMAIL PROTECTED] > [mailto:[EMAIL PROTECTED]]On Behalf Of Christoph Haller > Sent: Thursday, November 07, 2002 3:57 AM > To: [EMAIL PROTECTED] > Cc: [EMAIL PROTECTED] > Subject: Re: [SQL] query optimization question > > > > SELECT > > project_id, > > marketing_name, > > COUNT(lots.lot_id) AS def_count, > > COUNT(CASE WHEN dt.days_old_start_date < {d '2002-10-07'} > > THEN lots.lot_id ELSE NULL END) AS > def_count_less_30, > > COUNT(CASE WHEN dt.days_old_start_date >= {d '2002-10-07'} > > AND dt.days_old_start_date < {d '2002-09-07'} > > THEN lots.lot_id ELSE NULL END) AS > def_count_30_60, > > COUNT(CASE WHEN dt.days_old_start_date >= {d '2002-09-07'} > > AND dt.days_old_start_date < {d '2002-08-08'} > > THEN lots.lot_id ELSE NULL END) AS > def_count_60_90, > > COUNT(CASE WHEN dt.days_old_start_date >= {d '2002-08-08'} > > THEN lots.lot_id ELSE NULL END) AS > def_count_greater_90, > > COUNT(DISTINCT(CASE WHEN > > dt.days_old_start_date < {d '2002-10-07'} > > THEN lots.lot_id ELSE NULL END )) AS > lot_count_less_30, > > COUNT(DISTINCT(CASE WHEN > > dt.days_old_start_date >= {d '2002-10-07'} > > AND dt.days_old_start_date < {d 
'2002-09-07
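The identical row repeated for every project in the message above is most likely the implicit FROM-clause problem: the SELECT list and GROUP BY refer to projects.project_id, but the FROM list only defines the alias proj, so the server silently adds projects as an extra, unconstrained table, and every projects row receives the same aggregate (which the stray lots.project_id = 'EM16' filter pins to a single project). A minimal, untested correction keeps everything grouped on the proj alias and drops the hard-coded project filter:

SELECT proj.project_id,
       proj.marketing_name,
       COUNT(lots.lot_id) AS def_count,
       -- the CASE/COUNT aging bucket columns from the query above go here unchanged
       COUNT(DISTINCT lots.lot_id) AS lot_count
FROM (SELECT * FROM projects WHERE division_id = 'GGH') AS proj,
     (SELECT * FROM deficiency_status WHERE is_outstanding) AS ds,
     deficiency_table AS dt,
     lots
WHERE lots.division_id = proj.division_id
  AND lots.project_id  = proj.project_id
  AND dt.lot_id        = lots.lot_id
  AND dt.deficiency_status_id = ds.deficiency_status_id
  AND NOT EXISTS (SELECT 1 FROM menu_group_projects
                  WHERE menu_code   = 'WA'
                    AND division_id = proj.division_id
                    AND project_id  = proj.project_id
                    AND status      = 'I')
GROUP BY proj.project_id, proj.marketing_name;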
Re: [SQL] primary keys
A table can only have ONE primary key. It can have additional indexes with a UNIQUE restriction, thereby forcing the second field to be as good as a primary key. That is probably what you are looking for, however, you can also in PG Admin II select multiple fields as being the member of the primary key. Terry Fielder Network Engineer Great Gulf Homes / Ashton Woods Homes [EMAIL PROTECTED] > -Original Message- > From: [EMAIL PROTECTED] > [mailto:pgsql-sql-owner@;postgresql.org]On Behalf Of Huub > Sent: Wednesday, November 06, 2002 9:19 AM > To: [EMAIL PROTECTED] > Subject: [SQL] primary keys > > > Hi, > > I want to create a table which has 2 columns, and both > columns have to > be primary key (or: together they are the primary key). How can I do > this using SQL? Using pgAdminII for Postgres7.2.2 on RH8. > > Thanks > > Huub > > > ---(end of > broadcast)--- > TIP 4: Don't 'kill -9' the postmaster > ---(end of broadcast)--- TIP 3: if posting/reading through Usenet, please send an appropriate subscribe-nomail command to [EMAIL PROTECTED] so that your message can get through to the mailing list cleanly
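Since the original question asked how to do it in SQL: a two-column primary key is declared as a table-level constraint. A minimal example (the table and column names here are invented purely for illustration):

CREATE TABLE pair_example (
    col_a integer NOT NULL,
    col_b integer NOT NULL,
    PRIMARY KEY (col_a, col_b)  -- the two columns together form the primary key
);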
Re: [SQL] NOT and AND problem
DELETE FROM myTable WHERE NOT (SELECT 1 FROM item WHERE myTable.item_id = item.item_id) AND NOT (SELECT 1 FROM ep WHERE myTable.group_id = ep.group_id); Terry Fielder Manager Software Development and Deployment Great Gulf Homes / Ashton Woods Homes [EMAIL PROTECTED] Fax: (416) 441-9085 > -Original Message- > From: [EMAIL PROTECTED] > [mailto:[EMAIL PROTECTED] Behalf Of Richard Jones > Sent: Thursday, July 17, 2003 10:29 AM > To: [EMAIL PROTECTED] > Subject: [SQL] NOT and AND problem > > > Dear All, > > I am having some confusion over a query which is supposed to > achieve the > following: To remove a record from a table if the one or both of the > columns containing references to other tables no longer point > to table rows > which still exist. There are good reasons why I cannot use > foreign keys to > maintain referential integrity, but I will not go into them, > but they give > rise to the need to "clean-up" my database table from time to > time. The > query that I have had most success with looks like this: > > DELETE FROM myTable > WHERE (NOT myTable.item_id = item.item_id) > AND (NOT myTable.group_id = ep.group_id); > > Which is odd, because logically it shouldn't work. What I > find with the > above queries is that as follows: > > let myTable.item_id = item.item_id be A > let myTable.group_id = ep.group_id be B > > The derived and actual truth tables for the results of the > where clause > follow: > > Derived: > A | B | Result > 1 | 1 | 0 > 1 | 0 | 0 > 0 | 1 | 0 > 0 | 0 | 1 > > Actual: > A | B | Result > 1 | 1 | 0 > 1 | 0 | 0 > 0 | 1 | 1 > 0 | 0 | 1 > > This makes no sense to me, as effectively rows 2 and 3 of the > Actual results > truth table are the same (unless there's some subtle > difference with regards > to the order of the statements, otherwise just substitute A > for B and vice > versa). > > The result that I actually want from the operation is this: > > A | B | Result > 1 | 1 | 0 > 1 | 0 | 1 > 0 | 1 | 1 > 0 | 0 | 1 > > which would suggest a query like: > > DELETE FROM myTable > WHERE NOT (myTable.item_id = item.item_id AND myTable.group_id = > ep.group_id); > > which ought to provide the above output. Instead, using this > query, the > output I get is as follows: > > A | B | Result > 1 | 1 | 1 > 1 | 0 | 1 > 0 | 1 | 1 > 0 | 0 | 1 > > I can only conclude that Postgres is doing something with > regards to the > other two tables which I am unaware of. Can anyone help me > understand what > is going on? Any suggestions gratefully received. > > Cheers > > Richard > > > Richard Jones > --- > Systems Developer > Theses Alive! - www.thesesalive.ac.uk > Edinburgh University Library > [EMAIL PROTECTED] > 0131 651 1611 > > > ---(end of > broadcast)--- > TIP 2: you can get off all lists at once with the unregister command > (send "unregister YourEmailAddressHere" to > [EMAIL PROTECTED]) > ---(end of broadcast)--- TIP 7: don't forget to increase your free space map settings
Re: [SQL] NOT and AND problem
And after reading Viorel's response I realized that you wanted the record deleted if EITHER "foreign key" was broken, not just if BOTH fkeys are broken, therefore simply change the AND to an OR: DELETE FROM myTable WHERE NOT (SELECT 1 FROM item WHERE myTable.item_id = item.item_id) OR NOT (SELECT 1 FROM ep WHERE myTable.group_id = ep.group_id); Terry Fielder Manager Software Development and Deployment Great Gulf Homes / Ashton Woods Homes [EMAIL PROTECTED] Fax: (416) 441-9085 > -Original Message- > From: [EMAIL PROTECTED] > [mailto:[EMAIL PROTECTED] Behalf Of > [EMAIL PROTECTED] > Sent: Thursday, July 17, 2003 10:29 AM > To: 'Richard Jones'; [EMAIL PROTECTED] > Subject: Re: [SQL] NOT and AND problem > > > DELETE FROM myTable > WHERE NOT (SELECT 1 FROM item WHERE myTable.item_id = item.item_id) > AND NOT (SELECT 1 FROM ep WHERE myTable.group_id = ep.group_id); > > Terry Fielder > Manager Software Development and Deployment > Great Gulf Homes / Ashton Woods Homes > [EMAIL PROTECTED] > Fax: (416) 441-9085 > > > > -Original Message- > > From: [EMAIL PROTECTED] > > [mailto:[EMAIL PROTECTED] Behalf Of Richard Jones > > Sent: Thursday, July 17, 2003 10:29 AM > > To: [EMAIL PROTECTED] > > Subject: [SQL] NOT and AND problem > > > > > > Dear All, > > > > I am having some confusion over a query which is supposed to > > achieve the > > following: To remove a record from a table if the one or > both of the > > columns containing references to other tables no longer point > > to table rows > > which still exist. There are good reasons why I cannot use > > foreign keys to > > maintain referential integrity, but I will not go into them, > > but they give > > rise to the need to "clean-up" my database table from time to > > time. The > > query that I have had most success with looks like this: > > > > DELETE FROM myTable > > WHERE (NOT myTable.item_id = item.item_id) > > AND (NOT myTable.group_id = ep.group_id); > > > > Which is odd, because logically it shouldn't work. What I > > find with the > > above queries is that as follows: > > > > let myTable.item_id = item.item_id be A > > let myTable.group_id = ep.group_id be B > > > > The derived and actual truth tables for the results of the > > where clause > > follow: > > > > Derived: > > A | B | Result > > 1 | 1 | 0 > > 1 | 0 | 0 > > 0 | 1 | 0 > > 0 | 0 | 1 > > > > Actual: > > A | B | Result > > 1 | 1 | 0 > > 1 | 0 | 0 > > 0 | 1 | 1 > > 0 | 0 | 1 > > > > This makes no sense to me, as effectively rows 2 and 3 of the > > Actual results > > truth table are the same (unless there's some subtle > > difference with regards > > to the order of the statements, otherwise just substitute A > > for B and vice > > versa). > > > > The result that I actually want from the operation is this: > > > > A | B | Result > > 1 | 1 | 0 > > 1 | 0 | 1 > > 0 | 1 | 1 > > 0 | 0 | 1 > > > > which would suggest a query like: > > > > DELETE FROM myTable > > WHERE NOT (myTable.item_id = item.item_id AND myTable.group_id = > > ep.group_id); > > > > which ought to provide the above output. Instead, using this > > query, the > > output I get is as follows: > > > > A | B | Result > > 1 | 1 | 1 > > 1 | 0 | 1 > > 0 | 1 | 1 > > 0 | 0 | 1 > > > > I can only conclude that Postgres is doing something with > > regards to the > > other two tables which I am unaware of. Can anyone help me > > understand what > > is going on? Any suggestions gratefully received. > > > > Cheers > > > > Richard > > > > > > Richard Jones > > --- > > Systems Developer > > Theses Alive! 
- www.thesesalive.ac.uk > > Edinburgh University Library > > [EMAIL PROTECTED] > > 0131 651 1611 > > > > > > ---(end of > > broadcast)--- > > TIP 2: you can get off all lists at once with the unregister command > > (send "unregister YourEmailAddressHere" to > > [EMAIL PROTECTED]) > > > > ---(end of > broadcast)--- > TIP 7: don't forget to increase your free space map settings > ---(end of broadcast)--- TIP 8: explain analyze is your friend
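A note for the archive: the bare subselects in the suggested DELETE are not valid arguments to NOT, which expects a boolean; they presumably need EXISTS. The cleanup would then look something like the untested statement below, which also avoids the implicit joins to item and ep that made the original truth tables behave strangely:

DELETE FROM myTable
WHERE NOT EXISTS (SELECT 1 FROM item WHERE item.item_id = myTable.item_id)
   OR NOT EXISTS (SELECT 1 FROM ep   WHERE ep.group_id  = myTable.group_id);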
Re: [SQL] OR vs UNION
Actually, I have used a UNION to replace OR's, the case (simpliefied to) something like this: Sample 1: WHERE (f1 = 'v1' OR f1 = '') AND (f2 = 'v2' OR f2 = '') Changed to Sample 2: WHERE (f1 = 'v1') AND (f2 = 'v2') UNION WHERE (f1 = 'v1') AND (f2 = '') UNION WHERE (f1 = '') AND (f2 = '') Note that Sample 1 is actually a simplified version, the queries are not exactly equivalent. The point is that sample 2 ran MUCH faster because: a) The table was *very* large b) The OR clauses of sample 1 prevented the use of an INDEX, Reason: It is faster to scan an index 3 times then scan this very large table once. I do not know if there is a proof to say that one can *always* replace OR's with a union, but sometimes certainly, and in this case it made things much better... Terry Fielder Manager Software Development and Deployment Great Gulf Homes / Ashton Woods Homes [EMAIL PROTECTED] Fax: (416) 441-9085 > -Original Message- > From: [EMAIL PROTECTED] > [mailto:[EMAIL PROTECTED] Behalf Of Josh Berkus > Sent: Thursday, July 17, 2003 3:00 PM > To: Scott Cain; [EMAIL PROTECTED] > Subject: Re: [SQL] OR vs UNION > > > Scott, > > > I have a query that uses a series of ORs and I have heard > that sometimes > > this type of query can be rewritten to use UNION instead and be more > > efficient. > > I'd be interested to know where you heard that; as far as I > know, it could > only apply to conditional left outer joins. > > > select distinct > f.name,fl.fmin,fl.fmax,fl.strand,f.type_id,f.feature_id > > from feature f, featureloc fl > > where > >(f.type_id = 219 OR > > f.type_id = 368 OR > > f.type_id = 514 OR > > f.type_id = 475 OR > > f.type_id = 426 OR > > f.type_id = 456 OR > > f.type_id = 461 OR > > f.type_id = 553 OR > > f.type_id = 89) and > > fl.srcfeature_id = 1 and > > f.feature_id = fl.feature_id and > > fl.fmin <= 2491413 and fl.fmax >= 2485521 > > Certainly a query of the above form would not benefit from > being a union. > > For readability, you could use an IN() statement rather than > a bunch of ORs > ... this would not help performance, but would make your > query easier to > type/read. > > -- > -Josh Berkus > Aglio Database Solutions > San Francisco > > > ---(end of > broadcast)--- > TIP 6: Have you searched our list archives? > >http://archives.postgresql.org > ---(end of broadcast)--- TIP 6: Have you searched our list archives? http://archives.postgresql.org
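Spelled out in full (the WHERE fragments above are shorthand, and as noted the two samples are not exactly equivalent), the UNION rewrite looks roughly like the following, with a hypothetical table name. Each branch can use an index on (f1, f2), and UNION also removes duplicate rows, which UNION ALL would keep:

SELECT * FROM some_table WHERE f1 = 'v1' AND f2 = 'v2'
UNION
SELECT * FROM some_table WHERE f1 = 'v1' AND f2 = ''
UNION
SELECT * FROM some_table WHERE f1 = ''   AND f2 = '';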
Re: [SQL] Select and functions
You need to use 2 single quotes: CREATE OR REPLACE FUNCTION staging.write_work_country() RETURNS trigger AS ' DECLARE alias_rec RECORD; BEGIN -- Clean the phone number NEW.worktelephonenumber = REPLACE(NEW.worktelephonenumber,'' '',''''); RETURN NEW; END; ' Terry Fielder Manager Software Development and Deployment Great Gulf Homes / Ashton Woods Homes [EMAIL PROTECTED] Fax: (416) 441-9085 > -Original Message- > From: [EMAIL PROTECTED] > [mailto:[EMAIL PROTECTED] Behalf Of Stephen > J. Thompson > Sent: Wednesday, November 05, 2003 9:26 AM > To: [EMAIL PROTECTED] > Subject: [SQL] Select and functions > > > -BEGIN PGP SIGNED MESSAGE- > Hash: SHA1 > > Hello all, > > I am having a little bit of trouble here. > > If I use the replace function in a select statement all works > as expected. > > select firstname, lastname, replace(worktelephonenumber, ' ', > '') AS worktel > from contacts; > > But if I use it in a before trigger the procedure fails to work. > > CREATE OR REPLACE FUNCTION staging.write_work_country() > RETURNS trigger AS > ' > DECLARE > alias_rec RECORD; > BEGIN > -- Clean the phone number > NEW.worktelephonenumber = > REPLACE(NEW.worktelephonenumber,' ',''); > RETURN NEW; > END; > ' > > I get the following error: > > ERROR: parser: parse error at or near "',''); > > Can anyone help me please? > > Thanks. > > Stephen. > -BEGIN PGP SIGNATURE- > Version: GnuPG v1.2.3 (GNU/Linux) > > iD8DBQE/qQiKDAwPLUWQb6sRAhNpAJ0QdVzmYdhAhFVXf06v5zACuS3y9wCfSSlS > W18AFuYb4J1le0W32uuIJto= > =xe/b > -END PGP SIGNATURE- > > > ---(end of > broadcast)--- > TIP 3: if posting/reading through Usenet, please send an appropriate > subscribe-nomail command to [EMAIL PROTECTED] so > that your > message can get through to the mailing list cleanly > ---(end of broadcast)--- TIP 7: don't forget to increase your free space map settings
Re: [SQL] How to quote date value?
If you want an explicit date, then cast it like this: SELECT '1/11/2003'::date AS "InvoiceDate"; Terry Fielder Manager Software Development and Deployment Great Gulf Homes / Ashton Woods Homes [EMAIL PROTECTED] Fax: (416) 441-9085 > -Original Message- > From: [EMAIL PROTECTED] > [mailto:[EMAIL PROTECTED] Behalf Of Stephan Szabo > Sent: Friday, November 21, 2003 2:04 PM > To: [EMAIL PROTECTED] > Subject: Re: [SQL] How to quote date value? > > > > On Fri, 21 Nov 2003, Gaetano Mendola wrote: > > > nobody wrote: > > > I have found it in documentation, it is single quote. But > it does not > > > explain why > > > > > > SELECT '1/11/2003' AS "InvoiceDate"; > > > > > > returns "unknown" data type instead of "date". > > (I haven't seen the original message yet, so I'm replying to a reply) > Date literals are generally written as: > DATE '1/11/2003' > > PostgreSQL will try to guess what type you meant with quoted > strings in > expressions, but in the above there isn't enough context to > do guess that > you meant a date really (it should probably actually be > thought of as a > string in such cases). > > > ---(end of > broadcast)--- > TIP 7: don't forget to increase your free space map settings > ---(end of broadcast)--- TIP 7: don't forget to increase your free space map settings
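Worth adding that '1/11/2003' is ambiguous (January 11 or November 1, depending on the DateStyle setting); an ISO-format literal sidesteps the guesswork:

SELECT DATE '2003-11-01' AS "InvoiceDate";
-- or, equivalently
SELECT '2003-11-01'::date AS "InvoiceDate";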
[SQL] Return relation table data in a single value CSV
I should probably be punished for even asking this question, but a simplified version of what I want is this... I have 2 tables: floorplans floorplan_id | description -- 2240 | test floorplan and a table elevations floorplan_id | elevation 2240 | A 2240 | B 2240 | C I want to perform a query that returns this result set: baseplan_id | elevations 2240| A,B,C The real query is, of course, *much* more complex then that, as there are many more fields in floorplans, and floorplans is joined to other tables. Currently I return my floorplan information, then perform a second query to get the elevation records, and loop over the second query to compile the comma separated list of elevations. I have tried subselects concatenated with basically || ',' || where each subselect does an OFFSET X LIMIT 1, and the ',' is wrapped with a case statement to hide the comma if there are no further elevations. It gets very messy very fast as and further I end up hard coding the max number of elevations. Any ideas? Terry Fielder Manager Software Development and Deployment Great Gulf Homes / Ashton Woods Homes [EMAIL PROTECTED] Fax: (416) 441-9085 ---(end of broadcast)--- TIP 6: Have you searched our list archives? http://archives.postgresql.org
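One way to get the comma-separated list in a single query is a small user-defined aggregate. The sketch below is untested, assumes the elevation column is (or can be cast to) text and never NULL, and makes no guarantee about the order of the values inside the list:

-- state function: append the next value, inserting a comma once the list is non-empty
CREATE FUNCTION comma_cat(text, text) RETURNS text AS '
    SELECT CASE WHEN $1 = '''' THEN $2 ELSE $1 || '','' || $2 END;
' LANGUAGE 'sql';

CREATE AGGREGATE comma_list (
    BASETYPE = text,
    SFUNC    = comma_cat,
    STYPE    = text,
    INITCOND = ''
);

SELECT f.floorplan_id, f.description, comma_list(e.elevation) AS elevations
FROM floorplans AS f, elevations AS e
WHERE e.floorplan_id = f.floorplan_id
GROUP BY f.floorplan_id, f.description;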
Re: [SQL] CHAR(n) always trims trailing spaces in 7.4
> select 'x'||' '||'x' > > should produce xx, but it produces x x. > INCORRECT This select 'x'||' '::char ||'x' Should produce xx This select 'x'||' '||'x' is restateable as select 'x'|| ' '::text ||'x' And the || operand for text is not dropping the extra spaces hence correctly x x Terry Fielder Manager Software Development and Deployment Great Gulf Homes / Ashton Woods Homes [EMAIL PROTECTED] Fax: (416) 441-9085 > -Original Message- > From: [EMAIL PROTECTED] > [mailto:[EMAIL PROTECTED] Behalf Of scott.marlowe > Sent: Tuesday, February 17, 2004 7:07 PM > To: Tom Lane > Cc: elein; news.postgresql.org; [EMAIL PROTECTED] > Subject: Re: [SQL] CHAR(n) always trims trailing spaces in 7.4 > > > On Tue, 17 Feb 2004, Tom Lane wrote: > > > elein <[EMAIL PROTECTED]> writes: > > > This is an example of the problem. It used to expand > > > the middle thing to 15. > > > > > elein=# select 'x' || ' '::char(15) || 'x'; > > > ?column? > > > -- > > > xx > > > (1 row) > > > > Still does, but then the spaces go away again when the > value goes into > > the concatenation, because concatenation is a text operator. > > But then this: > > select 'x'||' '||'x' > > should produce xx, but it produces x x. > > > ---(end of > broadcast)--- > TIP 7: don't forget to increase your free space map settings > ---(end of broadcast)--- TIP 2: you can get off all lists at once with the unregister command (send "unregister YourEmailAddressHere" to [EMAIL PROTECTED])
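The distinction is easier to see with the casts written out explicitly; in 7.4 it is the char-to-text cast that strips the trailing blanks before the concatenation happens:

SELECT 'x' || ' '::char(15) || 'x';  -- 'xx'  : the char value loses its trailing blanks when cast to text
SELECT 'x' || ' '::text     || 'x';  -- 'x x' : a plain text space is preserved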
[SQL] Trying to make efficient "all vendors who can provide all items"
My mind is drawing a blank. Please consider: TABLE 1: items: list of items in catalog item_id | item_description TABLE 2: vendors: list of vendors who provide 1 or more items vendor_id | vendor_name TABLE 3: item_vendors: record existence indicates vendor can provide item item_id | vendor_id QUESTION: I have a list of say 5 items, and I want to find all vendors who can provide ALL 5 items Solution 1: SELECT vendor_id FROM vendors WHERE EXISTS (SELECT 1 FROM item_vendors WHERE vendors.vendor_id = item_vendors.vendor_id AND item_id = 'item_1') AND EXISTS (SELECT 1 FROM item_vendors WHERE vendors.vendor_id = item_vendors.vendor_id AND item_id = 'item_2') AND EXISTS (SELECT 1 FROM item_vendors WHERE vendors.vendor_id = item_vendors.vendor_id AND item_id = 'item_3') AND EXISTS (SELECT 1 FROM item_vendors WHERE vendors.vendor_id = item_vendors.vendor_id AND item_id = 'item_4') AND EXISTS (SELECT 1 FROM item_vendors WHERE vendors.vendor_id = item_vendors.vendor_id AND item_id = 'item_5') Solution 2: SELECT vendors.vendor_id FROM vendors, items AS item_1, items AS item_2, items AS item_3, items AS item_4, items AS item_5 WHERE items_1.vendor_id = vendors.vendor_id AND items_1.item_id = 'item_1' AND items_2.vendor_id = vendors.vendor_id AND items_2.item_id = 'item_2' AND items_3.vendor_id = vendors.vendor_id AND items_2.item_id = 'item_3' AND items_4.vendor_id = vendors.vendor_id AND items_2.item_id = 'item_4' AND items_5.vendor_id = vendors.vendor_id AND items_2.item_id = 'item_5' Yep, both my solutions are pretty ugly, especially in situations where my list of items that need to be provided grow large. There must be a better way. Can anyone help me with this? Thanks Terry Fielder Manager Software Development and Deployment Great Gulf Homes / Ashton Woods Homes [EMAIL PROTECTED] Fax: (416) 441-9085 ---(end of broadcast)--- TIP 9: the planner will ignore your desire to choose an index scan if your joining column's datatypes do not match
Re: [SQL] Trying to make efficient "all vendors who can provide all items"
Thanks for your response. If I understand your proposal, it is a way of getting vendors who can provide me with all the items in the items table. But the situation I have is items table could have 100k items, and I want all vendors who can provide a specific list of say 20 items. Do I misunderstand your query? Terry Fielder Manager Software Development and Deployment Great Gulf Homes / Ashton Woods Homes [EMAIL PROTECTED] Fax: (416) 441-9085 > -Original Message- > From: Jeremy Semeiks [mailto:[EMAIL PROTECTED] > Sent: Monday, March 08, 2004 2:07 PM > To: [EMAIL PROTECTED] > Cc: [EMAIL PROTECTED] > Subject: Re: [SQL] Trying to make efficient "all vendors who > can provide > all items" > > > On Mon, Mar 08, 2004 at 11:02:13AM -0500, > [EMAIL PROTECTED] wrote: > > My mind is drawing a blank. Please consider: > > TABLE 1: items: list of items in catalog > > item_id | item_description > > > > TABLE 2: vendors: list of vendors who provide 1 or more items > > vendor_id | vendor_name > > > > TABLE 3: item_vendors: record existence indicates vendor > can provide item > > item_id | vendor_id > > > > > > QUESTION: > > I have a list of say 5 items, and I want to find all > vendors who can provide > > ALL 5 items > [...] > > Yep, both my solutions are pretty ugly, especially in > situations where my > > list of items that need to be provided grow large. > > > > There must be a better way. Can anyone help me with this? > > You could use some subselects: > > select vendor_id from > (select vendor_id, count(*) as ct from item_vendors group by > vendor_id) vict > where ct = (select count(*) from items); > > I haven't tested this. > > - Jeremy > ---(end of broadcast)--- TIP 8: explain analyze is your friend
Re: [SQL] Trying to make efficient "all vendors who can provide all items"
That's pretty nifty code. It certainly looks nicer, and looks like it would work providing vendor_id&item_id is the pk of item_vendors (and it is). I will let you know if it runs any faster... Thanks Terry Fielder Manager Software Development and Deployment Great Gulf Homes / Ashton Woods Homes [EMAIL PROTECTED] Fax: (416) 441-9085 > -Original Message- > From: Matt Chatterley [mailto:[EMAIL PROTECTED] > Sent: Monday, March 08, 2004 3:41 PM > To: [EMAIL PROTECTED] > Cc: [EMAIL PROTECTED] > Subject: RE: [SQL] Trying to make efficient "all vendors who > can provide > all items" > > > Hmm. My PGSQL knowledge is rusty, so this may be slightly > microsoftified.. > > How about just: > > SELECT V.VendorID, V.VendorName, COUNT(IV.ItemID) > FROM Vendor V > INNER JOIN Item_Vendor IV ON IV.VendorID = V.VendorID AND > IV.ItemID IN (1, > 2, 3, 4, 5) > GROUP BY V.VendorID, V.VendorName > HAVING COUNT(IV.ItemID) = 5 > ---(end of broadcast)--- TIP 2: you can get off all lists at once with the unregister command (send "unregister YourEmailAddressHere" to [EMAIL PROTECTED])
Re: [SQL] Trying to make efficient "all vendors who can provide all items"
Of all the proposed solutions, this appears to run the fastest, and not require the creation of an additional table. Thanks! Terry Fielder Manager Software Development and Deployment Great Gulf Homes / Ashton Woods Homes [EMAIL PROTECTED] Fax: (416) 441-9085 > -Original Message- > From: [EMAIL PROTECTED] > [mailto:[EMAIL PROTECTED] Behalf Of Matt Chatterley > Sent: Monday, March 08, 2004 3:41 PM > To: [EMAIL PROTECTED] > Cc: [EMAIL PROTECTED] > Subject: Re: [SQL] Trying to make efficient "all vendors who > can provide > all items" > > > Hmm. My PGSQL knowledge is rusty, so this may be slightly > microsoftified.. > > How about just: > > SELECT V.VendorID, V.VendorName, COUNT(IV.ItemID) > FROM Vendor V > INNER JOIN Item_Vendor IV ON IV.VendorID = V.VendorID AND > IV.ItemID IN (1, > 2, 3, 4, 5) > GROUP BY V.VendorID, V.VendorName > HAVING COUNT(IV.ItemID) = 5 > ---(end of broadcast)--- TIP 6: Have you searched our list archives? http://archives.postgresql.org
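For the archive, here is the accepted approach rewritten against the table and column names from the original post (untested). COUNT(DISTINCT ...) keeps the test correct even if an (item_id, vendor_id) pair ever appeared twice, and the constant 5 must match the number of items in the IN list:

SELECT v.vendor_id, v.vendor_name
FROM vendors AS v
     JOIN item_vendors AS iv ON iv.vendor_id = v.vendor_id
WHERE iv.item_id IN ('item_1', 'item_2', 'item_3', 'item_4', 'item_5')
GROUP BY v.vendor_id, v.vendor_name
HAVING COUNT(DISTINCT iv.item_id) = 5;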
Re: [SQL] "=" operator vs. "IS"
Think about a join where you do something like: t1.f1 = t2.f1 If both columns were blank, would you want the join to succeed? Probably not, if you did, then you would potentially have a cartesian select. This is not a good explanation of NULL non-equality, but I thought it might be useful. NOTE: A related topic is OUTER JOIN's which is how the above join would properly be implemented Terry Fielder Manager Software Development and Deployment Great Gulf Homes / Ashton Woods Homes [EMAIL PROTECTED] Fax: (416) 441-9085 > -Original Message- > From: [EMAIL PROTECTED] > [mailto:[EMAIL PROTECTED] Behalf Of Michael A Nachbaur > Sent: Monday, June 28, 2004 6:28 PM > To: Stefan Weiss > Cc: [EMAIL PROTECTED] > Subject: Re: [SQL] "=" operator vs. "IS" > > > On June 28, 2004 03:02 pm, Stefan Weiss wrote: > > I'm just curious - why is it not possible to use the "=" operator to > > compare values with NULL? I suspect that the SQL standard specified > > it that way, but I can't see any ambiguity in an expression > like "AND > > foo.bar = NULL". Is it because NULL does not "equal" any > value, and the > > expression should be read as "foo.bar is unknown"? Or is > there something > > else I'm missing? > > As far as I have been able to tell, it is one of those quirks > about SQL that > you shouldn't bother trying to understand. It just IS. > > -- > Michael A. Nachbaur <[EMAIL PROTECTED]> > http://nachbaur.com/pgpkey.asc > > ---(end of > broadcast)--- > TIP 4: Don't 'kill -9' the postmaster > ---(end of broadcast)--- TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]
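A quick illustration of the rule (each statement can be run on its own):

SELECT NULL::int = NULL::int;  -- NULL (unknown), not true: two unknowns are not known to be equal
SELECT NULL::int IS NULL;      -- true
SELECT 1 = NULL::int;          -- NULL, which is why "WHERE foo.bar = NULL" never matches a row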
Re: [SQL] next integer in serial key
Actually it does work, call nextval to get your next value, then call your INSERT statement, explicitly giving said value for the serial column. Then you can proceed with using said value in the INSERT statement of the related inserts with foreign keys to it. Alternatively, you can do: INSERT (accepting the default) then SELECT currval(the_sequence_object); then NOTE: 2nd method assumes that nobody else called nextval() on the sequence between when you did the insert and when you did the select currval(). Note that being inside a transaction is NOT sufficient, you need an explicit lock on the sequence. I do not recommend the 2nd method, too much can go wrong. Terry Fielder Manager Software Development and Deployment Great Gulf Homes / Ashton Woods Homes [EMAIL PROTECTED] Fax: (416) 441-9085 > -Original Message- > From: Kenneth Gonsalves [mailto:[EMAIL PROTECTED] > Sent: Thursday, July 22, 2004 12:13 AM > To: [EMAIL PROTECTED] > Subject: Re: [SQL] next integer in serial key > > > On Thursday 22 July 2004 10:25 am, you wrote: > > The same way the default value is defined, which you can > find by doing: > > \d tablename > > > > Which usually gives something like: > > Table > "public.gbs_floorplans" > > Column| Type | > > > Modifiers > > > > > --+---+--- > - > >- - > > floorplan_id | integer | not null default > > nextval('public.gbs_floorplans_floorplan_id_seq'::text) > > division_id | character(3) | not null > > floorplan_display_id | character(10) | not null > > > > Hence > > SELECT nextval('public.gbs_floorplans_floorplan_id_seq'::text) > > nope. what happens is that i enter data into a table with a > serial type id, > and then use that id as a foreign key to enter data into > another table. so i > need to know the id for the second entry. i commit after both entries > succeed. If i use nextval to find the id, this increments the > id, which will > defeat the purpose. > -- > regards > kg > > http://www.onlineindianhotels.net - hotel bookings > reservations in over 4600 > hotels in India > http://www.ootygolfclub.org > ---(end of broadcast)--- TIP 2: you can get off all lists at once with the unregister command (send "unregister YourEmailAddressHere" to [EMAIL PROTECTED])
Re: [SQL] next integer in serial key
> > Alternatively, you can do: > > INSERT (accepting the default) > > then SELECT currval(the_sequence_object); > > then > > > > NOTE: 2nd method assumes that nobody else called nextval() on the > > sequence between when you did the > > insert and when you did the select currval(). Note that > being inside > > a transaction is NOT > > sufficient, you need an explicit lock on the sequence. I do not > > recommend the 2nd method, too much > > can go wrong. > > This last paragraph is wrong and irrelevant. It is a point which for > some reason is continually being misunderstood. > > currval() *always* returns the last value generated for the > sequence in > the *current session*. It is specifically designed to do what you are > suggesting without any conflict with other sessions. There is *never* > any risk of getting a value that nextval() returned to some > other user's > session. That statement depends on different factors. If you for example have an application server, and the database connection is shared across multiple application server clients (or the query results get cached by your application server, Ugh!), the statement IS valid: I encountered this issue 2 years ago with coldfusion 4.5 using the unixODBC driver against Postgres 7.1.1 So without knowing his architecture, I needed to state that caveat, albeit rare. Even with knowing the architecture, the point still holds that you need to call currval() before another insert (or any call to nextval) is made. That probably should have been clearer, sorry. ---(end of broadcast)--- TIP 5: Have you checked our extensive FAQ? http://www.postgresql.org/docs/faqs/FAQ.html
Re: [SQL] next integer in serial key
That usually works. But if you can have 2 records in that table that are identical except the serial column, your query to get the id will return 2 results. Its also inefficient, if that query is costly (whether or not it can return 2 results). That's why I do: SELECT nextval(my_tables_sequence) AS next_id; INSERT INTO mytable (serial_column, data_columns...) VALUES (next_id, data_columns...) INSERT INTO related_table (fkey_column, other_columns...) VALUES (next_id, other_columns...) You can even do ALL that inside a transaction which guarantees that either: 1) ALL of the inserts are done OR 2) NONE of the inserts are done (Note it doesn't roll back the sequence, that id on rollback would become unused) Terry Fielder Manager Software Development and Deployment Great Gulf Homes / Ashton Woods Homes [EMAIL PROTECTED] Fax: (416) 441-9085 > -Original Message- > From: Kenneth Gonsalves [mailto:[EMAIL PROTECTED] > Sent: Thursday, July 22, 2004 7:52 AM > To: Oliver Elphick; [EMAIL PROTECTED] > Cc: Postgresql Sql Group (E-mail) > Subject: Re: [SQL] next integer in serial key > > > On Thursday 22 July 2004 05:45 pm, Oliver Elphick wrote: > > On Thu, 2004-07-22 at 12:48, [EMAIL PROTECTED] wrote: > > > Actually it does work, call nextval to get your next > value, then call > > > your INSERT statement, > > > explicitly giving said value for the serial column. Then you can > > > proceed with using said value in > > > the INSERT statement of the related inserts with foreign > keys to it. > > > > > > Alternatively, you can do: > > > INSERT (accepting the default) > > > then SELECT currval(the_sequence_object); > > > then > > > > > > NOTE: 2nd method assumes that nobody else called nextval() on the > > > sequence between when you did the > > > insert and when you did the select currval(). Note that > being inside > > > a transaction is NOT > > > sufficient, you need an explicit lock on the sequence. I do not > > > recommend the 2nd method, too much > > > can go wrong. > > > > This last paragraph is wrong and irrelevant. It is a point > which for > > some reason is continually being misunderstood. > > > > currval() *always* returns the last value generated for the > sequence in > > the *current session*. It is specifically designed to do > what you are > > suggesting without any conflict with other sessions. There > is *never* > > any risk of getting a value that nextval() returned to some > other user's > > session. > > > > The downside is that it operates outside the transaction > and therefore > > cannot be rolled back. It is also necessary to run nextval() in the > > session (either explicitly or by letting a serial column take its > > default) before you can use currval() on the sequence. > > in short, the only safe way of doing this is to commit on > insert to the main > table and then query it to get the value to insert in the > other tables - and > if the subsequent inserts fail .. > -- > regards > kg > > http://www.onlineindianhotels.net - hotel bookings > reservations in over 4600 > hotels in India > http://www.ootygolfclub.org > ---(end of broadcast)--- TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]
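Put together, the nextval-first pattern looks roughly like this; the sequence name, table names and columns below are placeholders for whatever \d shows for the real tables, and the application substitutes the value that the first statement returned:

BEGIN;
SELECT nextval('mytable_id_seq');                         -- suppose this returns 42
INSERT INTO mytable       (id, description)  VALUES (42, 'parent row');
INSERT INTO related_table (mytable_id, note) VALUES (42, 'child row');
COMMIT;  -- either both inserts happen or neither does (the consumed sequence value is not rolled back)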
Re: [SQL] next integer in serial key
The same way the default value is defined, which you can find by doing: \d tablename Which usually gives something like: Table "public.gbs_floorplans" Column| Type |Modifiers --+---+- - floorplan_id | integer | not null default nextval('public.gbs_floorplans_floorplan_id_seq'::text) division_id | character(3) | not null floorplan_display_id | character(10) | not null Hence SELECT nextval('public.gbs_floorplans_floorplan_id_seq'::text) Terry Fielder Manager Software Development and Deployment Great Gulf Homes / Ashton Woods Homes [EMAIL PROTECTED] Fax: (416) 441-9085 > -Original Message- > From: [EMAIL PROTECTED] > [mailto:[EMAIL PROTECTED] Behalf Of Kenneth Gonsalves > Sent: Wednesday, July 21, 2004 10:46 PM > To: [EMAIL PROTECTED] > Subject: [SQL] next integer in serial key > > > hi, > how does one get the next number in a serial type row? > -- > regards > kg > > http://www.onlineindianhotels.net - hotel bookings > reservations in over 4600 > hotels in India > http://www.ootygolfclub.org > > ---(end of > broadcast)--- > TIP 7: don't forget to increase your free space map settings > ---(end of broadcast)--- TIP 9: the planner will ignore your desire to choose an index scan if your joining column's datatypes do not match
Re: [SQL] ORDER BY and NULLs
Use the coalesce() function. (coalesce returns the first non-null value in its list) Specifically ORDER BY coalesce("TO", 0), "FROM" If you have records in "TO" column whose values is LESS then 0, then you need to replace 0 with something that sorts BEFORE the first most value that your TO result can return. Terry Fielder Manager Software Development and Deployment Great Gulf Homes / Ashton Woods Homes [EMAIL PROTECTED] Fax: (416) 441-9085 > -Original Message- > From: [EMAIL PROTECTED] > [mailto:[EMAIL PROTECTED] Behalf Of T E Schmitz > Sent: Sunday, September 19, 2004 10:58 AM > To: [EMAIL PROTECTED] > Subject: [SQL] ORDER BY and NULLs > > > Hello, > > I am using PostgreSQL 7.4.2 and as I understand NULL values > always sort > last. > > However, I have a table from which select using two numerical > sort keys > "FROM" and "TO". "TO" might be NULL and I would like to display those > rows first (without sorting the column in descending order). > > Is there any way this can be achieved without inserting bogus values > into that column? > > -- > > > Regards/Gruß, > > Tarlika Elisabeth Schmitz > > ---(end of > broadcast)--- > TIP 4: Don't 'kill -9' the postmaster > ---(end of broadcast)--- TIP 5: Have you checked our extensive FAQ? http://www.postgresql.org/docs/faqs/FAQ.html
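An alternative that avoids choosing a sentinel value at all is to sort on the null-ness first; boolean false sorts before true, so rows whose "TO" is NULL come out first. Untested on 7.4.2, with an invented table name, and the double quotes are needed only because TO and FROM are keywords:

SELECT *
FROM my_table
ORDER BY ("TO" IS NOT NULL), "TO", "FROM";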
Re: [SQL] NULLS and string concatenation
There is an easy solution anyway, use coalesce to ensure you are never returning a null result for any components of the concat. e.g. select 'some text, blah:' || coalesce(NULL, '') equates to 'some text, blah:' || '' hence 'some text, blah:' Terry Fielder Manager Software Development and Deployment Great Gulf Homes / Ashton Woods Homes [EMAIL PROTECTED] Fax: (416) 441-9085 > -Original Message- > From: [EMAIL PROTECTED] > [mailto:[EMAIL PROTECTED] Behalf Of Stephan Szabo > Sent: Friday, November 19, 2004 2:04 PM > To: Don Drake > Cc: Richard Huxton; [EMAIL PROTECTED] > Subject: Re: [SQL] NULLS and string concatenation > > > > On Fri, 19 Nov 2004, Don Drake wrote: > > > On Fri, 19 Nov 2004 17:48:34 +, Richard Huxton > <[EMAIL PROTECTED]> wrote: > > > Don Drake wrote: > > > > select 'some text, should be null:'|| NULL > > > > > > > > This returns NULL and no other text. Why is that? I > wasn't expecting > > > > the "some text.." to disappear altogether. > > > > > > > > Is this a bug? > > > > > > No. Null is "unknown" if you append unknown (null) to a > piece of text, > > > the result is unknown (null) too. > > > > > > If you're using NULL to mean something other than > unknown, you probably > > > want to re-examine your reasons why. > > > > > > > I'm using NULL to mean no value. Logically, NULL is > unknown, I agree. > > > > I'm trying to dynamically create an INSERT statement in a function > > that sometimes receives NULL values. > > > > This is still strange to me. In Oracle, the same query would not > > replace the *entire* string with a NULL, it treats the NULL as a no > > value. > > Oracle has some incompatibilities with the SQL spec (at least > 92/99) wrt > NULLs and empty strings so it isn't a good comparison point. > The spec is > pretty clear that if either argument to concatenation is NULL > the output > is NULL. > > > I can't find in the documentation where string concatenation of any > > string and NULL is NULL. > > I'm not sure it does actually. I'd have expected to see some > general text > on how most operators return NULL for NULL input but a quick > scan didn't > find any. > > > ---(end of > broadcast)--- > TIP 9: the planner will ignore your desire to choose an index > scan if your > joining column's datatypes do not match > ---(end of broadcast)--- TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]
[SQL] SQL copy from csv with explicit field ordering
I guess this is really a SQL question: I have a csv that I want to import, but the csv has different column ordering. I have tried putting the column names in the first row, but then the copy command fails on the first field, which has an integer data type (i.e. it is seeing the cells in the first row as data, not header info). ggtest=> copy ap_trans from '/var/dbase/ap_trans.csv' using delimiters ','; ERROR: pg_atoi: error in "id": can't parse "id" The help indicates: ggtest=> \h copy Command: copy Description: copy data to and from a table Syntax: COPY [BINARY] class_name [WITH OIDS] TO|FROM filename|STDIN|STDOUT [USING DELIMITERS 'delim']; I have tried WITH OIDS but with the same results. Is there somewhere I can either enable the first line of the CSV as header names, OR explicitly define my import field ordering within the COPY command? Thanks Terry Fielder [EMAIL PROTECTED] ---(end of broadcast)--- TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]
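Two workarounds apply here, assuming the header line is first removed from the file (e.g. with tail, since COPY in these releases has no header option) and using invented column names and types purely for illustration. On 7.3 or later, COPY accepts a column list given in the file's order; on older servers, a scratch table shaped like the file plus an INSERT ... SELECT does the reordering:

-- 7.3 and later: name the target columns in the order they appear in the file
COPY ap_trans (vendor_name, amount, id) FROM '/var/dbase/ap_trans.csv' USING DELIMITERS ',';

-- older servers: load into a scratch table laid out like the file, then reorder
CREATE TABLE ap_trans_load (
    vendor_name text,      -- hypothetical file layout
    amount      numeric,
    id          integer
);
COPY ap_trans_load FROM '/var/dbase/ap_trans.csv' USING DELIMITERS ',';
INSERT INTO ap_trans (id, vendor_name, amount)
    SELECT id, vendor_name, amount FROM ap_trans_load;
DROP TABLE ap_trans_load;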
[SQL] Restricting a VIEW.
Hello all, I have a doubt. In the next example, I have a table with two columns: - DATE - MONEY And a VIEW which SUM's the money GROUPing by 'month/year' (I cut off the day)... Ok.. I would like to be able to SELECT * FROM VIEW.. but restricting by complete dates (dd/mm/yyyy)... (Last select in the example) I think it isn't possible, but I would like to know your opinion... Or if there is any workaround... Best regards.. --== DROP TABLE ty_test; CREATE TABLE ty_test (datein date NOT NULL, money numeric(6,2) NOT NULL, PRIMARY KEY (datein) ) WITHOUT OIDS; INSERT INTO ty_test VALUES ('2002/10/01',10); INSERT INTO ty_test VALUES ('2002/10/15',20); INSERT INTO ty_test VALUES ('2002/11/15',30); DROP VIEW vw_ty_test; CREATE VIEW vw_ty_test AS SELECT TO_CHAR(datein,'MM/YYYY') AS datein2, SUM(money) FROM ty_test GROUP BY datein2; SELECT * FROM ty_test; -- All rows from table. SELECT * FROM vw_ty_test; -- All rows from view. SELECT * FROM vw_ty_test WHERE datein BETWEEN '2002/10/01' AND '2002/10/09'; --== ---(end of broadcast)--- TIP 3: if posting/reading through Usenet, please send an appropriate subscribe-nomail command to [EMAIL PROTECTED] so that your message can get through to the mailing list cleanly
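One workaround, sketched against the ty_test table above: since the view has already collapsed the day information, apply the day-level restriction to the base table and aggregate in the same query instead of selecting from the view:

SELECT TO_CHAR(datein, 'MM/YYYY') AS datein2, SUM(money)
FROM ty_test
WHERE datein BETWEEN '2002/10/01' AND '2002/10/09'
GROUP BY datein2;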
[SQL] Comparing Numeric and Double Precision (float8)..
Hello all, I DON'T know what the proper forum is for this question, but I have to insist on this "feature". Sorry. I have a lot of tables from Oracle 8i databases with a lot of columns defined as numeric(x,0). Ok.. I am translating my Oracle tables to PostgreSQL tables. But I am having a serious problem with my client applications. When I compare a numeric(x,0) field with a float8 field I get an error on PostgreSQL that I didn't get with Oracle. I mean: CREATE test (one numeric(2,0)); SELECT * FROM test WHERE one = 1.0; This runs fine on my Oracle systems.. but I have problems with my PostgreSQL system. I have tried to create an operator to work around this inconvenience: numeric '=' float8, with the CREATE OPERATOR command and a call to a function returning a boolean. Ok.. great.. It is running now. But when it runs.. I have other problems comparing numeric with integers and so on. So I must DROP OPERATOR.. I don't understand what the problem is and what options I have to work around it (without rewriting a lot of client applications). I have a lot of code I don't want to modify. The question is: why can we not compare numeric with double precision? And why can Oracle or SQL do it without problems? Thanks a lot. ---(end of broadcast)--- TIP 6: Have you searched our list archives? http://archives.postgresql.org
Re: [SQL] Comparing Numeric and Double Precision (float8)..
Great I don't know if my customers can wait until 7.3 official release, but I'll try to distract them a bit... :-\ Thanks a lot Bruno... Bruno Wolff III wrote: > > On Mon, Nov 04, 2002 at 09:11:30 +0100, > Terry Yapt <[EMAIL PROTECTED]> wrote: > > > > When I compare a numeric(x,0) field with a float8 field I have an error > > on PostgreSQL what I didn't have with Oracle. I mean: > > > > CREATE test (one numeric(2,0)); > > > > SELECT * FROM test WHERE one = 1.0; > > With 7.3b3 the above works after correcting the create statement. > bruno=> create table test (one numeric(2,0)); > CREATE TABLE > bruno=> SELECT * FROM test WHERE one = 1.0; > one > - > (0 rows) ---(end of broadcast)--- TIP 2: you can get off all lists at once with the unregister command (send "unregister YourEmailAddressHere" to [EMAIL PROTECTED])
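For releases where the comparison still errors out, an explicit cast is a workaround that avoids the custom operator (a sketch against the test table from this thread):

-- Cast the literal (or the float8 side) to numeric so both operands share a type.
SELECT * FROM test WHERE one = 1.0::numeric;
-- Comparing a numeric column against a float8 column works the same way:
--   ... WHERE numeric_col = float8_col::numeric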
[SQL] Parameterized Views on 7.3
Hello all, Are parameterized views already fully functional in 7.3 or I must use fuctions returning rows set ? I have been looking for that in developer documentacion /7.3.1) and in the google groups but I haven't had so many luck. Thanks. ---(end of broadcast)--- TIP 2: you can get off all lists at once with the unregister command (send "unregister YourEmailAddressHere" to [EMAIL PROTECTED])
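For reference, the set-returning-function alternative mentioned in the question looks roughly like this in 7.3 (table and column names here are hypothetical):

CREATE TABLE orders (customer_id integer, total numeric);

CREATE FUNCTION orders_for(integer) RETURNS SETOF orders AS '
    SELECT * FROM orders WHERE customer_id = $1;
' LANGUAGE sql;

-- Used in the FROM clause much like a view that takes a parameter:
SELECT * FROM orders_for(42);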
[SQL] Formatting Functions and Group By
Hi, I'm porting a bunch of queries from MySQL to Postgres 7.4 and am having a problem with one particular area. For example, a query like this works in MySQL: select to_char(myCol,'Mon YY') from myTable group by to_char(myCol,'MM ') order by to_char(myCol,'MM ') Postgres will give me an error saying that "to_char(myCol,'Mon YY')" must be in the 'group by'. I understand why that normally needs to be the case, but in theory, it would be ok here because they are actually the same values (in the select and group by) but just formatted differently. I just want the query to be grouped and ordered by month and year, but to be formatted differently in the output. Any ideas? Thanks!! __ Do you Yahoo!? Yahoo! Small Business $15K Web Design Giveaway http://promotions.yahoo.com/design_giveaway/ ---(end of broadcast)--- TIP 7: don't forget to increase your free space map settings
Re: [SQL] Formatting Functions and Group By
Thank you both for your responses. That's just what I needed and thanks for catching my mistake Tom. And may I say that I am VERY happy to be moving to Postgres. The lack of a native Win32 version was thing only thing holding us back from Postgres previously. I think this is the only kind of query I will have had difficulty porting. I'm looking forward to ditching MySQL and never looking back!! :) --- Tom Lane <[EMAIL PROTECTED]> wrote: > > Ah, good ol' MySQL :-( ... let the user do what he wants whether the > result is well defined or not ... > > I'd suggest doing the grouping/ordering numerically rather than > textually. For instance, > > select > to_char(date_trunc('month', myCol), 'Mon YY') > from > myTable > group by > date_trunc('month', myCol) > order by > date_trunc('month', myCol); > > Now this assumes you really want a time-based ordering, which the quoted > example doesn't give --- you've got month sorting to the left of year, > is that really what you want? If it is then you'd need to go > > group by > date_trunc('month', myCol) > order by > to_char(date_trunc('month', myCol), 'MM ') > > regards, tom lane __ Do you Yahoo!? Yahoo! Small Business $15K Web Design Giveaway http://promotions.yahoo.com/design_giveaway/ ---(end of broadcast)--- TIP 6: Have you searched our list archives? http://archives.postgresql.org
Re: [SQL] query
now() returns a timestamp. Cast it to a date and then you can subtract days. e.g. select now()::date -1 Terry Chandan_Kumaraiah wrote: Hi, In oracle we write sysdate-1 For example,we write a query (select * from table1 where created_date>=sysdate-1).Whats its equivalent in postgre? Chandan -- Terry Fielder [EMAIL PROTECTED] Associate Director Software Development and Deployment Great Gulf Homes / Ashton Woods Homes Fax: (416) 441-9085
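Put into the shape of the original Oracle query (table and column names follow the question):

-- now()::date - 1 is yesterday's date, so this keeps rows created since yesterday.
SELECT * FROM table1 WHERE created_date >= now()::date - 1;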
Re: [SQL] NULL in IN clause
Havasvölgyi Ottó wrote: Hi, I have just run this command on 8.0.4 : SELECT 'foo' WHERE 0 NOT IN (NULL, 1); 0 <> NULL is unknown (nothing is equal to NULL and nothing is unequal to NULL; any comparison with NULL is simply unknown), while 0 <> 1 is true. Therefore the expression 0 NOT IN (NULL, 1) evaluates to unknown, never to true, and a WHERE clause only keeps rows whose condition is true. Therefore no rows returned. Ever. Terry And it resulted in zero rows. Without NULL it is OK. Is this a bug, or does the standard have such a rule? Best Regards, Otto ---(end of broadcast)--- TIP 4: Have you searched our list archives? http://archives.postgresql.org -- Terry Fielder [EMAIL PROTECTED] Associate Director Software Development and Deployment Great Gulf Homes / Ashton Woods Homes Fax: (416) 441-9085 ---(end of broadcast)--- TIP 5: don't forget to increase your free space map settings
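The behaviour is easy to see by selecting the expressions on their own (a small sketch; results shown as comments):

SELECT 0 NOT IN (NULL, 1);  -- NULL (unknown), because 0 <> NULL is unknown
SELECT 0 NOT IN (2, 1);     -- true
-- WHERE only keeps rows whose condition is true, so an unknown result
-- drops the row just as false would.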
Re: [SQL] casting character varying to integer - order by numeric
Check out the function to_number() In particular here's an example... If a field named section is text containing numbers: ORDER BY to_number(t.section, text()) If the field can also contain non-numerals such as 3a, 3b, and you want 3a to show first then do this: ORDER BY to_number(t.section, text()), t.section And if the field section can actually START with an alpha, then to prevent to_number from failing do this: to_number(textcat('0', t.section), text()), t.section Terry Bryce W Nesbitt wrote: How can I force a character field to sort as a numeric field? I've got something like this: Postgres=> SELECT username,last_name FROM eg_member ORDER BY username; --+--- 0120 | Foley 1| Sullivan 10 | Guest 11 | User (5 rows) (I can't change the field type). I tried: SELECT username,last_name FROM eg_member ORDER BY username::integer; But postgres 7 rejects this with "ERROR: cannot cast type character varying to integer". Is there a way to force numeric sort order? I tried a variety of functions, such as to_char() and convert() without any luck. Thanks for your insight! ---(end of broadcast)--- TIP 2: Don't 'kill -9' the postmaster -- Terry Fielder [EMAIL PROTECTED] Associate Director Software Development and Deployment Great Gulf Homes / Ashton Woods Homes Fax: (416) 441-9085 ---(end of broadcast)--- TIP 5: don't forget to increase your free space map settings
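A concrete sketch of the to_number() call against the eg_member table from the question (the '999999999' format mask is an assumption, sized generously for the data):

-- Sorts '1', '10', '11', '0120' by their numeric value rather than as text.
SELECT username, last_name
FROM eg_member
ORDER BY to_number(username, '999999999');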
Re: [SQL] migrating numeric to serial from MSSQL to postgres
I believe: IDENTITY(1, 1) in M$SQL means an auto-incrementing column (seed 1, increment 1), much like PostgreSQL's serial/bigserial; it does not by itself make the column a primary key. numeric(18,0) means a numeric field with zero decimal places, i.e. an integer of up to 18 digits. bigint holds up to 19 digits, so bigint (or bigserial, to pick up the auto-increment as well) is big enough; keeping numeric(18,0) with an explicit sequence in postgres also works if you want the exact declared type. There may be a way to get MSSQL to produce a SQL-compliant dump, which would make a migration to postgres much easier if your schema is large. Without a SQL-compliant dump, you have a lot of cleaning up/tweaking of the dump to make it readable by Postgres (but that's what I have done the few times in the past I have had to do that, fortunately not for many statements :) Terry Fielder [EMAIL PROTECTED] Associate Director Software Development and Deployment Great Gulf Homes / Ashton Woods Homes Fax: (416) 441-9085 Kenneth Gonsalves wrote: hi, am migrating a database from MSSQL to postgres. How would i migrate this: [Id] [numeric](18, 0) IDENTITY (1, 1) -- regards kg http://lawgon.livejournal.com http://nrcfosshelpline.in/web/ ---(end of broadcast)--- TIP 9: In versions below 8.0, the planner will ignore your desire to choose an index scan if your joining column's datatypes do not match ---(end of broadcast)--- TIP 4: Have you searched our list archives? http://archives.postgresql.org
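A sketch of that translation (the table name is hypothetical, and this assumes the column is only ever filled by the identity mechanism):

-- MSSQL: [Id] numeric(18,0) IDENTITY(1,1)
CREATE TABLE mytable (
    id bigserial  -- add PRIMARY KEY here if the MSSQL column is the table's key
    -- ... remaining columns ...
);
-- Keeping the declared type instead is also possible:
--   id numeric(18,0) NOT NULL DEFAULT nextval('some_sequence')
-- with a CREATE SEQUENCE some_sequence; issued separately.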
[SQL] Which function transform x,y to latitude/longitude?
Hi, For example x= 38356.62 y= 42365.19.how to transform it to latitude 1.399948, longitude 193.92644? Which function I could use? I don’t know the algorithm. Initial data is: ("+proj=cass +a=6378137.0 +rf=298.257223563 +lat_0=1.287639n +lon_0=103.8516e +x_0=3 +y_0=3"). Thanks a lot. bill _ 与联机的朋友进行交流,请使用 Live Messenger; http://get.live.com/messenger/overview ---(end of broadcast)--- TIP 1: if posting/reading through Usenet, please send an appropriate subscribe-nomail command to [EMAIL PROTECTED] so that your message can get through to the mailing list cleanly
Re: [SQL] Which function transform x,y to latitude/longitude?
I just know the correct data must be longitude 103.926669,latitude0.111827. x,y from Cassini system. Could you give me the source code how you calculate.Thanks a lot! From: Michael Fuhr <[EMAIL PROTECTED]> To: Nemo Terry <[EMAIL PROTECTED]> CC: pgsql-sql@postgresql.org Subject: Re: [SQL] Which function transform x,y to latitude/longitude? Date: Wed, 25 Apr 2007 08:16:56 -0600 On Wed, Apr 25, 2007 at 05:02:02PM +0800, Nemo Terry wrote: > For example x= 38356.62 y= 42365.19.how to transform it to latitude > 1.399948, longitude 193.92644? Do you mean longitude 103.92644? In what datum are the lat/lon coordinates? Where did you get the transformation in your example? > Which function I could use? I don���t know the algorithm. > > Initial data is: ("+proj=cass +a=6378137.0 +rf=298.257223563 > +lat_0=1.287639n +lon_0=103.8516e +x_0=3 +y_0=3"). What's the source of these parameters? You can perform transformations outside the database with PROJ.4 or inside the database with the PostGIS transform() function (which uses PROJ.4). http://proj.maptools.org/ http://postgis.refractions.net/ I don't see any exact matches in the PostGIS spatial_ref_sys table for the parameters you posted you so if you use PostGIS then you might have to insert a row to create your own spatial reference system. However, I did a few tests with your parameters and various datums for the lat/lon and couldn't get the exact transformed values in your example. You might get more help on the PROJ.4 and PostGIS mailing lists. -- Michael Fuhr ---(end of broadcast)--- TIP 3: Have you checked our extensive FAQ? http://www.postgresql.org/docs/faq _ 与世界各地的朋友进行交流,免费下载 Live Messenger; http://get.live.com/messenger/overview ---(end of broadcast)--- TIP 1: if posting/reading through Usenet, please send an appropriate subscribe-nomail command to [EMAIL PROTECTED] so that your message can get through to the mailing list cleanly
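For reference, a rough sketch of the PostGIS route described above. The SRID 990001 is an arbitrary placeholder for a custom spatial_ref_sys entry built from the proj4 string in the question (copied verbatim), and function names vary a little between PostGIS releases, so treat this as a starting point only:

-- Register the custom Cassini definition under an unused SRID.
INSERT INTO spatial_ref_sys (srid, auth_name, auth_srid, srtext, proj4text)
VALUES (990001, NULL, NULL, NULL,
  '+proj=cass +a=6378137.0 +rf=298.257223563 +lat_0=1.287639n +lon_0=103.8516e +x_0=3 +y_0=3');

-- Transform the projected point to WGS84 (SRID 4326) and read lon/lat.
SELECT X(p) AS longitude, Y(p) AS latitude
FROM (SELECT transform(GeomFromText('POINT(38356.62 42365.19)', 990001), 4326) AS p) AS s;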
[SQL] How to use function PointN?
select PointN(envelope(polyline),1) from highway; return null,why? _ 享用世界上最大的电子邮件系统― MSN Hotmail。 http://www.hotmail.com ---(end of broadcast)--- TIP 1: if posting/reading through Usenet, please send an appropriate subscribe-nomail command to [EMAIL PROTECTED] so that your message can get through to the mailing list cleanly
[SQL] How to process inverted comma in "EXECUTE 'insert into xxx values(...)';"?
Look at this problem: when I execute 'insert into lse_installations values(' || ''''||obj_id||'''' || ',' || ''''||div||'''' || ',' || ''''||sub||'''' || ',' || ''''||obj_type||'''' || ',' || ''''||obj_name||'''' || ',' || ''''||pstcd||'''' || ',' || ''''||rdcd||'''' || ',' || ''''||blkno||'''' || ',' || ''''||vldunt||'''' || ','|| cenlat || ',' || cenlon || ')'; because obj_name from another table has a value like this: S'pore High Polymer, the following error is raised: ERROR: syntax error at or near "pore" SQL state: 42601 Context: PL/pgSQL function "lse_installations" line 64 at execute statement So how do I handle the single quote (inverted comma) in a char variable? It makes me desperate. _ Chat with friends around the world, free download Live Messenger; http://get.live.com/messenger/overview ---(end of broadcast)--- TIP 1: if posting/reading through Usenet, please send an appropriate subscribe-nomail command to [EMAIL PROTECTED] so that your message can get through to the mailing list cleanly
Re: [SQL] How to process inverted comma in "EXECUTE 'insert into xxx values(...)
But I must use it in function,so... Do you have another solution? From: "Rodrigo De Le�n" <[EMAIL PROTECTED]> To: pgsql-sql@postgresql.org CC: "Nemo Terry" <[EMAIL PROTECTED]> Subject: Re: [SQL] How to process inverted comma in "EXECUTE 'insert into xxx values(...)';"? Date: Tue, 15 May 2007 01:25:25 -0500 On 5/14/07, Nemo Terry <[EMAIL PROTECTED]> wrote: Look at this problem: when execute 'insert into lse_installations values(' || ''''||obj_id||'''' || ',' || ''''||div||'''' || ',' || ''''||sub||'''' || ',' || ''''||obj_type||'''' || ',' || ''''||obj_name||'''' || ',' || ''''||pstcd||'''' || ',' || ''''||rdcd||'''' || ',' || ''''||blkno||'''' || ',' || ''''||vldunt||'''' || ','|| cenlat || ',' || cenlon || ')'; because obj_name from another table has value like this:S'pore High Polymer. Following error raises: ERROR: syntax error at or near "pore" SQL state: 42601 Context: PL/pgSQL function "lse_installations" line 64 at execute statement So how to process the single inverted comma in char variable?It makes me so desperate. Why are you EXECUTEing the INSERT command? It's directly supported in plpgsql, since it is a superset of SQL. That is, you can do: INSERT INTO lse_installations VALUES (obj_id, div, sub, obj_type, obj_name, pstcd, rdcd, blkno, vldunt , cenlat, cenlon); Good luck. ---(end of broadcast)--- TIP 6: explain analyze is your friend _ 与联机的朋友进行交流,请使用 Live Messenger; http://get.live.com/messenger/overview ---(end of broadcast)--- TIP 7: You can help support the PostgreSQL project by donating at http://www.postgresql.org/about/donate
Re: [SQL] How to process inverted comma in "EXECUTE 'insert into xxx values(...)
quote_literal() works.Thanks a lot! From: Richard Huxton <[EMAIL PROTECTED]> To: Nemo Terry <[EMAIL PROTECTED]> CC: pgsql-sql@postgresql.org Subject: Re: [SQL] How to process inverted comma in "EXECUTE 'insert into xxx values(...) Date: Tue, 15 May 2007 08:12:55 +0100 Nemo Terry wrote: > But I must use it in function,so... > Do you have another solution? >>> because obj_name from another table has value like this:S'pore High >>> Polymer. >>> Following error raises: >>> ERROR: syntax error at or near "pore" You'll want to look into the quote_ident() and quote_literal() functions when constructing queries like this. See functions and operators / string functions for details. -- Richard Huxton Archonet Ltd ---(end of broadcast)--- TIP 6: explain analyze is your friend _ 享用世界上最大的电子邮件系统― MSN Hotmail。 http://www.hotmail.com ---(end of broadcast)--- TIP 7: You can help support the PostgreSQL project by donating at http://www.postgresql.org/about/donate
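A sketch of what the quoted-up EXECUTE string ends up looking like, trimmed to a few of the columns from the original function (written for a dollar-quoted function body; inside an old-style single-quoted body the inner quotes would need doubling again):

EXECUTE 'insert into lse_installations values('
        || quote_literal(obj_id)   || ','
        || quote_literal(obj_name) || ','
        || cenlat || ',' || cenlon || ')';
-- quote_literal() wraps each value in quotes and doubles any embedded quote,
-- so a value like S'pore High Polymer no longer breaks the generated SQL.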
Re: [SQL] Getting pk of the most recent row, in a group by
Do you have a table of coupon types? Terry Terry Fielder [EMAIL PROTECTED] Associate Director Software Development and Deployment Great Gulf Homes / Ashton Woods Homes Fax: (416) 441-9085 Bryce Nesbitt wrote: I've got a table of "coupons" which have an expiration date. For each type of coupon, I'd like to get the primary key of the coupon which will expire first. # create table coupon ( coupon_id serial primary key, type varchar(255), expires date ); insert into coupon values(DEFAULT,'free','2007-01-01'); insert into coupon values(DEFAULT,'free','2007-01-01'); insert into coupon values(DEFAULT,'free','2007-06-01'); insert into coupon values(DEFAULT,'free','2007-06-01'); insert into coupon values(DEFAULT,'50%','2008-06-01'); insert into coupon values(DEFAULT,'50%','2008-06-02'); insert into coupon values(DEFAULT,'50%','2008-06-03'); The desired query would look like: # select coupon_id,type,expires from coupon where type='free' order by expires limit 1; coupon_id | type | expires ---+--+ 1 | free | 2007-01-01 But be grouped by type: # select type,min(expires),count(*) from coupon group by type; type |min | count --++--- free | 2007-01-01 | 4; pk=1 50% | 2008-06-01 | 3; pk=5 In the second example, is it possible to get the primary key of the row with the minimum expires time? ---(end of broadcast)--- TIP 1: if posting/reading through Usenet, please send an appropriate subscribe-nomail command to [EMAIL PROTECTED] so that your message can get through to the mailing list cleanly
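One PostgreSQL-specific answer, using the coupon table defined in the question:

-- DISTINCT ON (type) keeps the first row per type; the ORDER BY makes that
-- the row with the earliest expiry, so coupon_id is the wanted primary key.
SELECT DISTINCT ON (type) coupon_id, type, expires
FROM coupon
ORDER BY type, expires;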
Re: [SQL] Authentification failed
Connecting as root may be ill advised (doing stuff as root is a bad idea unless one HAS to). All that notwithstanding, you need to setup the correct permissions to allow the connections you want in pg_hba.conf, usually is /var/lib/pgsql/data/pg_hba.conf Terry Terry Fielder [EMAIL PROTECTED] Associate Director Software Development and Deployment Great Gulf Homes / Ashton Woods Homes Fax: (416) 441-9085 Bryce Nesbitt wrote: Judith wrote: Hello everybody!! I'm trying in SUSE to connect to a postgres db and this is the error: Ident Authentification failed for user <> I'm already created the user with createuser root, but the error persist, I would aprecciate some help, thanks in advanced ...or just cheat and get in as the postgres root user: psql -U postgres Which on SUSE has no password.
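For reference, a sketch of the kind of pg_hba.conf entries involved (the addresses and auth methods here are examples only; the server must be told to reload after editing):

# TYPE  DATABASE  USER  CIDR-ADDRESS   METHOD
local   all       all                  ident sameuser
host    all       all   127.0.0.1/32   md5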
Re: [SQL] outer join issues
Tom Hart wrote: Let me preface this by saying hello SQL list, and I'm an idiot. My SQL knowledge is advanced to the point of being able to use a WHERE clause basically, so I appreciate your business. Now on to my issue I have 3 tables I'm trying to use in this query: loan, share and draft (for those of you not familiar with credit unions, share and draft are savings and checking accounts). What I'm trying to do is get a list of all loans that were charged off (ln_chgoff_dt > 0), and any share and draft accounts that have the same account number. My query looks something like this SELECT ln_acct_num, ln_num, ln_chrgoff_dt, ln_chrgoff_amt, sh_balance, sh_stat_cd, df_balance, df_stat_cd FROM loan LEFT OUTER JOIN share ON loan.ln_acct_num = share.sh_acct_num LEFT OUTER JOIN draft ON loan.ln_acct_num = draft.df_acct_num WHERE ln_chrgoff_dt > 0 AND loan.dataset = 0 AND share.dataset = 0 AND draft.dataset = 0 ; try AND (share.dateset = 0 OR share.dataset IS NULL) AND (draft.dataset = 0 OR draft.dataset IS NULL) because when the left join is utilized, the dateset field will be a null, which is not =0 and hence would fail the AND clause in your version Terry Now the query SELECT * FROM loan WHERE ln_chrgoff_dt > 0 AND loan.dataset = 0 returns 139 rows. Shouldn't the first query return at least that many? My understanding is that a LEFT OUTER JOIN will not drop any records that are only found in the first table, regardless of whether they match records on the second or third table. I end up with 14 results with the first query. I know I'm doing something wrong, but I'm not sure what. Anybody have a helpful kick in the right direction for me? Thanks in advance. ---(end of broadcast)--- TIP 1: if posting/reading through Usenet, please send an appropriate subscribe-nomail command to [EMAIL PROTECTED] so that your message can get through to the mailing list cleanly
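An equivalent fix is to keep the dataset tests with their joins instead of in WHERE, so they can never discard the NULL-extended rows (a sketch of the same query):

SELECT ln_acct_num, ln_num, ln_chrgoff_dt, ln_chrgoff_amt,
       sh_balance, sh_stat_cd, df_balance, df_stat_cd
FROM loan
LEFT OUTER JOIN share ON loan.ln_acct_num = share.sh_acct_num AND share.dataset = 0
LEFT OUTER JOIN draft ON loan.ln_acct_num = draft.df_acct_num AND draft.dataset = 0
WHERE ln_chrgoff_dt > 0
  AND loan.dataset = 0;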
Re: [SQL] I need some magical advice
The trick is to do a "GROUP BY" on your identifier (name) and then use a HAVING clause to see if the count is more then 1. NOTE: You likely need a query that does subqueries that use group by considering you want to ignore SOME of the records (ie one per group if that group does not have a status 1 record) but not others (update all in the group if the group has a status 1 record). Hopefully that's enough of a hint, but if not when I get a moment I can spell it out in more detail. NOTE: I recommend running a SELECT first, rather then an UPDATE, so you can see what WOULD be updated and verify your query is going to do what you want before you clobber data. (or use a transaction, but if its a live database you don't want a transaction around locking users out) Terry Terry Fielder te...@greatgulfhomes.com Associate Director Software Development and Deployment Great Gulf Homes / Ashton Woods Homes Fax: (416) 441-9085 Andreas wrote: Hi, I'd like to update some records in a table. Those have a status_id and among other columns a varchar with a name and a create_date. The status_id is 0 if nothing was done with this record, yet. For some reasons I've got double entries which I now want to flag to -1 so that they can be sorted out without actually deleting them since there are other tables referencing them. From every group that shares the same name all should get status_id set to -1 where status_id = 0. The tricky bit is: How could I provide, that 1 of every group survives, even then when all have status_id = 0? Sometimes 2 of a group are touched so both have to stay. e.g. c_date, status_id, name 2008/01/01, 0, A --> -1 2008/01/02, 1, A --> do nothing 2008/01/03, 0, A --> -1 2008/01/01, 0, B --> do nothing (single entry) 2008/01/01, 0, C --> do nothing (oldest 0 survives) 2008/01/02, 0, C --> -1 2008/01/01, 1, D --> do nothing 2008/01/02, 1, D --> do nothing -- Sent via pgsql-sql mailing list (pgsql-sql@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-sql
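A sketch of the pieces Terry describes, using a hypothetical table name (entries) and the columns from the example, and run as a SELECT first as suggested:

-- Names that occur more than once:
SELECT name FROM entries GROUP BY name HAVING count(*) > 1;

-- Preview of the rows that would be flagged to -1: status 0 rows whose group
-- either already contains a status 1 row, or has an older status 0 row to keep.
SELECT e.*
FROM entries AS e
WHERE e.status_id = 0
  AND (EXISTS (SELECT 1 FROM entries AS x
               WHERE x.name = e.name AND x.status_id = 1)
       OR e.c_date > (SELECT min(c_date) FROM entries AS m
                      WHERE m.name = e.name AND m.status_id = 0));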
[SQL] Novice SQL Question
I need the following query: select distinct event_code, level from logs join stat on (stat.prime is not null) where order_num = 130680; Ok, no problem. Does exactly what I want; however, I need to sort this in a particular way to get the right results. When I try to add the order by clause, I get an error. Here is the ORDER BY: ORDER BY event_date DESC, event_time DESC, event_secs DESC If I put this where I thought it should go, as in: select distinct event_code,level from logs join stat on (stat.prime is not null) where order_num = 130680 order by event_date,event_time,event_secs; I get the following error: ERROR: For SELECT DISTINCT, ORDER BY expressions must appear in target list No matter where I put it I get errors. Obviously, I'm a novice. Can somebody give me any advice? I'm using Postgres 7.2.3 on RedHat. Thanks... -- Work: 1-336-372-6812 Cell: 1-336-363-4719 email: [EMAIL PROTECTED] ---(end of broadcast)--- TIP 7: don't forget to increase your free space map settings
Re: [SQL] Novice SQL Question
To answer my own question: I discoverd that the order by fields had to be in the select list. Apparently, this is a requirement when using "DISTINCT". On Monday 02 February 2004 05:38 pm, Terry Lee Tucker wrote: > I need to the following query: > select distinct event_code, level from logs join stat on (stat.prime is not > null) where order_num = 130680; > > Ok, no problem. Does exactly what I want; however, I need to sort this is a > particular way to get the right results. When I try to add the order by > clause, I get an error. Here is the ORDER BY: > ORDER BY event_date DESC, event_time DESC, event_secs DESC > > If I put this where I thought it should go as in: > select distinct event_code,level from logs join stat on (stat.prime is not > null) where order_num = 130680 order by event_date,event_time,event_secs; > > I get the following error: > ERROR: For SELECT DISTINCT, ORDER BY expressions must appear in target > list > > No matter where I put it I get errors. Obviously, I'm a novice. Can > somebody give me any advice? I'm using Postgres 7.2.3 on RedHat. > > Thanks... -- Quote: 48 "Exceeding the bounds of authority is no more a right in a great than in a petty officer, no more justifiable in a king than in a constable; but is so much the worse in him, in that he has more trust put in him, has already a much greater share than the rest of his brethren, and is supposed from the advantages of his education, employment, and coun- sellors, to be more knowing in the measures of right and wrong." --John Locke Work: 1-336-372-6812 Cell: 1-336-363-4719 email: [EMAIL PROTECTED] ---(end of broadcast)--- TIP 7: don't forget to increase your free space map settings
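Spelled out, the self-answer looks like this (note that adding the sort columns to the SELECT DISTINCT list also changes which rows count as distinct, which may or may not matter here):

SELECT DISTINCT event_code, level, event_date, event_time, event_secs
FROM logs JOIN stat ON (stat.prime IS NOT NULL)
WHERE order_num = 130680
ORDER BY event_date DESC, event_time DESC, event_secs DESC;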
Re: [SQL] NULLS and string concatenation
Slick ;o) This goes in my tool kit... On Friday 19 November 2004 03:03 pm, Gregory S. Williamson saith: > Someone on this list provided me with a rather elegant solution to this a > few weeks ago: > > CREATE OR REPLACE FUNCTION text_concat_nulls_with_an_embedded_space(text, > text) RETURNS text AS 'SELECT CASE WHEN $1 IS NULL THEN $2 WHEN $2 IS NULL > THEN $1 ELSE $1 || '' '' || $2 END' LANGUAGE sql; > > CREATE OPERATOR ||~ (PROCEDURE = text_concat_nulls_with_an_embedded_space, > LEFTARG = text, RIGHTARG = text); > > And I call it as: > SELECT (trim(s_directio) ||~ trim(s_house) ||~ trim(s_post_dir) ||~ > trim(s_street) ||~ trim(s_suffix)) as street ... (yadda yadda) > > Deals quite neatly with the NULLs in some of the columns. > > HTH, > > Greg Williamson > DBA > GlobeXplorer LLC > > -Original Message- > From: Michael Fuhr [mailto:[EMAIL PROTECTED] > Sent: Fri 11/19/2004 9:53 AM > To: Don Drake; [EMAIL PROTECTED] > Cc: > Subject: Re: [SQL] NULLS and string concatenation > > On Fri, Nov 19, 2004 at 11:45:43AM -0600, Bruno Wolff III wrote: > > On Fri, Nov 19, 2004 at 11:12:38 -0600, Don Drake <[EMAIL PROTECTED]> wrote: > > > I was able to work around the problem by using COALESCE (and casting > > > variables since it wants the same data types passed to it). > > > > This is what you should do. > > If you don't mind using a non-standard feature, another possibility > would be to create an operator similar to || that COALESCEs NULLs > into empty strings. > > -- > Michael Fuhr > http://www.fuhr.org/~mfuhr/ > > ---(end of broadcast)--- > TIP 4: Don't 'kill -9' the postmaster > > > > > ---(end of broadcast)--- > TIP 6: Have you searched our list archives? > >http://archives.postgresql.org -- Quote: 87 "The federal government has taken too much tax money from the people, too much authority from the states, and too much liberty with the Constitution." --Ronald Reagan Work: 1-336-372-6812 Cell: 1-336-363-4719 email: [EMAIL PROTECTED] ---(end of broadcast)--- TIP 5: Have you checked our extensive FAQ? http://www.postgresql.org/docs/faqs/FAQ.html
[SQL] Simple Question
Hello: I'm trying to figure out how to convert a floating point value into an interval of time. I'm calculating the time required to drive from point A to point B. For the sake of this question, we'll just say it is miles/speed. So: drv_time = 478 / 45.0; The value of this is: 10.6222 Is there a way of converting this value to an interval. It seems that INTERVAL only works with a quoted literal value. If I type: rnd=# select interval '10.8444 hours'; interval @ 10 hours 50 mins 40 secs (1 row) Anybody have an pointers? Thanks... Work: 1-336-372-6812 Cell: 1-336-363-4719 email: [EMAIL PROTECTED] ---(end of broadcast)--- TIP 4: Don't 'kill -9' the postmaster
Re: [SQL] Simple Question
I figured it out. This works: travel_window INTERVAL; drv_time FLOAT; drv_time = miles / 45.0;-- drive time travel_window = quote_literal(drv_time || '' hours''); The variable, travel_window becomes: @ 10 hours 50 mins 40 secs, which is what I wanted. If anybody knows any other ways, I'd be interested in see that too. On Tuesday 11 January 2005 04:42 pm, Terry Lee Tucker saith: > Hello: > > I'm trying to figure out how to convert a floating point value into an > interval of time. I'm calculating the time required to drive from point A > to point B. For the sake of this question, we'll just say it is > miles/speed. So: > > drv_time = 478 / 45.0; > > The value of this is: 10.6222 > > Is there a way of converting this value to an interval. It seems that > INTERVAL only works with a quoted literal value. > > If I type: > rnd=# select interval '10.8444 hours'; > interval > > @ 10 hours 50 mins 40 secs > (1 row) > > Anybody have an pointers? > > Thanks... > > > Work: 1-336-372-6812 > Cell: 1-336-363-4719 > email: [EMAIL PROTECTED] > > ---(end of broadcast)--- > TIP 4: Don't 'kill -9' the postmaster Work: 1-336-372-6812 Cell: 1-336-363-4719 email: [EMAIL PROTECTED] ---(end of broadcast)--- TIP 9: the planner will ignore your desire to choose an index scan if your joining column's datatypes do not match
Re: [SQL] Simple Question
Thank you for the reply in spite of the subject. On Tuesday 11 January 2005 05:15 pm, Michael Fuhr saith: > On Tue, Jan 11, 2005 at 04:42:21PM -0500, Terry Lee Tucker wrote: > > Subject: [SQL] Simple Question > > Please use a more descriptive subject -- think about how somebody > looking at a list of 200 messages, all with subjects like "Simple > Question" or "PostgreSQL Question," would decide to look at yours. I will do this in the future. > > > drv_time = 478 / 45.0; > > > > The value of this is: 10.6222 > > > > Is there a way of converting this value to an interval. It seems that > > INTERVAL only works with a quoted literal value. > > You can do arithmetic on intervals: > > SELECT 478 / 45.0 * interval'1 hour'; I like your soultion better than mine. Thanks for the answer. > > For more information, see "Date/Time Functions and Operators" in > the documentation. > > -- > Michael Fuhr > http://www.fuhr.org/~mfuhr/ > > ---(end of broadcast)--- > TIP 8: explain analyze is your friend Work: 1-336-372-6812 Cell: 1-336-363-4719 email: [EMAIL PROTECTED] ---(end of broadcast)--- TIP 4: Don't 'kill -9' the postmaster
Re: [SQL] Simple Question
Thanks for the reply. My answer was a little different than yours because I used 488 instead of 478. Well, that three ways so far ;o) On Tuesday 11 January 2005 05:06 pm, Guy Fraser saith: > Convert to seconds first (3600 sec/hr) : > > select ( > '3600'::int4 > * '478'::int4 > / '45.0'::float8 > )::int4::reltime::interval ; > interval > -- > 10:37:20 > (1 row) > > I don't know if "::int4::reltime::interval" is the best > way to end up with an interval, but its the only way I > could figure out how to do it off the top of my head. > > On Tue, 2005-11-01 at 16:42 -0500, Terry Lee Tucker wrote: > > Hello: > > > > I'm trying to figure out how to convert a floating point value into an > > interval of time. I'm calculating the time required to drive from point A > > to point B. For the sake of this question, we'll just say it is > > miles/speed. So: > > > > drv_time = 478 / 45.0; > > > > The value of this is: 10.6222 > > > > Is there a way of converting this value to an interval. It seems that > > INTERVAL only works with a quoted literal value. > > > > If I type: > > rnd=# select interval '10.8444 hours'; > > interval > > > > @ 10 hours 50 mins 40 secs > > (1 row) > > > > Anybody have an pointers? > > > > Thanks... > > > > > > Work: 1-336-372-6812 > > Cell: 1-336-363-4719 > > email: [EMAIL PROTECTED] > > > > ---(end of broadcast)--- > > TIP 4: Don't 'kill -9' the postmaster > > -- > Guy Fraser > Network Administrator > The Internet Centre > 1-888-450-6787 > (780)450-6787 > > > ---(end of broadcast)--- > TIP 8: explain analyze is your friend Work: 1-336-372-6812 Cell: 1-336-363-4719 email: [EMAIL PROTECTED] ---(end of broadcast)--- TIP 2: you can get off all lists at once with the unregister command (send "unregister YourEmailAddressHere" to [EMAIL PROTECTED])
Re: [SQL] Merry Xmas and a Happy New Year
Merry Christmas to you. On Friday 23 December 2005 09:16 am, Achilleus Mantzios saith: > to All! > > -- > -Achilleus > > > ---(end of broadcast)--- > TIP 3: Have you checked our extensive FAQ? > >http://www.postgresql.org/docs/faq ---(end of broadcast)--- TIP 1: if posting/reading through Usenet, please send an appropriate subscribe-nomail command to [EMAIL PROTECTED] so that your message can get through to the mailing list cleanly
Re: [SQL] Custom type
On Wednesday 22 March 2006 03:25 pm, Daniel Caune saith: > Hi, > > How can I enter description for my custom types? > > \dT provides information such as schema, name, and description for all > the registered types and custom types. I would like to provide a > description for each custom type I create. > > Thanks, > > > -- > Daniel CAUNE > Ubisoft Online Technology > (514) 4090 2040 ext. 5418 > Daniel, From the \h command in psql: rnd=# \h comment Command: COMMENT Description: define or change the comment of an object Syntax: COMMENT ON { TABLE object_name | COLUMN table_name.column_name | AGGREGATE agg_name (agg_type) | CONSTRAINT constraint_name ON table_name | DATABASE object_name | DOMAIN object_name | FUNCTION func_name (arg1_type, arg2_type, ...) | INDEX object_name | OPERATOR op (leftoperand_type, rightoperand_type) | RULE rule_name ON table_name | SCHEMA object_name | SEQUENCE object_name | TRIGGER trigger_name ON table_name | TYPE object_name | VIEW object_name } IS 'text' I believe this is what you need. HTH. ---(end of broadcast)--- TIP 1: if posting/reading through Usenet, please send an appropriate subscribe-nomail command to [EMAIL PROTECTED] so that your message can get through to the mailing list cleanly
Re: [SQL] Query from shell
On Thursday 06 April 2006 02:37 pm, Judith saith: >Hi every body, somebody can show me hot to execute a query from a > shell > > thanks in advanced!!! > > > ---(end of broadcast)--- > TIP 4: Have you searched our list archives? > >http://archives.postgresql.org Do this: psql -c 'SELECT code FROM cust' rnd; ---(end of broadcast)--- TIP 2: Don't 'kill -9' the postmaster
Re: [SQL] concatenation with a null column (using ||) nulls the result?
On Monday 10 April 2006 05:55 pm, Alvaro Herrera saith: > Neil Harkins wrote: > > Note: The cabinets_description for the "548-4th-Cab1" row is " ", > > not NULL, hence it being displayed. Is this standard SQL behavior? > > Yes; something || NULL yields NULL. If you want NULL to behave as "" > for the purpose of the concatenation, try > > SELECT cabinets_name || ' - ' || COALESCE(cabinets_description, '') AS > concat FROM cabinets WHERE cabinets_datacenters = 2; > > I'm assuming cabinets_name is NOT NULL, so it doesn't need COALESCE. > > -- Good to know. Thanks for the input... ---(end of broadcast)--- TIP 6: explain analyze is your friend
Re: [SQL] Simple plpgsql question
On Thursday 13 April 2006 11:38 pm, "Todd Kennedy" <[EMAIL PROTECTED]> thus communicated: --> Hi, --> --> I have, what I hope to be, a simple question about plpgsql. --> --> I have a trigger on a table right now that updates a count everytime --> that a new record is entered into a database (or removed). --> --> What I'd also like to do is have it create a new row in a different --> table using the automatically assigned id as a reference, but I'm --> unsure of how to obtain the id of the newly created row in the first --> table. --> --> Example: --> CREATE TABLE system_info ( --> id serial PRIMARY KEY, --> name varchar(255), --> value varchar(255) --> ); --> INSERT INTO system_info (name,value) VALUES ('total_users','0'); --> --> CREATE TABLE master ( --> id serial PRIMARY KEY, --> name varchar(32) NOT NULL CHECK ( name <> ''), --> UNIQUE(name) --> ); --> --> CREATE TABLE slave ( --> id serial PRIMARY KEY, --> master_id integer REFERENCES master (id), --> additional_info text --> ); --> --> CREATE OR REPLACE FUNCTION update_users() RETURNS trigger AS $$ --> BEGIN --> IF TG_OP = 'DELETE' THEN --> UPDATE system_info SET value=(value::integer)-1 WHERE name = --> 'total_users' --> RETURN OLD; --> ELSEIF TG_OP = 'INSERT' THEN --> UPDATE system_info SET value=(value::integer)+1 WHERE name = --> 'total_users'; --> INSERT INTO slave (master_id) VALUES (THIS IS WHAT I NEED TO KNOW); --> RETURN NEW; --> END IF; --> RETURN NULL; --> END; --> $$ LANGUAGE plpgsql; --> --> CREATE TRIGGER update_users AFTER INSERT OR DELETE ON master --> FOR EACH ROW EXECUTE PROCEDURE update_users(); --> --> --> The part I need to know is the INSERT INTO statement in the procedure. --> --> Any help would be great. --> --> Thanks! --> --> ---(end of broadcast)--- --> TIP 4: Have you searched our list archives? --> -->http://archives.postgresql.org --> INSERT INTO slave (master_id) VALUES (new.id); The buffer NEW contains all the "new" data. ---(end of broadcast)--- TIP 6: explain analyze is your friend
Re: [SQL] problem with uniques and foreing keys
On Saturday 06 May 2006 01:27 pm, "kernel.alert kernel.alert" <[EMAIL PROTECTED]> thus communicated: --> Hi list... --> --> Please i have a problem with this... --> --> I create the follow tables... --> --> --> --> CREATE TABLE empresa ( --> id_empresa integer NOT NULL primary key, --> nombre varchar(45), --> ); --> CREATE TABLE casino ( --> id_casino integer NOT NULL, --> id_empresa integer REFERENCES empresa(id_empresa), --> --> nombre varchar(45), --> --> primary key(id_casino,id_empresa) --> ); --> CREATE TABLE maq_casino ( --> id_empresa integer NOT NULL REFERENCES casino(id_empresa), --> id_casino integer NOT NULL REFERENCES casino(id_casino), --> --> ubicacion_sala varchar(45) default NULL, --> primary key(id_empresa,id_casino,id_tipo_maquina,id_maq_casino) --> ); --> --> --> --> When i'm gonna to create the last table i got this error: --> --> ERROR: no hay restriccion unique que coincida con las columnas dadas en la --> tabla referida <> --> --> That in english is like .. there is no a unique constraint with columns --> referred in casino table. --> --> --> Please where is the problem... --> --> Greetings ... --> --> --> -- The columns referenced in the maq_casino table must have UNIQUE constraints on them in their table definition as in: id_casino integer UNIQUE NOT NULL, ^^^ ---(end of broadcast)--- TIP 2: Don't 'kill -9' the postmaster
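Since casino's primary key is the pair (id_casino, id_empresa), another way through is a single table-level foreign key over both columns, which matches that primary key without adding extra UNIQUE constraints (a sketch trimmed to the columns actually shown in the question):

CREATE TABLE maq_casino (
    id_empresa integer NOT NULL,
    id_casino integer NOT NULL,
    ubicacion_sala varchar(45) DEFAULT NULL,
    PRIMARY KEY (id_empresa, id_casino),
    FOREIGN KEY (id_casino, id_empresa) REFERENCES casino (id_casino, id_empresa)
);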
Re: [SQL] About sequences that works BAD !!!!
On Wednesday 14 June 2006 02:02 pm, "Alexis Palma Espinosa" <[EMAIL PROTECTED]> thus communicated: --> Hello everyone: --> --> --> --> We are working with serials fields and we found a problem with then: When we insert in a table that has e unique restrict, and this makes insert fails, the sequence increments anyway...¿What we can do about it? --> --> --> --> We hope you can help us. --> --> --> This is doing exactly what it is supposed to do. See the docs: http://www.postgresql.org/docs/7.4/static/functions-sequence.html ---(end of broadcast)--- TIP 6: explain analyze is your friend
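The effect is easy to reproduce (a small sketch; the ids in the comments are what a fresh table hands out):

CREATE TABLE t (id serial, val text UNIQUE);
INSERT INTO t (val) VALUES ('a');  -- gets id 1
INSERT INTO t (val) VALUES ('a');  -- fails on the unique constraint,
                                   -- but the sequence has already advanced
INSERT INTO t (val) VALUES ('b');  -- gets id 3, leaving a gap at 2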
Re: [SQL] First day of month, last day of month
On Thursday 24 April 2008 10:47, Bart Degryse wrote: > Well, that's what it does afaikt. And what does afaikt mean? -- Terry Lee Tucker Turbo's IT Manager Turbo, division of Ozburn-Hessey Logistics 2251 Jesse Jewell Pkwy NE Gainesville, GA 30501 Tel: (336) 372-6812 Fax: (336) 372-6812 Cell: (336) 404-6987 [EMAIL PROTECTED] www.turbocorp.com
Re: [SQL] Case Insensitive searches
On Monday 04 August 2008 10:05, Richard Broersma wrote: > On Mon, Aug 4, 2008 at 6:54 AM, Mike Gould <[EMAIL PROTECTED]> wrote: > >In some db's if you > > use a lower() or upr() it will always do a table scan instead of using a > > index > > True, this would also happen in PostgreSQL. However, you can overcome > this by creating a "functional" index: > > http://www.postgresql.org/docs/8.3/interactive/indexes-expressional.html > > This way all expression using where lower( column ) = 'a'. will always > use an index scan. > > > -- What about using the operator, ~* ? Does that cause a table scan as well? -- Terry Lee Tucker Turbo's IT Manager Turbo, division of Ozburn-Hessey Logistics 2251 Jesse Jewell Pkwy NE Gainesville, GA 30501 Tel: (336) 372-6812 Fax: (336) 372-6812 Cell: (336) 404-6987 [EMAIL PROTECTED] www.turbocorp.com -- Sent via pgsql-sql mailing list (pgsql-sql@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-sql
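A minimal sketch of the expression ("functional") index Richard refers to, with hypothetical table and column names:

CREATE INDEX cust_lower_last_name_idx ON cust (lower(last_name));

-- Matches the index expression, so the planner can consider the index:
SELECT * FROM cust WHERE lower(last_name) = 'gould';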
Re: [SQL] Case Insensitive searches
On Monday 04 August 2008 11:09, Frank Bax wrote: > Terry Lee Tucker wrote: > > On Monday 04 August 2008 10:05, Richard Broersma wrote: > >> On Mon, Aug 4, 2008 at 6:54 AM, Mike Gould <[EMAIL PROTECTED]> wrote: > >>> In some db's if you > >>> use a lower() or upr() it will always do a table scan instead of using > >>> a index > >> > >> True, this would also happen in PostgreSQL. However, you can overcome > >> this by creating a "functional" index: > >> > >> http://www.postgresql.org/docs/8.3/interactive/indexes-expressional.html > >> > >> This way all expression using where lower( column ) = 'a'. will always > >> use an index scan. > > > > What about using the operator, ~* ? > > > > Does that cause a table scan as well? > > Whether or not any query uses an index scan or seq scan depends on many > factors and is not always easily predictable. > > Richard's statement about "will always use an index scan" is not > universally true. If the table is very small; a index scan is NOT used. > Table statistics could also indicate a seq scan is more efficient > (suppose 99% of rows had column='a'). > > The ~* operator is very likely to scan the entire table because it will > look for 'A' anywhere in the column (and will therefore match 'Joanne'; > and I doubt that there is special code to handle case where length of > argument is exactly the same as column. However; ~* '^a' which anchors > search to first character is perhaps more likely to use an index scan. > > Frank Frank, Thanks for the response. Actually, from within the applicaion, we use ~* and it is anchored with whatever they've typed in the widget as search criteria. Anyway, thanks for the helpful response... -- Terry Lee Tucker Turbo's IT Manager Turbo, division of Ozburn-Hessey Logistics 2251 Jesse Jewell Pkwy NE Gainesville, GA 30501 Tel: (336) 372-6812 Fax: (336) 372-6812 Cell: (336) 404-6987 [EMAIL PROTECTED] www.turbocorp.com -- Sent via pgsql-sql mailing list (pgsql-sql@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-sql
Re: [SQL] Case Insensitive searches
On Wednesday 06 August 2008 07:59, Rafael Domiciano wrote: > I have read the article... tnks, very helpful. > > But, can I create a index using function like "substring"? I would like to > create something like this: Actually, Richard Broersma is the one who commented on that approach. I have never done this but I have read about it. I'm sure it can be done. > > CREATE INDEX indtest_01 ON table_01 > ((SUBSTRING(month_year, 3, 4) || SUBSTRING(month_year, 1, 2)) > > 2008/8/4 Terry Lee Tucker <[EMAIL PROTECTED]> > > > On Monday 04 August 2008 11:09, Frank Bax wrote: > > > Terry Lee Tucker wrote: > > > > On Monday 04 August 2008 10:05, Richard Broersma wrote: > > > >> On Mon, Aug 4, 2008 at 6:54 AM, Mike Gould <[EMAIL PROTECTED]> > > > > wrote: > > > >>> In some db's if you > > > >>> use a lower() or upr() it will always do a table scan instead of > > > > using > > > > > >>> a index > > > >> > > > >> True, this would also happen in PostgreSQL. However, you can > > > >> overcome this by creating a "functional" index: > > > > http://www.postgresql.org/docs/8.3/interactive/indexes-expressional.html > > > > > >> This way all expression using where lower( column ) = 'a'. will > > > >> always use an index scan. > > > > > > > > What about using the operator, ~* ? > > > > > > > > Does that cause a table scan as well? > > > > > > Whether or not any query uses an index scan or seq scan depends on many > > > factors and is not always easily predictable. > > > > > > Richard's statement about "will always use an index scan" is not > > > universally true. If the table is very small; a index scan is NOT > > > used. Table statistics could also indicate a seq scan is more efficient > > > (suppose 99% of rows had column='a'). > > > > > > The ~* operator is very likely to scan the entire table because it will > > > look for 'A' anywhere in the column (and will therefore match 'Joanne'; > > > and I doubt that there is special code to handle case where length of > > > argument is exactly the same as column. However; ~* '^a' which anchors > > > search to first character is perhaps more likely to use an index scan. > > > > > > Frank > > > > Frank, > > > > Thanks for the response. Actually, from within the applicaion, we use ~* > > and > > it is anchored with whatever they've typed in the widget as search > > criteria. > > > > Anyway, thanks for the helpful response... > > -- > > Terry Lee Tucker > > Turbo's IT Manager > > Turbo, division of Ozburn-Hessey Logistics > > 2251 Jesse Jewell Pkwy NE > > Gainesville, GA 30501 > > Tel: (336) 372-6812 Fax: (336) 372-6812 Cell: (336) 404-6987 > > [EMAIL PROTECTED] > > www.turbocorp.com > > > > -- > > Sent via pgsql-sql mailing list (pgsql-sql@postgresql.org) > > To make changes to your subscription: > > http://www.postgresql.org/mailpref/pgsql-sql -- Terry Lee Tucker Turbo's IT Manager Turbo, division of Ozburn-Hessey Logistics 2251 Jesse Jewell Pkwy NE Gainesville, GA 30501 Tel: (336) 372-6812 Fax: (336) 372-6812 Cell: (336) 404-6987 [EMAIL PROTECTED] www.turbocorp.com -- Sent via pgsql-sql mailing list (pgsql-sql@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-sql
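Expression indexes do accept that sort of thing; the whole expression just needs its own extra set of parentheses. A sketch, assuming month_year really is text laid out as MMYYYY as the substrings suggest:

CREATE INDEX indtest_01 ON table_01
    ((SUBSTRING(month_year, 3, 4) || SUBSTRING(month_year, 1, 2)));

-- Only queries that spell out the same expression can use the index:
SELECT * FROM table_01
WHERE SUBSTRING(month_year, 3, 4) || SUBSTRING(month_year, 1, 2) >= '200801';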