Re: [SQL] metaphone and nysiis in postgres
Demel, Jeff wrote: Can this be installed easily on Windows? Try re-running the installer, it should let you tick various options to install from contrib. -- Richard Huxton Archonet Ltd ---(end of broadcast)--- TIP 5: don't forget to increase your free space map settings
Re: [SQL] Seeking quick way to clone a row, but give it a new pk.
On Thursday 08 February 2007 09:19, Bryce Nesbitt wrote: > > > > INSERT INTO mytable SELECT * FROM mytable WHERE pk = 123; > > > > Or something close to that... I suspect if you changed the '*' to the > > columns you wanted you could also work in the other columns you want > > to change as well... > > But that will violate the unique primary key constraint: > > insert into xx_plan_rule select * from xx_plan_rule where rule_id=9; > ERROR: duplicate key violates unique constraint "xx_plan_rule_pkey" It will, because you are copying all columns, including the pk. Try: INSERT INTO mytable (colname_1, colname_2, colname_3) SELECT colname_1, colname_2, colname_3 FROM mytable WHERE pk = 123; BR, -- Aarni Ruuhimäki ---(end of broadcast)--- TIP 2: Don't 'kill -9' the postmaster
Re: [SQL] interval as hours or minutes ?
On Thursday 08 February 2007 00:09, you wrote: > select extract(epoch from (timestamp '2007-02-07 16:24:00' - timestamp > '2007-02-05 13:00:01'))/60 as minutes; > > minutes > -- > 3083.983 > (1 row) > > select round(extract(epoch from (timestamp '2007-02-07 16:24:00' - > timestamp '2007-02-05 13:00:01'))/60) as minutes; > > minutes > - > 3084 > (1 row) Hi Guys, Charming! Further still, I would only want full minutes. select extract(epoch from (SELECT SUM(stop_date_time - start_date_time) FROM work_times WHERE user_id = 10 AND start_date_time >= '2007-01-01' AND stop_date_time <= '2007-02-28')/60) as mins; mins - 3728.73 (1 row) select convert_interval((SELECT SUM(stop_date_time - start_date_time) FROM work_times WHERE user_id = 10 AND start_date_time >= '2007-01-01' AND stop_date_time <= '2007-02-28'),'minutes') as minutes; minutes -- 3728.733 minutes (1 row) select round(extract(epoch from (SELECT SUM(stop_date_time - start_date_time) FROM work_times WHERE user_id = 10))/60) as mins; mins -- 3729 (1 row) So instead of rounding up to 3729 the result would have to be 'stripped' to 3728 ? Thanks, -- Aarni Ruuhimäki ---(end of broadcast)--- TIP 9: In versions below 8.0, the planner will ignore your desire to choose an index scan if your joining column's datatypes do not match
Re: [SQL] interval as hours or minutes ?
Use trunc instead of round. Also take a look at ceil and floor functions >>> Aarni Ruuhimäki <[EMAIL PROTECTED]> 2007-02-08 11:01 >>> On Thursday 08 February 2007 00:09, you wrote: > select extract(epoch from (timestamp '2007-02-07 16:24:00' - timestamp > '2007-02-05 13:00:01'))/60 as minutes; > > minutes > -- > 3083.983 > (1 row) > > select round(extract(epoch from (timestamp '2007-02-07 16:24:00' - > timestamp '2007-02-05 13:00:01'))/60) as minutes; > > minutes > - > 3084 > (1 row) Hi Guys, Charming ! Furher still, I would only want full minutes. select extract(epoch from (SELECT SUM(stop_date_time - start_date_time) FROM work_times WHERE user_id = 10 AND start_date_time >= '2007-01-01' AND stop_date_time <= '2007-02-28')/60) as mins; mins - 3728.73 (1 row) select convert_interval((SELECT SUM(stop_date_time - start_date_time) FROM work_times WHERE user_id = 10 AND start_date_time >= '2007-01-01' AND stop_date_time <= '2007-02-28'),'minutes') as minutes; minutes -- 3728.733 minutes (1 row) select round(extract(epoch from (SELECT SUM(stop_date_time - start_date_time) FROM work_times WHERE user_id = 10))/60) as mins; mins -- 3729 (1 row) So instead of rounding up to 3729 the result would have to be 'stripped' to 3728 ? Thanks, -- Aarni Ruuhimäki ---(end of broadcast)--- TIP 9: In versions below 8.0, the planner will ignore your desire to choose an index scan if your joining column's datatypes do not match
Re: [SQL] interval as hours or minutes ?
Ahh, Forgot about trunc() in the midst of all this ... Thank you guys again ! Aarni On Thursday 08 February 2007 12:06, Bart Degryse wrote: > Use trunc instead of round. > Also take a look at ceil and floor functions > > >>> Aarni Ruuhimäki <[EMAIL PROTECTED]> 2007-02-08 11:01 >>> > > On Thursday 08 February 2007 00:09, you wrote: > > select extract(epoch from (timestamp '2007-02-07 16:24:00' - > > timestamp > > > '2007-02-05 13:00:01'))/60 as minutes; > > > > minutes > > -- > > 3083.983 > > (1 row) > > > > select round(extract(epoch from (timestamp '2007-02-07 16:24:00' - > > timestamp '2007-02-05 13:00:01'))/60) as minutes; > > > > minutes > > - > > 3084 > > (1 row) > > Hi Guys, > > Charming ! > > Furher still, I would only want full minutes. > > select extract(epoch from (SELECT SUM(stop_date_time - start_date_time) > FROM > work_times WHERE user_id = 10 AND start_date_time >= '2007-01-01' AND > stop_date_time <= '2007-02-28')/60) as mins; > mins > - > 3728.73 > (1 row) > > select convert_interval((SELECT SUM(stop_date_time - start_date_time) > FROM > work_times WHERE user_id = 10 AND start_date_time >= '2007-01-01' AND > stop_date_time <= '2007-02-28'),'minutes') as minutes; > minutes > -- > 3728.733 minutes > (1 row) > > select round(extract(epoch from (SELECT SUM(stop_date_time - > start_date_time) > FROM work_times WHERE user_id = 10))/60) as mins; > mins > -- > 3729 > (1 row) > > So instead of rounding up to 3729 the result would have to be > 'stripped' to > 3728 ? > > Thanks, ---(end of broadcast)--- TIP 5: don't forget to increase your free space map settings
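For reference, a minimal sketch of the trunc() approach Bart suggests, applied to the timestamps from earlier in the thread; trunc() simply drops the fractional minutes instead of rounding them up:

    select trunc(extract(epoch from (timestamp '2007-02-07 16:24:00'
                                     - timestamp '2007-02-05 13:00:01')) / 60) as minutes;
    -- returns 3083 (whole elapsed minutes) rather than the rounded 3084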
Re: [SQL] Odd PL/PgSQL Error -- relation "X" does not exist when using index expression
On Wed, Feb 07, 2007 at 02:43:13PM -0800, Greg Wittel wrote: > I'm having a strange problem with a PL/PGSQL query that executes some > dynamic SQL code. The code basically creates a dynamically named table, > some indexes, etc. > > The problem seems to be the an index expression. If I remove it and do a > plain index on the column, all works correctly. If I keep it, I get a > "relation does not exist" error. The error appears to happen for anything that uses SPI. A C function that executes the following fails with the same error: SPI_exec("CREATE TABLE foo (t text); CREATE INDEX foo_idx ON foo (lower(t))", 0); In 8.2.3 the error location is: LOCATION: RangeVarGetRelid, namespace.c:200 -- Michael Fuhr ---(end of broadcast)--- TIP 3: Have you checked our extensive FAQ? http://www.postgresql.org/docs/faq
Re: [SQL] Odd PL/PgSQL Error -- relation "X" does not exist when using index expression
Michael Fuhr wrote: > On Wed, Feb 07, 2007 at 02:43:13PM -0800, Greg Wittel wrote: > > I'm having a strange problem with a PL/PGSQL query that executes some > > dynamic SQL code. The code basically creates a dynamically named table, > > some indexes, etc. > > > > The problem seems to be the an index expression. If I remove it and do a > > plain index on the column, all works correctly. If I keep it, I get a > > "relation does not exist" error. > > The error appears to happen for anything that uses SPI. A C function > that executes the following fails with the same error: > > SPI_exec("CREATE TABLE foo (t text); CREATE INDEX foo_idx ON foo (lower(t))", > 0); Hmm, are we short of a CommandCounterIncrement in the middle of both commands? Does the same error show up if you do SPI_exec("CREATE TABLE foo (t text);", 0); SPI_exec("CREATE INDEX foo_idx ON foo (lower(t))", 0); ? -- Alvaro Herrerahttp://www.CommandPrompt.com/ PostgreSQL Replication, Consulting, Custom Development, 24x7 support ---(end of broadcast)--- TIP 6: explain analyze is your friend
Re: [SQL] Odd PL/PgSQL Error -- relation "X" does not exist when using index expression
On Thu, Feb 08, 2007 at 11:14:33AM -0300, Alvaro Herrera wrote: > Michael Fuhr wrote: > > The error appears to happen for anything that uses SPI. A C function > > that executes the following fails with the same error: > > > > SPI_exec("CREATE TABLE foo (t text); CREATE INDEX foo_idx ON foo > > (lower(t))", 0); > > Hmm, are we short of a CommandCounterIncrement in the middle of both > commands? Does the same error show up if you do > > SPI_exec("CREATE TABLE foo (t text);", 0); > SPI_exec("CREATE INDEX foo_idx ON foo (lower(t))", 0); Using separate calls to SPI_exec() works. Using a single call to SPI_exec() works if the index is on (t) instead of (lower(t)): SPI_exec("CREATE TABLE foo (t text); CREATE INDEX foo_idx ON foo (t)", 0); -- Michael Fuhr ---(end of broadcast)--- TIP 4: Have you searched our list archives? http://archives.postgresql.org
Re: [SQL] Odd PL/PgSQL Error -- relation "X" does not exist when using index expression
Michael Fuhr <[EMAIL PROTECTED]> writes: > Using separate calls to SPI_exec() works. Using a single call to > SPI_exec() works if the index is on (t) instead of (lower(t)): > SPI_exec("CREATE TABLE foo (t text); CREATE INDEX foo_idx ON foo (t)", 0); It's only by chance that it works in that case: the current coding of transformIndexStmt happens not to try to touch the underlying table if there aren't any expressions to analyze. You can make a large number of variants that will fail, eg, create the table and try to insert into it in one command string. My advice is not to try to execute multiple commands in the same EXECUTE string --- if we were going to do anything to "fix" this, I think it would be along the lines of enforcing that advice. Trying to make the world safe for it doesn't sound productive. regards, tom lane ---(end of broadcast)--- TIP 3: Have you checked our extensive FAQ? http://www.postgresql.org/docs/faq
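To make that advice concrete for the PL/pgSQL case that started the thread, here is a sketch that issues each DDL statement through its own EXECUTE, so the index expression can see the table created just before it. The function, table, and index names are placeholders, not taken from Greg's code:

    CREATE OR REPLACE FUNCTION make_part(suffix text) RETURNS void AS $$
    BEGIN
        -- one command per EXECUTE instead of one semicolon-separated string
        EXECUTE 'CREATE TABLE foo_' || suffix || ' (t text)';
        EXECUTE 'CREATE INDEX foo_' || suffix || '_idx ON foo_'
                || suffix || ' (lower(t))';
    END;
    $$ LANGUAGE plpgsql;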
Re: [SQL] Odd PL/PgSQL Error -- relation "X" does not exist when using index expression
On Thu, Feb 08, 2007 at 10:32:25AM -0500, Tom Lane wrote: > My advice is not to try to execute multiple commands in the same EXECUTE > string --- if we were going to do anything to "fix" this, I think it > would be along the lines of enforcing that advice. Trying to make the > world safe for it doesn't sound productive. The SPI_execute() documentation does mention that multiple commands are allowed: http://www.postgresql.org/docs/8.2/interactive/spi-spi-execute.html "You may pass multiple commands in one string. SPI_execute returns the result for the command executed last. The count limit applies to each command separately, but it is not applied to hidden commands generated by rules. "When read_only is false, SPI_execute increments the command counter and computes a new snapshot before executing each command in the string." Should that documentation be modified? -- Michael Fuhr ---(end of broadcast)--- TIP 7: You can help support the PostgreSQL project by donating at http://www.postgresql.org/about/donate
Re: [SQL] Odd PL/PgSQL Error -- relation "X" does not exist when using index expression
Michael Fuhr <[EMAIL PROTECTED]> writes: > On Thu, Feb 08, 2007 at 10:32:25AM -0500, Tom Lane wrote: >> My advice is not to try to execute multiple commands in the same EXECUTE >> string --- if we were going to do anything to "fix" this, I think it >> would be along the lines of enforcing that advice. Trying to make the >> world safe for it doesn't sound productive. > The SPI_execute() documentation does mention that multiple commands > are allowed: Well, the point here is that there's one pass of parsing and one of execution, and you won't get far if the parsing pass requires an earlier command to have already been executed. So maybe the appropriate warning is something about not using interdependent DDL commands. Feel free to draft up a docs patch. regards, tom lane ---(end of broadcast)--- TIP 3: Have you checked our extensive FAQ? http://www.postgresql.org/docs/faq
Re: [SQL] Seeking quick way to clone a row, but give it a new pk.
> I need to create some nearly identical copies of rows in > a complicated table. > > Is there a handy syntax that would let me copy a existing row, > but get a new primary key for the copy? http://people.planetpostgresql.org/greg/index.php?/archives/45-Making-a-copy-of-a-unique-row.html -- Greg Sabino Mullane [EMAIL PROTECTED] PGP Key: 0x14964AC8 200702081114 http://biglumber.com/x/web?pk=2529DF6AB8F79407E94445B4BC9B906714964AC8 ---(end of broadcast)--- TIP 1: if posting/reading through Usenet, please send an appropriate subscribe-nomail command to [EMAIL PROTECTED] so that your message can get through to the mailing list cleanly
[SQL] Open a Transaction
Hi list, Could someone tell me what is wrong on this statement ? Start Transaction delete from base.something where id in( 41503, 41504, 41505, 41506, 41507, 41508, 41509, 41510, 41511, 41512, 41513, 41514, 41515, 41516, 41517, 41518, 41519, 41520, 41521, 41522, 41523, 41524, 41525, 41526, 41527, 41528, ) end; Commit; -- =-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-= Atenciosamente (Sincerely) Ezequias Rodrigues da Rocha =-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=- A pior das democracias ainda é melhor do que a melhor das ditaduras The worst of democracies is still better than the better of dictatorships http://ezequiasrocha.blogspot.com/ ---(end of broadcast)--- TIP 6: explain analyze is your friend
Re: [SQL] Open a Transaction
Hi Ezequias, On Thu, 2007-02-08 at 14:50 -0200, Ezequias Rodrigues da Rocha wrote: > Hi list, > > Could someone tell me what is wrong on this statement ? > > Start Transaction The above should read begin; > delete from base.something > where > id in( > 41503, > 41504, > 41505, > 41506, > 41507, > 41508, > 41509, > 41510, > 41511, > 41512, > 41513, > 41514, > 41515, > 41516, > 41517, > 41518, > 41519, > 41520, > 41521, > 41522, > 41523, > 41524, > 41525, > 41526, > 41527, > 41528, Remove the extra comma. > ) > end; Remove 'end'. > Commit; Joe ---(end of broadcast)--- TIP 9: In versions below 8.0, the planner will ignore your desire to choose an index scan if your joining column's datatypes do not match
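With those three changes applied, the statement would look roughly like this (id list shortened here):

    BEGIN;
    DELETE FROM base.something
    WHERE id IN (41503, 41504, 41505, /* ... */ 41528);
    COMMIT;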
Re: [SQL] metaphone and nysiis in postgres
I don't have a problem with doing this in our development environment, but when we go to move it onto the live production box, with all the live data and current custom settings, will a re-install cause issues? Data loss is unacceptable, of course, but it would also be nice if we don't have to juggle users, permissions, and settings. -Jeff -Original Message- From: Richard Huxton [mailto:[EMAIL PROTECTED] Sent: Thursday, February 08, 2007 3:06 AM To: Demel, Jeff Cc: pgsql-sql@postgresql.org Subject: Re: [SQL] metaphone and nysiis in postgres Demel, Jeff wrote: > Can this be installed easily on Windows? Try re-running the installer, it should let you tick various options to install from contrib. -- Richard Huxton Archonet Ltd This email is intended only for the individual or entity to which it is addressed. This email may contain information that is privileged, confidential or otherwise protected from disclosure. Dissemination, distribution or copying of this e-mail or any attachments by anyone other than the intended recipient, or an employee or agent responsible for delivering the message to the intended recipient, is prohibited. If you are not the intended recipient of this message or the employee or agent responsible for delivery of this email to the intended recipient, please notify the sender by replying to this message and then delete it from your system. Any use, dissemination, distribution, or reproduction of this message by unintended recipients is strictly prohibited and may be unlawful. ---(end of broadcast)--- TIP 3: Have you checked our extensive FAQ? http://www.postgresql.org/docs/faq
Re: [SQL] Open a Transaction
Now the SQL is OK, but I get the following error: ERROR: stack depth limit exceeded SQL state: 54001 Hint: Increase the configuration parameter "max_stack_depth". In the previous e-mail I shortened the list of items in my set (in (234, 12332, 1232, ...); actually I have more than 36000 items in my IN statement. :\ Any suggestion instead of changing my max_stack_depth ? Regards Ezequias 2007/2/8, Joe <[EMAIL PROTECTED] >: Hi Ezequias, On Thu, 2007-02-08 at 14:50 -0200, Ezequias Rodrigues da Rocha wrote: > Hi list, > > Could someone tell me what is wrong on this statement ? > > Start Transaction The above should read begin; > delete from base.something > where > id in( > 41503, > 41504, > 41505, > 41506, > 41507, > 41508, > 41509, > 41510, > 41511, > 41512, > 41513, > 41514, > 41515, > 41516, > 41517, > 41518, > 41519, > 41520, > 41521, > 41522, > 41523, > 41524, > 41525, > 41526, > 41527, > 41528, Remove the extra comma. > ) > end; Remove 'end'. > Commit; Joe -- =-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-= Atenciosamente (Sincerely) Ezequias Rodrigues da Rocha =-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=- A pior das democracias ainda é melhor do que a melhor das ditaduras The worst of democracies is still better than the better of dictatorships http://ezequiasrocha.blogspot.com/
Re: [SQL] Open a Transaction
At 17.50 08/02/2007, Ezequias Rodrigues da Rocha wrote: >Hi list, > >Could someone tell me what is wrong on this statement ? > >Start Transaction >delete from base.something >where >id in( >41503, >41504, >41505, >41506, >41507, >41508, >41509, >41510, >41511, >41512, >41513, >41514, >41515, >41516, >41517, >41518, >41519, >41520, >41521, >41522, >41523, >41524, >41525, >41526, >41527, >41528, ^ this comma I guess ;-)! >) >end; >Commit; > >-- > =-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-= > Atenciosamente (Sincerely) >Ezequias Rodrigues da Rocha > =-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=- >A pior das democracias ainda é melhor do que a melhor das ditaduras >The worst of democracies is still better than the better of dictatorships >http://ezequiasrocha.blogspot.com/ > >---(end of broadcast)--- >TIP 6: explain analyze is your friend > Roberto Fichera. ---(end of broadcast)--- TIP 9: In versions below 8.0, the planner will ignore your desire to choose an index scan if your joining column's datatypes do not match
Re: [SQL] Open a Transaction
On Thu, Feb 08, 2007 at 03:13:16PM -0200, Ezequias Rodrigues da Rocha wrote: > > Any suggestion instead of changing my max_stack_depth ? Well, I suppose you could put the numbers in a temp table and use IN on that. A -- Andrew Sullivan | [EMAIL PROTECTED] The fact that technology doesn't work is no bar to success in the marketplace. --Philip Greenspun ---(end of broadcast)--- TIP 3: Have you checked our extensive FAQ? http://www.postgresql.org/docs/faq
Re: [SQL] Open a Transaction
"Ezequias Rodrigues da Rocha" <[EMAIL PROTECTED]> writes: > Now the sql is OK but now I have the following error: > ERROR: stack depth limit exceeded > SQL state: 54001 > Hint: Increase the configuration parameter "max_stack_depth". > In the previous e-mail I hide the numbers of itens of my set > (in(234,12332,1232,) actually I have more than 36000 subsets of my IN > statement. That's probably well past the point at which you should expect IN (list) to give reasonable performance. Instead consider putting the values into a temp table and writing a join or IN (subselect) against the temp table. regards, tom lane ---(end of broadcast)--- TIP 4: Have you searched our list archives? http://archives.postgresql.org
Re: [SQL] Open a Transaction
I increased the "max_stack_depth" and the IN statement now runs OK. I don't know whether it is better to make this variable as big as possible or, as the manual suggests, to set it to the operating system's stack size limit (check it with ulimit -s). Ezequias 2007/2/8, Andrew Sullivan <[EMAIL PROTECTED]>: On Thu, Feb 08, 2007 at 03:13:16PM -0200, Ezequias Rodrigues da Rocha wrote: > > Any suggestion instead of changing my max_stack_depth ? Well, I suppose you could put the numbers in a temp table and use IN on that. A -- Andrew Sullivan | [EMAIL PROTECTED] The fact that technology doesn't work is no bar to success in the marketplace. --Philip Greenspun ---(end of broadcast)--- TIP 3: Have you checked our extensive FAQ? http://www.postgresql.org/docs/faq -- =-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-= Atenciosamente (Sincerely) Ezequias Rodrigues da Rocha =-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=- A pior das democracias ainda é melhor do que a melhor das ditaduras The worst of democracies is still better than the better of dictatorships http://ezequiasrocha.blogspot.com/
[SQL] COPY FROM - force a value
Is there a way to force a value when you're doing a COPY FROM, importing a file into a table? Here's my query as it is now: COPY filetable (value1, value2, value3, value4, forcevalue1, forcevalue2) FROM 'C:\\InsertFiles\\thisfile.txt' WITH DELIMITER AS ' ' ; The file only contains data for values 1 through 4. I'd like to insert values for the last two fields. This is what I had in mind, which doesn't work: COPY filetable (value1, value2, value3, value4, forcevalue1, forcevalue2) FROM 'C:\\InsertFiles\\thisfile.txt' WITH DELIMITER AS ' ', forcevalue1 = 1, forcevalue2 = 'this value' ; TIA -Jeff This email is intended only for the individual or entity to which it is addressed. This email may contain information that is privileged, confidential or otherwise protected from disclosure. Dissemination, distribution or copying of this e-mail or any attachments by anyone other than the intended recipient, or an employee or agent responsible for delivering the message to the intended recipient, is prohibited. If you are not the intended recipient of this message or the employee or agent responsible for delivery of this email to the intended recipient, please notify the sender by replying to this message and then delete it from your system. Any use, dissemination, distribution, or reproduction of this message by unintended recipients is strictly prohibited and may be unlawful. ---(end of broadcast)--- TIP 3: Have you checked our extensive FAQ? http://www.postgresql.org/docs/faq
Re: [SQL] COPY FROM - force a value
Demel, Jeff wrote: > Is there a way to force a value when you're doing a COPY FROM, importing > a file into a table? > > Here's my query as it is now: > > COPY filetable (value1, value2, value3, value4, forcevalue1, > forcevalue2) > FROM 'C:\\InsertFiles\\thisfile.txt' > WITH DELIMITER AS ' ' > ; > > The file only contains data for values 1 through 4. I'd like to insert > values for the last two fields. This is what I had in mind, which > doesn't work: > > COPY filetable (value1, value2, value3, value4, forcevalue1, > forcevalue2) > FROM 'C:\\InsertFiles\\thisfile.txt' > WITH DELIMITER AS ' ', > forcevalue1 = 1, > forcevalue2 = 'this value' > ; I'd try setting a DEFAULT for those two columns using ALTER TABLE, then the COPY FROM call excluding those columns, then removing the DEFAULT. If you do it in a transaction block, no other transaction can be molested by the default values, though they will be locked out of the table for the duration of that transaction. -- Alvaro Herrera http://www.CommandPrompt.com/ The PostgreSQL Company - Command Prompt, Inc. ---(end of broadcast)--- TIP 7: You can help support the PostgreSQL project by donating at http://www.postgresql.org/about/donate
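A sketch of what that looks like with the column names from Jeff's message (the DEFAULTs are dropped again before the transaction commits):

    BEGIN;
    ALTER TABLE filetable ALTER COLUMN forcevalue1 SET DEFAULT 1;
    ALTER TABLE filetable ALTER COLUMN forcevalue2 SET DEFAULT 'this value';
    COPY filetable (value1, value2, value3, value4)
        FROM 'C:\\InsertFiles\\thisfile.txt' WITH DELIMITER AS ' ';
    ALTER TABLE filetable ALTER COLUMN forcevalue1 DROP DEFAULT;
    ALTER TABLE filetable ALTER COLUMN forcevalue2 DROP DEFAULT;
    COMMIT;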
Re: [SQL] Differentiate Between Zero-Length String and NULLColumn Values
On 1/30/2007 3:17 PM, Jamie A Lawrence wrote: Just a datapoint: SQL*Plus: Release 10.1.0.3.0 - Production on Tue Jan 30 15:15:49 2007 Copyright (c) 1982, 2004, Oracle. All rights reserved. Connected to: Oracle Database 10g Enterprise Edition Release 10.1.0.3.0 - Production With the Partitioning, OLAP and Data Mining options It is well known that Oracle's handling of zero length strings violates all ANSI SQL Standards, so what exactly is your point? Jan SQL> select * from dual where '' IS NULL; D - X SQL> select * from dual where '' = NULL; no rows selected -j ---(end of broadcast)--- TIP 1: if posting/reading through Usenet, please send an appropriate subscribe-nomail command to [EMAIL PROTECTED] so that your message can get through to the mailing list cleanly -- #==# # It's easier to get forgiveness for being wrong than for being right. # # Let's break this rule - forgive me. # #== [EMAIL PROTECTED] # ---(end of broadcast)--- TIP 5: don't forget to increase your free space map settings
Re: [SQL] COPY FROM - force a value
That works like a charm. Thanks, Alvaro! -Jeff -Original Message- From: Alvaro Herrera [mailto:[EMAIL PROTECTED] Sent: Thursday, February 08, 2007 1:22 PM To: Demel, Jeff Cc: pgsql-sql@postgresql.org Subject: Re: [SQL] COPY FROM - force a value Demel, Jeff wrote: > Is there a way to force a value when you're doing a COPY FROM, > importing a file into a table? > > Here's my query as it is now: > > COPY filetable (value1, value2, value3, value4, forcevalue1, > forcevalue2) > FROM 'C:\\InsertFiles\\thisfile.txt' > WITH DELIMITER AS ' ' > ; > > The file only contains data for values 1 through 4. I'd like to > insert values for the last two fields. This is what I had in mind, > which doesn't work: > > COPY filetable (value1, value2, value3, value4, forcevalue1, > forcevalue2) > FROM 'C:\\InsertFiles\\thisfile.txt' > WITH DELIMITER AS ' ', > forcevalue1 = 1, > forcevalue2 = 'this value' > ; I'd try setting a DEFAULT for those two columns using ALTER TABLE, then the COPY FROM call excluding those columns, then removing the DEFAULT. If you do it in a transaction block, no other transaction can be molested by the default values, though they will be blocked of the table during that transaction. -- Alvaro Herrera http://www.CommandPrompt.com/ The PostgreSQL Company - Command Prompt, Inc. This email is intended only for the individual or entity to which it is addressed. This email may contain information that is privileged, confidential or otherwise protected from disclosure. Dissemination, distribution or copying of this e-mail or any attachments by anyone other than the intended recipient, or an employee or agent responsible for delivering the message to the intended recipient, is prohibited. If you are not the intended recipient of this message or the employee or agent responsible for delivery of this email to the intended recipient, please notify the sender by replying to this message and then delete it from your system. Any use, dissemination, distribution, or reproduction of this message by unintended recipients is strictly prohibited and may be unlawful. ---(end of broadcast)--- TIP 7: You can help support the PostgreSQL project by donating at http://www.postgresql.org/about/donate
Re: [SQL] metaphone and nysiis in postgres
On 8 Feb 2007 at 10:59, Demel, Jeff wrote: > I don't have a problem with doing this in our development environment, > but when we go to move it onto the live production box, with all the > live data and current custom settings, will a re-install cause issues? > Data loss is unacceptable, of course, but it would also be nice if we > don't have to juggle users, permissions, and settings. > > -Jeff Try looking in the share\contrib subdirectory of the install folder. Should be SQL files there to install/uninstall the contrib modules. -jan > > > -Original Message- > From: Richard Huxton [mailto:[EMAIL PROTECTED] > Sent: Thursday, February 08, 2007 3:06 AM > To: Demel, Jeff > Cc: pgsql-sql@postgresql.org > Subject: Re: [SQL] metaphone and nysiis in postgres > > Demel, Jeff wrote: > > Can this be installed easily on Windows? > > Try re-running the installer, it should let you tick various options to > install from contrib. > > -- >Richard Huxton >Archonet Ltd > This email is intended only for the individual or entity to which it is > addressed. This email may contain information that is privileged, > confidential or otherwise protected from disclosure. Dissemination, > distribution or copying of this e-mail or any attachments by anyone other > than the intended recipient, or an employee or agent responsible for > delivering the message to the intended recipient, is prohibited. If you are > not the intended recipient of this message or the employee or agent > responsible for delivery of this email to the intended recipient, please > notify the sender by replying to this message and then delete it from your > system. Any use, dissemination, distribution, or reproduction of this > message by unintended recipients is strictly prohibited and may be unlawful. > > ---(end of broadcast)--- > TIP 3: Have you checked our extensive FAQ? > >http://www.postgresql.org/docs/faq ---(end of broadcast)--- TIP 4: Have you searched our list archives? http://archives.postgresql.org
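For example, the fuzzystrmatch script (which is where metaphone() lives) only creates functions, so running it against the production database should not affect existing data, users, or permissions. The path below is purely illustrative and varies with the installed version:

    psql -U postgres -d livedb -f "C:\Program Files\PostgreSQL\8.2\share\contrib\fuzzystrmatch.sql"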
[SQL] huge disparities in =/IN/BETWEEN performance
all my SQL-writin' life i have been assuming that expressions like =, IN, BETWEEN in the WHERE clause are, in the most general sense, alternative ways of doing the same things. i am hitting some very very bizarre results in PGSQL: i have a (very involved) view, say v_foo, largely optimized to be queried by a user_id column, so: select * from v_foo where user_id = 70728; Time: 580.620 ms however an essentially synonymous IN construct takes minutes to complete (the subquery inside the IN clause will return the 70728 ID and by itself takes 40ms to complete): select * from v_foo where user_id in (select user_id from bar group by 1 having count(*) = 10 limit 1); Time: 244616.464 ms a synonymous-looking BETWEEN also takes forever: select * from v_foo where user_id between 70728 and 70728; Time: 329332.722 ms there is, admittedly, substantial complexity inside v_foo, with GROUP BYs on user_id and various subqueries, but my basic thought is that should not really matter... i am on 8.1.3. i'd like to hope that the 8.2 optimizer improvements might help with this but i haven't tested. george ---(end of broadcast)--- TIP 2: Don't 'kill -9' the postmaster
Re: [SQL] Seeking quick way to clone a row, but give it a new pk.
A pl/pgsql function can do this easily. Something like this (not tested): create or replace function dup_my_table(old_key text, new_key text) returns text as $$ declare rec my_table; begin select into rec * from my_table where key_field = old_key; rec.key_field := new_key; insert into my_table values (rec.*); return new_key; end; $$ language plpgsql; On Feb 7, 2007, at 4:21 PM, Bryce Nesbitt wrote: I need to create some nearly identical copies of rows in a complicated table. Is there a handy syntax that would let me copy a existing row, but get a new primary key for the copy? I'd then go in an edit the 1 or 2 additional columns that differ. The duplicate would be in the same table as the original. John DeSoi, Ph.D. http://pgedit.com/ Power Tools for PostgreSQL ---(end of broadcast)--- TIP 3: Have you checked our extensive FAQ? http://www.postgresql.org/docs/faq
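If the sketch above is used as-is (my_table and key_field are John's placeholders), calling it would look like:

    -- clone the row keyed 'abc' into a new row keyed 'xyz'
    SELECT dup_my_table('abc', 'xyz');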
Re: [SQL] huge disparities in =/IN/BETWEEN performance
"George Pavlov" <[EMAIL PROTECTED]> writes: > there is, admittedly, substantial complexity inside v_foo, with GROUP > BYs on user_id and various subqueries, but my basic thought is that > should not really matter... You're unlikely to get any useful comment on this when you have not shown any of those details, nor even an EXPLAIN. regards, tom lane ---(end of broadcast)--- TIP 5: don't forget to increase your free space map settings
Re: [SQL] huge disparities in =/IN/BETWEEN performance
> > BYs on user_id and various subqueries, but my basic thought is that > > should not really matter... > > You're unlikely to get any useful comment on this when you have not > shown any of those details, nor even an EXPLAIN. yes, i know. i guess i was partially just venting. sorry. the problem is that the view is very complex and cleansing it for general consumption and paring it down to some degree of readability is a lot of work. the basic question i have is fairly clear though: why saying "where x = 10" should be different (in ANY circumstance, not just mine) from saying "where x between 10 and 10" or from "where x in (select ... /* some query that returns 10 */)" ??? i am not really looking for help optimizing my view and query, more of a general idea of should this happen, when might this happen, why is it a good idea that this happens? is this a better statement of the issue? thanks for listening! george ---(end of broadcast)--- TIP 9: In versions below 8.0, the planner will ignore your desire to choose an index scan if your joining column's datatypes do not match
Re: [SQL] huge disparities in =/IN/BETWEEN performance
George Pavlov wrote: > the basic question i have is fairly clear though: why saying "where x = > 10" should be different (in ANY cicumstance, not just mine) from saying > "where x between 10 and 10" or from "where x in (select ... /* some > query that returns 10 */)" ??? I think the principle here is that the system is not gonna waste cycles on dumb queries. Supposedly, morphing "foo BETWEEN 10 and 10" into "foo=10" is not a trivial transformation, and it'd impose a planning cost on all non-dumb BETWEEN queries. That cost is best avoided: if you optimize for dumb users, the smart users then want you buried because you've lost performance doing useless work. -- Alvaro Herrerahttp://www.CommandPrompt.com/ PostgreSQL Replication, Consulting, Custom Development, 24x7 support ---(end of broadcast)--- TIP 9: In versions below 8.0, the planner will ignore your desire to choose an index scan if your joining column's datatypes do not match
Re: [SQL] huge disparities in =/IN/BETWEEN performance
Alvaro Herrera <[EMAIL PROTECTED]> writes: > I think the principle here is that the system is not gonna waste cycles > on dumb queries. Supposedly, morphing "foo BETWEEN 10 and 10" into > "foo=10" is not a trivial transformation, and it'd impose a planning > cost on all non-dumb BETWEEN queries. There's a datatype abstraction issue involved: what does it take to prove that "x >= 10 AND x <= 10" is equivalent to "x = 10"? This requires a nontrivial amount of knowledge about the operators involved. We could probably do it for operators appearing in a btree operator class, but as Alvaro says, it'd be cycles wasted for non-dumb queries. As for the IN case, I think we do simplify "x IN (one-expression)" to "x = one-expression", but "x IN (sub-select)" is a whole 'nother matter, especially when you're comparing it to a case where one-expression is a constant and so the planner can get good statistics about how many rows are likely to match. regards, tom lane ---(end of broadcast)--- TIP 1: if posting/reading through Usenet, please send an appropriate subscribe-nomail command to [EMAIL PROTECTED] so that your message can get through to the mailing list cleanly
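One workaround worth testing against George's query, offered here only as a sketch rather than a verified fix: because the subquery carries LIMIT 1, it can be written as a scalar comparison instead of IN, which sometimes lets the qualifier be pushed into the view much as a literal is. Comparing both forms with EXPLAIN ANALYZE would show whether it actually helps here.

    SELECT *
      FROM v_foo
     WHERE user_id = (SELECT user_id
                        FROM bar
                       GROUP BY 1
                      HAVING count(*) = 10
                       LIMIT 1);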
Re: [SQL] huge disparities in =/IN/BETWEEN performance
Hi Tom, On Thu, 2007-02-08 at 22:50 -0500, Tom Lane wrote: > There's a datatype abstraction issue involved: what does it take to > prove that "x >= 10 AND x <= 10" is equivalent to "x = 10"? This > requires a nontrivial amount of knowledge about the operators involved. > We could probably do it for operators appearing in a btree operator > class, but as Alvaro says, it'd be cycles wasted for non-dumb queries. Are you saying the planner is datatype-agnostic and can't tell that x is, say, as in the example above, an INTEGER and therefore cannot transform one expression into another? What about "x = 10 AND x < 5"? Can't it reduce that to FALSE? Joe ---(end of broadcast)--- TIP 1: if posting/reading through Usenet, please send an appropriate subscribe-nomail command to [EMAIL PROTECTED] so that your message can get through to the mailing list cleanly
Re: [SQL] huge disparities in =/IN/BETWEEN performance
Joe <[EMAIL PROTECTED]> writes: > Are you saying the planner is datatype-agnostic Certainly, but your other concerns don't follow from that. The issue at hand here is whether it's worth expending cycles on every query to try to detect a situation that only holds for a few. regards, tom lane ---(end of broadcast)--- TIP 2: Don't 'kill -9' the postmaster
Re: [SQL] huge disparities in =/IN/BETWEEN performance
Hi Tom, On Thu, 2007-02-08 at 23:24 -0500, Tom Lane wrote: > Certainly, but your other concerns don't follow from that. The issue at > hand here is whether it's worth expending cycles on every query to try > to detect a situation that only holds for a few. Those were George's concerns, but I was interested in knowing whether the planner transforms a query in any way to avoid, let's say, useless execution. George didn't provide the inside of his view, but it's possible that my earlier example could be rephrased as follows: create view v_foo as select * from tab where x < 5; select * from v_foo where x = 10; Presumably the planner has to transform the SELECT into select * from tab where x < 5 and x = 10; or something analogous. This is a simple example, but with complicated views or even joins and aggregates, the "useless execution" may not be that obvious to the "naked eye" but it would be to a boolean logic analyzer. As to whether these query instances represent few or are typical is arguable, and will depend on the type of application, level of knowledge among users, and what kind of interfaces are used to enter or generate the queries. Joe ---(end of broadcast)--- TIP 2: Don't 'kill -9' the postmaster