Re: [GENERAL] COPY command question

2009-03-17 Thread Raymond O'Donnell
On 17/03/2009 15:04, Ivano Luberti wrote:
 Thanks but it keeps on not finding the file: the warning has disappeared
 
 
 ERROR:  could not open file "c:\temp\anagraficaANIDIs.csv" for reading:
 No such file or directory

You haven't said whether the file is on the same machine as the server -
is this the case?

Ray.

--
Raymond O'Donnell, Director of Music, Galway Cathedral, Ireland
r...@iol.ie
Galway Cathedral Recitals: http://www.galwaycathedral.org/recitals
--

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] COPY command question

2009-03-17 Thread Ivano Luberti
I'm sorry, you are right: that is the problem.
I had interpreted that as meaning the file should reside on the same machine
where pgAdmin (or another client) runs, not the server.

Thank you again


Raymond O'Donnell ha scritto:
 On 17/03/2009 15:04, Ivano Luberti wrote:
   
 Thanks but it keeps on not finding the file: the warning has disappeared


 ERROR:  could not open file "c:\temp\anagraficaANIDIs.csv" for reading:
 No such file or directory
 

 You haven't said whether the file is on the same machine as the server -
 is this the case?

 Ray.

 --
 Raymond O'Donnell, Director of Music, Galway Cathedral, Ireland
 r...@iol.ie
 Galway Cathedral Recitals: http://www.galwaycathedral.org/recitals
 --

   

-- 
==
dott. Ivano Mario Luberti
Archimede Informatica societa' cooperativa a r. l.
Sede Operativa
Via Gereschi 36 - 56126- Pisa
tel.: +39-050-580959
tel/fax: +39-050-9711344
web: www.archicoop.it
==




Re: [GENERAL] COPY command question

2009-03-17 Thread Raymond O'Donnell
On 17/03/2009 15:28, Ivano Luberti wrote:
 I'm sorry, you are right: that is the problem.
 I had interpreted that as meaning the file should reside on the same machine
 where pgAdmin (or another client) runs, not the server.
 
 Thank you again

You're welcome! That actually cost me a half-hour or so of frustration
not long ago, so I was feeling your pain. :-)

Ray.

--
Raymond O'Donnell, Director of Music, Galway Cathedral, Ireland
r...@iol.ie
Galway Cathedral Recitals: http://www.galwaycathedral.org/recitals
--



Re: [GENERAL] COPY questions

2009-02-18 Thread Jeff Davis
On Wed, 2009-02-18 at 11:56 -0700, Bill Todd wrote:
 If the COPY command fails does it identify the offending row?

Yes, it tries to identify the failing row in the error message.

 After reading the manual and the wiki I assume that there is no way to 
 tell copy to start with the Nth record in the input file. Is that 
 correct? It seems like such an obvious feature I was surprised not to 
 find it. Thanks.

That's correct.

There are a lot of features that people could find useful: various
formats and various manipulations of the data before it's processed. If
all of those features were implemented, COPY would start to look more
like perl.

In general, it's best to preprocess the data yourself and pipe the
result to a COPY ... FROM STDIN command. That way you can actually use
perl if you want to.
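As a sketch of that approach (Python rather than perl; the table, file and
column layout are invented for illustration), a small filter that skips the
first N rows and tidies each field before the output is piped into
psql -c "COPY ... FROM STDIN":

```python
import sys

def preprocess(lines, skip=0, delimiter=","):
    """Skip the first `skip` rows and strip whitespace from each field,
    yielding tab-separated lines suitable for COPY ... FROM STDIN."""
    for i, line in enumerate(lines):
        if i < skip:
            continue
        fields = [f.strip() for f in line.rstrip("\n").split(delimiter)]
        yield "\t".join(fields)

if __name__ == "__main__" and not sys.stdin.isatty():
    # e.g.  python preprocess.py 100 < data.csv | psql -c "COPY emp FROM STDIN"
    skip = int(sys.argv[1]) if len(sys.argv) > 1 else 0
    for row in preprocess(sys.stdin, skip=skip):
        print(row)
```

This also gives you the "start at the Nth record" behaviour asked about above,
without COPY itself needing to support it.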

Regards,
Jeff Davis




Re: [GENERAL] COPY questions

2009-02-18 Thread Adrian Klaver
On Wednesday 18 February 2009 10:56:45 am Bill Todd wrote:
 If the COPY command fails does it identify the offending row?

 After reading the manual and the wiki I assume that there is no way to
 tell copy to start with the Nth record in the input file. Is that
 correct? It seems like such an obvious feature I was surprised not to
 find it. Thanks.

 Bill

Take a look at:
http://pgfoundry.org/projects/pgloader/

It offers what you are looking for.

-- 
Adrian Klaver
akla...@comcast.net



Re: [GENERAL] COPY questions

2009-02-18 Thread Bill Todd

Adrian Klaver wrote:

On Wednesday 18 February 2009 10:56:45 am Bill Todd wrote:
  

If the COPY command fails does it identify the offending row?

After reading the manual and the wiki I assume that there is no way to
tell copy to start with the Nth record in the input file. Is that
correct? It seems like such an obvious feature I was surprised not to
find it. Thanks.

Bill



Take a look at:
http://pgfoundry.org/projects/pgloader/

It offers what you are looking for.

  
Thanks for the suggestion, but pgloader appears to be a Linux-only 
solution and my environment is Windows. The other problem is that there 
is no documentation that I could find (other than a PDF made from slides).


Bill


Re: [GENERAL] COPY questions

2009-02-18 Thread Tony Caduto

Bill Todd wrote:
Thanks for the suggestion but pgloader appears to be a Linux only 
solution and my environment is Windows. The other problem is that 
there is no documentation that I could find (other than a PDF made 
from slides).


Bill

Bill,
pgloader is a Python app, so it should work on win32 as well.


Later,

Tony Caduto
AM Software Design
http://www.amsoftwaredesign.com
Home of Lightning Admin for PostgreSQL




Re: [GENERAL] COPY questions

2009-02-18 Thread Adrian Klaver
On Wednesday 18 February 2009 2:00:19 pm Tony Caduto wrote:
 Bill Todd wrote:
  Thanks for the suggestion but pgloader appears to be a Linux only
  solution and my environment is Windows. The other problem is that
  there is no documentation that I could find (other than a PDF made
  from slides).
 
  Bill

 Bill,
 pgloader is a Python app, so it should work on win32 as well.


 Later,

 Tony Caduto
 AM Software Design
 http://www.amsoftwaredesign.com
 Home of Lightning Admin for PostgreSQL

Documentation:
http://pgloader.projects.postgresql.org/

-- 
Adrian Klaver
akla...@comcast.net



Re: [GENERAL] COPy command question

2009-02-12 Thread Scott Marlowe
On Wed, Feb 11, 2009 at 11:22 AM, SHARMILA JOTHIRAJAH
sharmi...@yahoo.com wrote:
 Hi,
 A question about the Postgresql's COPY command.

 This is the syntax of this command from the manual

 COPY tablename [ ( column [, ...] ) ]
 FROM { 'filename' | STDIN }
 [ [ WITH ]
  .
 I want to migrate my tables from Oracle to Postgres.
 The COPY FROM command can take input from 'file' or 'STDIN'.
 Is it possible for the COPY command to take its input from a
 java program(which contains the oracle resultset) or any other way?

If that java program can provide direct input to postgresql then yes.
If everything has to be a prepared statement etc. then no.  Assuming
your java framework allows you to just throw input at the database, you'd
be able to give it the input line by line.

 I know I could get the Oracle rows in a csv format but
 Im trying to get it done without any file in between ?

 In short is it possible to use this 'COPY' command to migrate my tables'
 data from Oracle to Postgresql without using any file
  in between?

Sure, I can do it in PHP.  I've done it in PHP.  If your java
connectors have the facility to throw raw sql at pgsql then it should
work.



Re: [GENERAL] COPy command question

2009-02-12 Thread SHARMILA JOTHIRAJAH
Thanks all
This is my simple java code:

import java.sql.*;

public class Copy {
  public static void main(String[] args) throws Exception {
      String driverName1 = "org.postgresql.Driver";
      Class.forName(driverName1);
      Connection connection1 =
          DriverManager.getConnection(args[0], args[1], args[2]);
      // One comma-delimited string per row, ready for COPY.
      PreparedStatement pstmt = connection1.prepareStatement(
          "select employee_id || ',' || employee_name from Employee");
      ResultSet rs1 = pstmt.executeQuery();
      while (rs1.next()) {
          System.out.println(rs1.getString(1));
      }
      rs1.close();
      pstmt.close();
      connection1.close();
  }
}

And I pipe this to psql like this:

 ant/bin/ant copy -emacs | sed '1,3d' | sed '$d' | sed 'N;$!P;$!D;$d' | \
     psql -c "copy employee from STDIN WITH null 'NULL' DELIMITER ','" EMP

ant/bin/ant copy -emacs - I run the extraction using ant.
sed '1,3d' | sed '$d' | sed 'N;$!P;$!D;$d' - trims the unnecessary ant output
(the leading and trailing lines and any blank lines), so that the final output
is just the comma-delimited data that I feed to the psql COPY command...

It seems to work... I haven't checked the performance for big tables, and I'm
not sure how it scales. Do you know any other way of improving my java code to
retrieve the data faster or in batches?

Also, does COPY treat timestamp and LOB data differently?

Thanks
Sharmila


--- On Thu, 2/12/09, Scott Marlowe scott.marl...@gmail.com wrote:
From: Scott Marlowe scott.marl...@gmail.com
Subject: Re: [GENERAL] COPy command question
To: sharmi...@yahoo.com
Cc: General postgres mailing list pgsql-general@postgresql.org
Date: Thursday, February 12, 2009, 1:35 PM

On Wed, Feb 11, 2009 at 11:22 AM, SHARMILA JOTHIRAJAH
sharmi...@yahoo.com wrote:
 Hi,
 A question about the Postgresql's COPY command.

 This is the syntax of this command from the manual

 COPY tablename [ ( column [, ...] ) ]
 FROM { 'filename' | STDIN }
 [ [ WITH ]
  .
 I want to migrate my tables from Oracle to Postgres.
 The COPY FROM command can take input from 'file' or
'STDIN'.
 Is it possible for the COPY command to take its input from a
 java program(which contains the oracle resultset) or any other way?

If that java program can provide direct input to postgresql then yes.
If everything has to be a prepared statement etc. then no.  Assuming
your java framework allows you to just throw input at the database, you'd
be able to give it the input line by line.

 I know I could get the Oracle rows in a csv format but
 Im trying to get it done without any file in between ?

 In short is it possible to use this 'COPY' command to migrate my
tables'
 data from Oracle to Postgresql without using any file
  in between?

Sure, I can do it in PHP.  I've done it in PHP.  If your java
connectors have the facility to throw raw sql at pgsql then it should
work.




  

Re: [GENERAL] COPY command question

2009-02-11 Thread Sam Mason
On Wed, Feb 11, 2009 at 10:22:23AM -0800, Sharmila Jothirajah wrote:
 I want to migrate my tables from Oracle to Postgres. 
 The COPY FROM command can take input from 'file' or 'STDIN'.
 Is it possible for the COPY command to take its input from a
 java program(which contains the oracle resultset) or any other way?

STDIN just means from the same place as the rest of the SQL has come
from.  For example, if you're using JDBC, it would be possible to
generate a string containing:

  COPY tbl (col1,col2) FROM STDIN WITH CSV;
  1,2
  4,7
  12,37
  \.

and execute() the whole string.  There appear to be patches[1] available so
you can stream directly into the database rather than having to generate
a large string to pass in.
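Those patches appear to have since been folded into the standard PostgreSQL
JDBC driver as org.postgresql.copy.CopyManager. A rough sketch of streaming
rows without an intermediate file or giant string (the connection URL,
credentials and table name are hypothetical):

```java
import java.io.StringReader;
import java.sql.Connection;
import java.sql.DriverManager;

import org.postgresql.copy.CopyManager;
import org.postgresql.core.BaseConnection;

public class CopyStream {
    public static void main(String[] args) throws Exception {
        Connection conn = DriverManager.getConnection(
                "jdbc:postgresql://localhost/test", "user", "secret");
        // CopyManager streams a Reader straight into COPY ... FROM STDIN.
        CopyManager copy = new CopyManager((BaseConnection) conn);
        String rows = "1,2\n4,7\n12,37\n";
        long inserted = copy.copyIn(
                "COPY tbl (col1, col2) FROM STDIN WITH CSV",
                new StringReader(rows));
        System.out.println("rows copied: " + inserted);
        conn.close();
    }
}
```

In a real migration the Reader would wrap the Oracle resultset output rather
than an in-memory string.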

-- 
  Sam  http://samason.me.uk/
 
 [1] http://kato.iki.fi/sw/db/postgresql/jdbc/copy/



Re: [GENERAL] COPY command question

2009-02-11 Thread Kedar

Yes, it should work perfectly as suggested by Sam.

Check this for JDBC support:
http://kato.iki.fi/sw/db/postgresql/jdbc/copy/


Sam Mason wrote:


On Wed, Feb 11, 2009 at 10:22:23AM -0800, Sharmila Jothirajah wrote:
 

I want to migrate my tables from Oracle to Postgres. 
The COPY FROM command can take input from 'file' or 'STDIN'.

Is it possible for the COPY command to take its input from a
java program(which contains the oracle resultset) or any other way?
   



STDIN just means from the same place as the rest of the SQL has come
from.  For example, if you're using JDBC, it would be possible to
generate a string containing:

 COPY tbl (col1,col2) FROM STDIN WITH CSV;
 1,2
 4,7
 12,37
 \.

and execute() the whole string.  There appear to be patches[1] available so
you can stream directly into the database rather than having to generate
a large string to pass in.

 




--
Thanks & Regards 


Kedar Parikh
Netcore Solutions Pvt. Ltd.

Tel: +91 (22) 6662 8135
Mob: +91 9819634734
Email: ke...@netcore.co.in
Web: www.netcore.co.in 



Re: [GENERAL] COPY ... FROM Permission denied ...

2009-01-06 Thread Scott Marlowe
On Tue, Jan 6, 2009 at 11:41 AM, Pedro Doria Meunier
pdo...@netmadeira.com wrote:
 Hi All,

 This is a bit embarassing ... but ...

 I have a partial set of data that I want to restore via COPY ... FROM command

 I have created a public folder for the effect and chown'ed both the folder and
 the file to be fed into COPY to a+rw ...

 I switched users with su - postgres and connected to the DB with the psql
 command

 All I'm getting is a "Permission denied" upon issuing the COPY command from
 within psql interactive terminal! :O

What is the exact error you're getting?

It's usually better to use copy from stdin, which has none of these
problems.  It's the same syntax that pg_dump uses when it creates a
backup.

For example:

COPY b (a_i, b) FROM stdin;
1   moreabc
\.



Re: [GENERAL] COPY ... FROM Permission denied ...

2009-01-06 Thread Tom Lane
Pedro Doria Meunier pdo...@netmadeira.com writes:
 All I'm getting is a "Permission denied" upon issuing the COPY command from 
 within psql interactive terminal!

Since you didn't show what you did or what the error was, we're just
guessing ... but I'm going to guess that you should use \copy not COPY.
The server normally can't read files in your home directory.
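To make the distinction concrete (the table and paths here are made up for
illustration): COPY reads on the server, \copy reads on the client:

```sql
-- COPY runs on the *server*: the file must live on the server's
-- filesystem and be readable by the postgres user.
COPY mytable FROM '/var/lib/pgsql/import/mytable.csv' WITH CSV;

-- \copy runs in psql on the *client*: the file is read with your own
-- permissions and streamed to the server over the connection.
\copy mytable FROM '/home/pedro/mytable.csv' WITH CSV
```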

regards, tom lane



Re: [GENERAL] COPY ... FROM Permission denied ...

2009-01-06 Thread Pedro Doria Meunier
Hi Scott

Txs for replying.

Anyway I've found the problem (silly me... (blush)).
It had to do (of course) with the permissions in the folder tree ...
As soon as I moved the file into the data/ folder and executed the COPY ... 
FROM feeding it the file from that location, everything worked as expected. :]

Thanks again.

Pedro Doria Meunier
GSM: +351961720188
Skype: pdoriam

On Tuesday 06 January 2009 06:48:47 pm Scott Marlowe wrote:
 On Tue, Jan 6, 2009 at 11:41 AM, Pedro Doria Meunier

 pdo...@netmadeira.com wrote:
  Hi All,
 
  This is a bit embarassing ... but ...
 
  I have a partial set of data that I want to restore via COPY ... FROM
  command
 
  I have created a public folder for the effect and chown'ed both the
  folder and the file to be fed into COPY to a+rw ...
 
  I switched users with su - postgres and connected to the DB with the psql
  command
 
  All I'm getting is a Permission denied upon issuing the COPY command from
  within psql interactive terminal! :O

 What is the exact error you're getting?

 It's usually better to use copy from stdin, which has none of these
 problems.  It's the same syntax that pg_dump uses when it creates a
 backup.

 For example:

 COPY b (a_i, b) FROM stdin;
 1 moreabc
 \.






Re: [GENERAL] COPY ... FROM Permission denied ...

2009-01-06 Thread Craig Ringer
Pedro Doria Meunier wrote:

 I have created a public folder for the effect and chown'ed both the folder 
 and 
 the file to be fed into COPY to a+rw ...

The server user must also have at least execute ('x') permission (usually
via the "group" or "other" permission bits) on every directory
between the root directory (/) and the directory containing the files of
interest.
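A quick way to audit this, assuming the file lives under a hypothetical
/home/pedro/public (namei ships with util-linux):

```shell
# List the permissions on every path component; each directory needs
# at least 'x' for the user/group the server runs as.
namei -l /home/pedro/public/data.csv

# Grant traversal permission on the directories that lack it.
chmod o+x /home/pedro /home/pedro/public
```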

 All I'm getting is a Permission denied upon issuing the COPY command from 
 within psql interactive terminal! :O

Do you have SELinux on your system? It may be denying the server access
even though the plain UNIX permissions would otherwise allow it, because
the server normally has no good reason to be accessing files from
wherever you've put them.

--
Craig Ringer



Re: [GENERAL] Copy/delete issue

2008-12-23 Thread Herouth Maoz
Adrian Klaver wrote:

 On Sunday 21 December 2008 1:49:18 am Herouth Maoz wrote:
   
 Adrian Klaver wrote:
 
 Snip

 Are you sure the problem is not in $datefield = * . That the script
 that formats the data file is not correctly adding * to the right file.
 Seems almost like sometimes the second CMD is being run against the table
 that the first CMD should be run on. In other words it is not doing a
 complete delete , but a date based one, and you then import duplicate
 records.
   
 Thanks for your reply. The file containing the tables list is static -
 it doesn't change from one run to the next (unless I edit it personally).

 Herouth
 

 Well something is not static :) You mentioned this happens only with one 
 table. Have you tried running your procedure against that table only?
Well, every time this happens, I re-run the procedure, with all the
lines in the data files up to the given table deleted. And it works.
Then I restore the original data file. And the next day it works. It
only happens once in a while.
  Just 
 because a DELETE did not error does not mean it succeeded in the way you 
 wanted. You might want to throw a count() in the mix to see if you are really 
 clearing out the table the way you want to.
I wonder if there is a way to use the result of count() in \echo...
  Also is the actual data file static from one run to the next?
If you mean the data file that contains the list of tables, then yes. If
you mean the data in the table itself, then no, the data changes - new
records are added and old ones are updated.
  Would also help to see the schema for the 
 table involved and maybe a sample of the data, if that is possible.

   
A sample of the data would be a bit tricky, as this is customers'
private information. But the table schema is:

CREATE TABLE web1010.users
(
user_id                     CHAR(32)     PRIMARY KEY NOT NULL,
whitelabel_id               NUMERIC(21)  NOT NULL,
username                    VARCHAR(30)  NOT NULL,
password                    CHAR(32)     NOT NULL,
perms                       VARCHAR(255) NOT NULL,
first_name                  VARCHAR(40)  NULL,
last_name                   VARCHAR(40)  NULL,
total_points                INTEGER      DEFAULT 0 NOT NULL,
date_created                TIMESTAMP    NOT NULL,
date_birth                  TIMESTAMP    NULL,
gender                      INTEGER      NULL,
city_id                     NUMERIC(21)  NULL,
is_active                   SMALLINT     NOT NULL,
email                       VARCHAR(255) NULL,
subscriptin_id              NUMERIC(21)  NULL,
subscriptin_num_of_msg      INTEGER      NULL,
subscriptin_date_start      TIMESTAMP    NULL,
subscriptin_sent_datetime   TIMESTAMP    NULL,
subscriptin_credit_left     INTEGER      NULL,
subscriptin_status          INTEGER      NULL,
subscriptin_sent_reference  NUMERIC(21)  NULL,
first_time_subscribed       VARCHAR(10)  NULL,
sms_credit                  INTEGER      NULL,
reg_pid                     NUMERIC(21)  NULL,
spam_fl                     SMALLINT     NULL,
constraint PK_USERS unique (whitelabel_id,username)
)
;

I suppose this doesn't happen with other tables in the process, because
most other tables don't have two unique constraints in them - most only
have the primary key. But still, if everything is deleted from the
table, this should not be an issue...

I might take Dennis Brakhane's advice and replace the DELETE command
with TRUNCATE, as I see no harm in doing so. Nevertheless, DELETE should
either work or fail saying "could not delete because ...". Otherwise
PostgreSQL is not very reliable...


Thanks,

Herouth
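For what it's worth, one way to surface that count from the shell wrapper
rather than \echo (table name taken from the schema above; -t suppresses
headers, -A turns off alignment, -q silences notices):

```shell
ROWS=$(psql -qtA -c "SELECT count(*) FROM web1010.users")
echo "rows remaining after DELETE: $ROWS"
```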



Re: [GENERAL] Copy/delete issue

2008-12-23 Thread Adrian Klaver
On Tuesday 23 December 2008 6:43:56 am Herouth Maoz wrote:


 Well, every time this happens, I re-run the procedure, with all the
 lines in the data files up to the given table deleted. And it works.
 Then I restore the original data file. And the next day it works. It
 only happens once in a while.

See next comment.


   Also is the actual data file static from one run to the next?

 If you mean the data file that contains the list of tables, then yes. If
 you mean the data in the table itself, then no, the data changes - new
 records are added and old ones are updated.

I should have been more specific. You mentioned you repeat the procedure 5 
minutes or so after a failure.  Is there a change in the actual data between 
those runs?


   Would also help to see the schema for the
  table involved and maybe a sample of the data, if that is possible.

 A sample of the data would be a bit tricky, as this is customers'
 private information. But the table schema is:

 CREATE TABLE web1010.users
 (
 user_id                     CHAR(32)     PRIMARY KEY NOT NULL,
 whitelabel_id               NUMERIC(21)  NOT NULL,
 username                    VARCHAR(30)  NOT NULL,
 password                    CHAR(32)     NOT NULL,
 perms                       VARCHAR(255) NOT NULL,
 first_name                  VARCHAR(40)  NULL,
 last_name                   VARCHAR(40)  NULL,
 total_points                INTEGER      DEFAULT 0 NOT NULL,
 date_created                TIMESTAMP    NOT NULL,
 date_birth                  TIMESTAMP    NULL,
 gender                      INTEGER      NULL,
 city_id                     NUMERIC(21)  NULL,
 is_active                   SMALLINT     NOT NULL,
 email                       VARCHAR(255) NULL,
 subscriptin_id              NUMERIC(21)  NULL,
 subscriptin_num_of_msg      INTEGER      NULL,
 subscriptin_date_start      TIMESTAMP    NULL,
 subscriptin_sent_datetime   TIMESTAMP    NULL,
 subscriptin_credit_left     INTEGER      NULL,
 subscriptin_status          INTEGER      NULL,
 subscriptin_sent_reference  NUMERIC(21)  NULL,
 first_time_subscribed       VARCHAR(10)  NULL,
 sms_credit                  INTEGER      NULL,
 reg_pid                     NUMERIC(21)  NULL,
 spam_fl                     SMALLINT     NULL,
 constraint PK_USERS unique (whitelabel_id,username)
 )
 ;

 I suppose this doesn't happen with other tables in the process, because
 most other tables don't have two unique constraints in them - most only
 have the primary key. But still, if everything is deleted from the
 table, this should not be an issue...

In the original post you said the constraint violation was on the PK. Is that 
the case or is it on PK_USERS?


 I might take Dennis Brakhane's advice and replace the DELETE command
 with TRUNCATE, as I see no harm in doing so. Nevertheless, DELETE should
 either work or fail saying "could not delete because ...". Otherwise
 PostgreSQL is not very reliable...

Worth trying. However it does not answer the question of what is going on. 
While it is possible that there is a DELETE bug, I still believe it is a 
case of DELETE working in a way you are not expecting.



 Thanks,

 Herouth



-- 
Adrian Klaver
akla...@comcast.net



Re: [GENERAL] Copy/delete issue

2008-12-21 Thread Herouth Maoz
Adrian Klaver wrote:
 Snip

 Are you sure the problem is not in "$datefield" = "*"? That the script that 
 formats the data file is not correctly adding '*' to the right file? Seems 
 almost like sometimes the second CMD is being run against the table that the 
 first CMD should be run on. In other words it is not doing a complete 
 delete, but a date-based one, and you then import duplicate records.

   
Thanks for your reply. The file containing the tables list is static -
it doesn't change from one run to the next (unless I edit it personally).

Herouth



Re: [GENERAL] Copy/delete issue

2008-12-21 Thread Dennis Brakhane
(Sorry for the forward, I forgot to CC the list)

On Wed, Dec 17, 2008 at 9:38 AM, Herouth Maoz hero...@unicell.co.il wrote:
 and for non-transaction tables (ones that have records that might
 change but don't accumulate based on time) it's DELETE without WHERE.

In that case, you are better off using TRUNCATE instead.



Re: [GENERAL] Copy/delete issue

2008-12-21 Thread Adrian Klaver
On Sunday 21 December 2008 1:49:18 am Herouth Maoz wrote:
 Adrian Klaver wrote:
  Snip
 
  Are you sure the problem is not in $datefield = * . That the script
  that formats the data file is not correctly adding * to the right file.
  Seems almost like sometimes the second CMD is being run against the table
  that the first CMD should be run on. In other words it is not doing a
  complete delete , but a date based one, and you then import duplicate
  records.

 Thanks for your reply. The file containing the tables list is static -
 it doesn't change from one run to the next (unless I edit it personally).

 Herouth

Well something is not static :) You mentioned this happens only with one 
table. Have you tried running your procedure against that table only? Just 
because a DELETE did not error does not mean it succeeded in the way you 
wanted. You might want to throw a count() in the mix to see if you are really 
clearing out the table the way you want to. Also is the actual data file 
static from one run to the next? Would also help to see the schema for the 
table involved and maybe a sample of the data, if that is possible.

-- 
Adrian Klaver
akla...@comcast.net



Re: [GENERAL] Copy/delete issue

2008-12-17 Thread Adrian Klaver
On Wednesday 17 December 2008 12:38:40 am Herouth Maoz wrote:
 I have a strange situation that occurs every now and again.

 We have a reports system that gathers all the data from our various
 production systems during the night, where we can run heavy reports on
 it without loading the production databases.

 I have two shell scripts that do this nightly transfer of data. The
 production database is Sybase. So I have a shell script that scans a
 list of tables and databases and dumps them into a format suitable for
 postgres COPY. After it dumps everything, another shell script scans the
 same list, and loads each dump file into the proper table.

 The shell script first runs psql with a DELETE command. For transaction
 tables (ones where data accumulates by date) the records for two days
 are deleted, and for non-transaction tables (ones that have records that
 might change but don't accumulate based on time) it's DELETE without WHERE.

 I run psql with ON_ERROR_STOP and check the exit status. If the DELETE
 failed, I should get an error status, so I do not proceed to the copy.

 Then I run psql again, with ON_ERROR_STOP, and run a \copy command that
 loads the data to the same table.

 For some reason, once in a while, that fails. Always on the same table -
 violating the unique constraint of the primary key. Now, this is
 impossible because there was a successful delete beforehand, as I said,
 and the data comes from a database where that same primary key is
 enforced. Moreover, when I re-run the script, everything runs fine.

 This happens at least once a week - always with the same table.

 Can anybody think of a reason why psql will not report an error on
 deletion? Or why it would tell me that a constraint has been violated
 when loading the same data 5 minutes later works fine?

 Thanks,
 Herouth



 Here is the relevant shell code (the relevant table has '*' in the file
 for datefield):


Snip

 deletion.

 if [ "$datefield" = "*" ]
 then
     CMD="DELETE FROM $local_table"
 else
     CMD="DELETE FROM $local_table WHERE $datefield >= current_date - 2"
 fi

 # Run the deletion command

 echo -e "\\set ON_ERROR_STOP\n$CMD;" | $PSQLCMD -q -f - > $TMPFILE 2>&1


Snip

Are you sure the problem is not in "$datefield" = "*"? That the script that 
formats the data file is not correctly adding '*' to the right file? Seems 
almost like sometimes the second CMD is being run against the table that the 
first CMD should be run on. In other words it is not doing a complete 
delete, but a date-based one, and you then import duplicate records.



-- 
Adrian Klaver
akla...@comcast.net



Re: [GENERAL] COPY error with null date

2008-12-07 Thread Bill Todd

Joshua D. Drake wrote:

On Fri, 2008-12-05 at 12:00 -0700, Bill Todd wrote:
  
Joshua D. Drake wrote: 


On Thu, 2008-12-04 at 19:35 -0700, Bill Todd wrote:
  



null as IS NULL results in the following error.

ERROR:  syntax error at or near "is"
LINE 5: null as is null
^

** Error **

ERROR: syntax error at or near "is"
SQL state: 42601
Character: 109

Any other suggestions?




COPY foo FROM '/tmp/bar' NULL as 'NULL'

  

copy billing.contact from 'c:/export/contact.csv'
with csv delimiter as ','
null as 'NULL'
quote as '';

produces the same error. I am beginning to suspect this is impossible. :-(



Re: [GENERAL] COPY error with null date

2008-12-07 Thread Tom Lane
Bill Todd [EMAIL PROTECTED] writes:
 I am beginning to suspect this is impossible.

That's correct: see the COPY reference page.  A quoted value is never
considered to match the NULL string.

regards, tom lane
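A small demonstration of that rule (table and data invented): in CSV mode the
default NULL string is the empty string, so an unquoted empty field loads as
NULL, while a quoted empty field is always taken as a literal empty string:

```sql
CREATE TEMP TABLE t (id int, d date);

-- Unquoted empty field: matches the NULL string, loads as NULL.
COPY t FROM STDIN WITH CSV;
1,
\.

-- Quoted empty field: never treated as NULL, so for a date column
-- it fails with:
--   ERROR:  invalid input syntax for type date: ""
COPY t FROM STDIN WITH CSV;
2,""
\.
```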



Re: [GENERAL] COPY error with null date

2008-12-04 Thread Joshua D. Drake
On Thu, 2008-12-04 at 19:35 -0700, Bill Todd wrote:
 Using 8.3.3 I am trying to import a CSV file using the following copy 
 command.
 
 copy billing.contact from 'c:/export/contact.csv'
 with delimiter as ','
 null as ''
 csv quote as '';
 
 The following record causes the error shown below because the third field, 
 "", is a null date. How can I change the copy command above so that a null 
 date or number will be imported as null? I do not care whether empty strings 
 are imported as an empty string or as null.

null as IS NULL



 
 Bill
 
 IASAcctSys,09/09/1995,,...
 
 
 ERROR:  invalid input syntax for type date: ""
 CONTEXT:  COPY contact, line 6, column date: ""
 
 ** Error **
 
 ERROR: invalid input syntax for type date: ""
 SQL state: 22007
 Context: COPY contact, line 6, column date: ""
 
 
-- 
PostgreSQL
   Consulting, Development, Support, Training
   503-667-4564 - http://www.commandprompt.com/
   The PostgreSQL Company, serving since 1997




Re: [GENERAL] COPY with a variable path

2008-11-27 Thread Raymond O'Donnell
On 27/11/2008 20:52, Bill Todd wrote:

 Substituting the input parameter for the literal path does not work and
 neither does using PREPARE/EXECUTE. How can I pass the file path as a
 parameter?

You could write a pl/pgsql function which constructs the query as a
string and then runs it with EXECUTE.

Ray.
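A sketch of that approach (the table name is borrowed from earlier in the
thread; quote_literal guards the path string against injection and quoting
problems):

```sql
CREATE OR REPLACE FUNCTION import_contacts(path text) RETURNS void AS $$
BEGIN
    -- Build the COPY statement as a string, since COPY itself
    -- cannot take the filename as a parameter.
    EXECUTE 'COPY billing.contact FROM '
            || quote_literal(path)
            || ' WITH CSV';
END;
$$ LANGUAGE plpgsql;

-- Usage (the path is resolved on the server):
SELECT import_contacts('c:/export/contact.csv');
```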


--
Raymond O'Donnell, Director of Music, Galway Cathedral, Ireland
[EMAIL PROTECTED]
Galway Cathedral Recitals: http://www.galwaycathedral.org/recitals
--



Re: [GENERAL] COPY with a variable path

2008-11-27 Thread Bill Todd

Raymond O'Donnell wrote:

On 27/11/2008 20:52, Bill Todd wrote:

  

Substituting the input parameter for the literal path does not work and
neither does using PREPARE/EXECUTE. How can I pass the file path as a
parameter?



You could write a pl/pgsql function which constructs the query as a
string and then runs it with EXECUTE.

Ray.


--
Raymond O'Donnell, Director of Music, Galway Cathedral, Ireland
[EMAIL PROTECTED]
Galway Cathedral Recitals: http://www.galwaycathedral.org/recitals
--


  
According to the PostgreSQL help file EXECUTE is used to execute a 
prepared statement. I tried that but when I call PREPARE with COPY as 
the statement I get an error. Are you saying that I can also use EXECUTE as


EXECUTE AStringVariable;

where AStringVariable contains a valid SQL statement? If so, that is the 
solution.


Bill


Re: [GENERAL] COPY with a variable path

2008-11-27 Thread Bill Todd

Raymond O'Donnell wrote:

On 27/11/2008 20:52, Bill Todd wrote:

  

Substituting the input parameter for the literal path does not work and
neither does using PREPARE/EXECUTE. How can I pass the file path as a
parameter?



You could write a pl/pgsql function which constructs the query as a
string and then runs it with EXECUTE.

Ray.


--
Raymond O'Donnell, Director of Music, Galway Cathedral, Ireland
[EMAIL PROTECTED]
Galway Cathedral Recitals: http://www.galwaycathedral.org/recitals
--


  

I tried your suggestion and it worked! Thanks.

Bill


Re: [GENERAL] COPY with a variable path

2008-11-27 Thread Raymond O'Donnell
On 27/11/2008 23:09, Bill Todd wrote:
 Raymond O'Donnell wrote:

 You could write a pl/pgsql function which constructs the query as a
 string and then runs it with EXECUTE.

   
 According to the PostgreSQL help file EXECUTE is used to execute a
 prepared statement. I tried that but when I call PREPARE with COPY as
 the statement I get an error. Are you saying that I can also use EXECUTE as

Sorry, Bill, I wasn't clear enough - I was suggesting writing a pl/pgsql
function something like this (not tested):

  create function do_copy(path text) returns void
  as
  $$
  begin
    execute 'copy your_table from ' || quote_literal(path);
return;
  end;
  $$
  language plpgsql;


EXECUTE in pl/pgsql is different from the version you refer to
above; here it is in the docs:

http://www.postgresql.org/docs/8.3/static/plpgsql-statements.html#PLPGSQL-STATEMENTS-EXECUTING-DYN

Ray.

--
Raymond O'Donnell, Director of Music, Galway Cathedral, Ireland
[EMAIL PROTECTED]
Galway Cathedral Recitals: http://www.galwaycathedral.org/recitals
--
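[For context, an illustration not from the thread: when building such a dynamic command string, the file path has to be embedded as a SQL string literal — wrapped in single quotes, with any embedded single quote doubled, which is what PostgreSQL's quote_literal() does. A rough Python sketch of that quoting rule; the function names and table name are hypothetical:]

```python
def quote_literal(s: str) -> str:
    """Roughly what PostgreSQL's quote_literal() does: wrap the value in
    single quotes and double any embedded single quote."""
    return "'" + s.replace("'", "''") + "'"


def build_copy(table: str, path: str) -> str:
    # The table name is assumed trusted here; only the path is quoted.
    return "COPY " + table + " FROM " + quote_literal(path)


print(build_copy("your_table", "/data/o'brien.csv"))
# COPY your_table FROM '/data/o''brien.csv'
```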




Re: [GENERAL] COPY problem on -- strings

2008-11-20 Thread Sabin Coanda
 Is that the *first* error message you got?


Yes it is.

In fact I made a mistake in the first email, so instead:
INSERT INTO A ( Col1, Col2 )
VALUES (2, '-- any text' );

please change with:
INSERT INTO A ( Col1, Col2 )
VALUES (1, '-- any text' );

However I suppose this doesn't change the problem :(.

Regards,
Sabin 





Re: [GENERAL] COPY problem on -- strings

2008-11-20 Thread Richard Huxton
Sabin Coanda wrote:
 Hi,
 
 I have PostgreSQL 8.3.5, compiled by Visual C++ build 1400 on Windows OS.
 
 I try to use the COPY command to optimize the backup/restore performance, 
 but I found a problem. I reproduce it below.

I can't reproduce it here on 8.3 on linux.

 I backup the database plain with the command:
  pg_dump.exe -U postgres -F p -v -f backup_plain.sql DemoDB
 
 I create a new database, and I run the script. But it raises the error:
   ERROR:  syntax error at or near "1"
   LINE 49: 1 -- any text
 
 I look for the error line and I saw how pg_dump created the script 
 statement:
   COPY A (Col1, Col2) FROM stdin;
   1 -- any text
   \.

That's what I see too, and it's fine here.

Try trimming the file down to just those lines, manually create the
database and table and see if you can run the copy then. If so, then
Tom's right and there's an error before the COPY. If not, then you've
got something odd in the file (bad line-ending, invalid high-bit
character or some such).

-- 
  Richard Huxton
  Archonet Ltd



Re: [GENERAL] COPY problem on -- strings

2008-11-20 Thread Sabin Coanda
Sorry, my fault: I ran the script in the query window of pgAdmin, not in
the system console. I checked it again in the system console and it works
well.

Thanks,
Sabin 





Re: [GENERAL] COPY problem on -- strings

2008-11-19 Thread Tom Lane
Sabin Coanda [EMAIL PROTECTED] writes:
 I backup the database plain with the command:
  pg_dump.exe -U postgres -F p -v -f backup_plain.sql DemoDB

 I create a new database, and I run the script. But it raises the error:
   ERROR:  syntax error at or near "1"
   LINE 49: 1 -- any text

 I look for the error line and I saw how pg_dump created the script 
 statement:
   COPY A (Col1, Col2) FROM stdin;
   1 -- any text
   \.

Is that the *first* error message you got?

My guess is that something went wrong with the COPY command, so that
psql failed to switch into copy-data mode and is trying to interpret
the following stuff as SQL commands.

regards, tom lane



Re: [GENERAL] COPY TO duplicates \ signs

2008-11-05 Thread Albe Laurenz
Abraham, Danny wrote:
 String in DB:
 
 D:\Program Files\BMC Software\CONTROL-D\wa/reports
 
 In the output files the \ signs are duplicated: the string in the output text
 file is
 D:\\Program Files\\BMC Software\\CONTROL-D\\wa/reports
   == ==== ==
 
 Standard_conforming_strings will not help here.
 
 Any ideas? Help will be appreciated.

Can you reproduce this with psql?

Can you select length() of this field (to see if it is 50 characters
long or not)?

How are the output files generated?

Yours,
Laurenz Albe



Re: [GENERAL] COPY TO duplicates \ signs

2008-11-05 Thread Ivan Sergio Borgonovo
On Wed, 5 Nov 2008 05:06:36 -0600
Abraham, Danny [EMAIL PROTECTED] wrote:

 Hi,
 
 String in DB:
 
 D:\Program Files\BMC Software\CONTROL-D\wa/reports
 
 In the output files the \ signs are duplicated: the string in the output
 text file is
 D:\\Program Files\\BMC Software\\CONTROL-D\\wa/reports
   == ==== ==

try to play with the ESCAPE AS and set it as '' if you really want
to skip \.

http://www.postgresql.org/docs/8.1/static/sql-copy.html

-- 
Ivan Sergio Borgonovo
http://www.webthatworks.it
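[An illustration, not from the thread, of why the backslashes come out doubled: COPY's default text format uses backslash as its escape character, so a literal backslash in the data is written twice on output and folded back to one on load. A rough Python sketch of that round trip — not the server's actual code, and it ignores the other escapes COPY applies (tab, newline, delimiter):]

```python
def copy_text_escape(s: str) -> str:
    # COPY's text format writes a literal backslash as "\\";
    # the escapes for tab/newline/delimiter are omitted here.
    return s.replace("\\", "\\\\")


def copy_text_unescape(s: str) -> str:
    # Loading the file undoes the doubling.
    return s.replace("\\\\", "\\")


original = r"D:\Program Files\BMC Software\CONTROL-D\wa/reports"
dumped = copy_text_escape(original)
print(dumped)  # D:\\Program Files\\BMC Software\\CONTROL-D\\wa/reports
assert copy_text_unescape(dumped) == original
```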




Re: [GENERAL] \copy, transactions and permissions

2008-08-17 Thread Ivan Sergio Borgonovo
On Wed, 13 Aug 2008 16:32:18 -0500
ries van Twisk [EMAIL PROTECTED] wrote:

 On Aug 13, 2008, at 4:25 PM, Ivan Sergio Borgonovo wrote:
  I need to write an import function with enough isolation from
  apache daemon.
  Code has no input other than csv files and a signal about when to
  start the import.
  The sql code that will be executed will be static.
  I may end up writing a mini-daemon that just starts a SQL script
  or just polls from cron and feeds psql.
 
  If anyone has a better (lazier, cleaner) approach it will be very
  welcome.
 
 Lazier would be using JasperETL or any other ETL tool

*I did a simple test and \copy works as expected inside
transactions.*

But I wasn't able to split a \copy command across several lines to
improve readability and add comments.



Such ETL tools didn't seem suited for my current needs.

I downloaded a 190Mb app that has quite a bit of dependencies, tried
to run it and gave up the first time, unpacking took too long. I gave
a glimpse to the screenshot and I got the impression that at least
for me they have a harder learning curve than refreshing my awk
knowledge.

Out of curiosity I googled for etl postgresql and found kettle.
This one was a bit smaller download and I was able to run it
immediately. Still it looked more a tool suited for managers that
have to produce reports rather than for programmers.

They are surely more powerful than a 10 line awk script but I didn't
see a quick way to script them.

E.g. if I had to import 10 tables I'd have to repeat the same
click-drag-fill-connect sequence 10 times.

I've used MS DTS in SQL 2000 and I've found it has a faster learning
curve even if it seems it's not as flexible as JasperETL or kettle and
I've experienced the same problem of not being able to script it.

kettle exports an xml file that could be taken as a template for some
scripting work.
I didn't understand how to execute the resulting plan outside the
GUI.

I'm wondering if I didn't give them enough time or they are
definitively for use case that rarely cross my way.

I haven't used dblink in Postgresql yet but I've found the ability to
connect to different data sources (including Access and Excel) of
those ETL interesting.


-- 
Ivan Sergio Borgonovo
http://www.webthatworks.it




Re: [GENERAL] \copy, transactions and permissions

2008-08-13 Thread ries van Twisk



On Aug 13, 2008, at 4:25 PM, Ivan Sergio Borgonovo wrote:

I need to write an import function with enough isolation from apache
daemon.
Code has no input other than csv files and a signal about when to
start the import.
The sql code that will be executed will be static.
I may end up writing a mini-daemon that just starts a SQL script or
just polls from cron and feeds psql.

If anyone has a better (lazier, cleaner) approach it will be very
welcome.


Lazier would be using JasperETL or any other ETL tool

Ries



copy from file needs superuser rights, which I'd like to avoid.
copy from stdin looks like just a burden on me as a programmer.
\copy seems to be what I'm looking for.

Can I use \copy inside a transaction and is it going to be rolled
back if something goes wrong?

thanks

--
Ivan Sergio Borgonovo
http://www.webthatworks.it






Re: [GENERAL] Copy fails

2008-08-01 Thread Craig Ringer
Abraham, Danny wrote:
 I am loading a huge file using C, STDIN

Using C?

Have you written a C program using libpq to load some data, which it
reads from its stdin?

Or do you mean COPY FROM STDIN ?

Something else?

Perhaps if you provided a clearer and more complete explanation of your
problem you might get a more useful answer.

 The program fails immediately on "canceling statement due to statement
 timeout"

Do you have a statement timeout set in postgresql.conf ?

--
Craig Ringer



Re: [GENERAL] copy ... from stdin csv; and bytea

2008-07-28 Thread Klint Gore

David Wilson wrote:

On Mon, Jul 28, 2008 at 1:24 AM, Klint Gore [EMAIL PROTECTED] wrote:
 Try just a single \

 e.g.
 ge.xls,application/vnd.ms-excel,71168,\320\317\021\340\241[snip]

Thanks- I did try that, and it at least gave the expected output from
select, but is there a way to verify that it's actually handling it
correctly rather than simply storing the sequence of characters? I'm
not certain how to check the actual byte width of a column within a
row, and I'd *really* rather not be storing 4 bytes for every 1 in the
binary if I can avoid it- this column is already going to be doubling
field width; quadrupling it would give me space headaches I really
don't want to deal with. :)

  

select length(bytea_field) from table

You could use pg_relation_size(text) or
pg_total_relation_size(text) to see how much disk space it takes up.


You can play with the storage settings for the column if you want to try 
and handle the space better. see alter table set storage.


klint.

--
Klint Gore
Database Manager
Sheep CRC
A.G.B.U.
University of New England
Armidale NSW 2350

Ph: 02 6773 3789  
Fax: 02 6773 3266

EMail: [EMAIL PROTECTED]




Re: [GENERAL] copy ... from stdin csv; and bytea

2008-07-28 Thread Tom Lane
Klint Gore [EMAIL PROTECTED] writes:
 David Wilson wrote:
 I'm not certain how to check the actual byte width of a column within a
 row,

 select length(bytea_field) from table

If you want the actual on-disk footprint, use pg_column_size()

regards, tom lane



Re: [GENERAL] copy ... from stdin csv; and bytea

2008-07-28 Thread Klint Gore

Tom Lane wrote:

Klint Gore [EMAIL PROTECTED] writes:
 David Wilson wrote:
 I'm not certain how to check the actual byte width of a column within a
 row,

 select length(bytea_field) from table

If you want the actual on-disk footprint, use pg_column_size()

  
Size on disk would have the compression from the default storage = 
extended wouldn't it?


I verified it for myself manually anyway.

copy (select * from original limit 5) to stdout with csv;

create table foo (like original);
alter table foo alter column bytea_field set storage external;
copy foo from stdin with csv;

select reltoastrelid from pg_class where relname = 'original';

found the file for it and looked at it with a hex viewer.
klint.

--
Klint Gore
Database Manager
Sheep CRC
A.G.B.U.
University of New England
Armidale NSW 2350

Ph: 02 6773 3789  
Fax: 02 6773 3266

EMail: [EMAIL PROTECTED]




Re: [GENERAL] copy ... from stdin csv; and bytea

2008-07-27 Thread Klint Gore

David Wilson wrote:

My application is adding a bytea column to a table into which data is
dumped in approximately 4k row batches, one batch approximately every
10 seconds. To this point, those dumps have used copy from stdin;
however, I'm having some difficulty getting bytea encodings to work
with it. Specifically, I can't seem to get the parser to recognize
that what I'm handing it is an escaped string that needs to be parsed
back into individual bytes rather than stored as-is. The encoding is
very straightforward for INSERT, of course, but the COPY ... FROM
STDIN CSV doesn't seem to want to work no matter what I've tried:

\\000
\\000
E'\\000'

etc.

Is there a trick to this that I just didn't see in the documentation,
or is this some limitation of CSV copy-in? If the latter, are there
suggestions for workarounds other than to fallback on the inserts?
  

Try just a single \

e.g.
ge.xls,application/vnd.ms-excel,71168,\320\317\021\340\241[snip]

klint.

--
Klint Gore
Database Manager
Sheep CRC
A.G.B.U.
University of New England
Armidale NSW 2350

Ph: 02 6773 3789  
Fax: 02 6773 3266

EMail: [EMAIL PROTECTED]
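[An illustration, not from the thread: the \320\317\021... form above is COPY's text-format rendering of bytea, where each non-printable byte becomes a three-digit octal escape and a literal backslash is doubled. A rough Python sketch of producing that form — the handling of printable non-ASCII bytes and the exact server output vary by version, so treat this as an approximation rather than the real encoder:]

```python
def bytea_copy_escape(data: bytes) -> str:
    """Escape raw bytes roughly the way COPY's text format shows bytea:
    printable ASCII stays as-is, a backslash is doubled, and every other
    byte becomes a three-digit octal escape like \\320."""
    out = []
    for b in data:
        if b == 0x5C:            # backslash
            out.append("\\\\")
        elif 0x20 <= b <= 0x7E:  # printable ASCII
            out.append(chr(b))
        else:
            out.append("\\%03o" % b)
    return "".join(out)


# The first bytes of an .xls file, as in the example above.
print(bytea_copy_escape(b"\xd0\xcf\x11\xe0\xa1"))
# \320\317\021\340\241
```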




Re: [GENERAL] copy ... from stdin csv; and bytea

2008-07-27 Thread David Wilson
On Mon, Jul 28, 2008 at 1:24 AM, Klint Gore [EMAIL PROTECTED] wrote:
 Try just a single \

 e.g.
 ge.xls,application/vnd.ms-excel,71168,\320\317\021\340\241[snip]

Thanks- I did try that, and it at least gave the expected output from
select, but is there a way to verify that it's actually handling it
correctly rather than simply storing the sequence of characters? I'm
not certain how to check the actual byte width of a column within a
row, and I'd *really* rather not be storing 4 bytes for every 1 in the
binary if I can avoid it- this column is already going to be doubling
field width; quadrupling it would give me space headaches I really
don't want to deal with. :)

-- 
- David T. Wilson
[EMAIL PROTECTED]



Re: [GENERAL] COPY between 7.4.x and 8.3.x

2008-07-22 Thread Francisco Reyes
On 6:01 pm 07/21/08 Jack Orenstein [EMAIL PROTECTED] wrote:
 to this:
 psql -h $SOURCE_HOST ... -c "copy binary $SOURCE_SCHEMA.$SOURCE_TABLE to stdout" |\
  psql ... -c "copy binary $TARGET_SCHEMA.$TARGET_TABLE from stdin"

http://www.postgresql.org/docs/8.3/interactive/sql-copy.html

The BINARY key word causes all data to be stored/read as binary format
rather than as text. It is somewhat faster than the normal text mode, but a
binary-format file is less portable across machine architectures and
PostgreSQL versions.

I would suggest to not go that route.
However, you could just test it and see if it works.
If you are doing multiple tables I still think you should consider pg_dump
-Fc. You can restore just the data without the DDL.




Re: [GENERAL] COPY between 7.4.x and 8.3.x

2008-07-21 Thread Francisco Reyes
On 4:05 pm 07/21/08 Jack Orenstein [EMAIL PROTECTED] wrote:
 We will now be adding 8.3.x databases to the mix, and will need to
 copy between 7.4.x and 8.3.x in both directions. The datatypes we use

I believe it should work.
Also, one feature I believe started in the 8.X line (8.2?), is the ability
to have a subquery in the copy command to refine what rows you are getting.

 What if we do a binary copy instead? 
What do you mean by a binary copy?
pg_dump -Fc?

I think a plain pg_dump and copy will likely be more portable. Specially
going from 8.3 to 7.4.

Why will you keep copying data back and forth?
Not possible to setup a new 8.3, migrate to it and then upgrade the other
7.4 machine to 8.3?




Re: [GENERAL] COPY between 7.4.x and 8.3.x

2008-07-21 Thread Jack Orenstein

Francisco Reyes wrote:

On 4:05 pm 07/21/08 Jack Orenstein [EMAIL PROTECTED] wrote:


What if we do a binary copy instead? 

What do you mean by a binary copy?
pg_dump -Fc?


No, I mean changing this:

psql -h $SOURCE_HOST ... -c "copy $SOURCE_SCHEMA.$SOURCE_TABLE to stdout" |\
psql ... -c "copy $TARGET_SCHEMA.$TARGET_TABLE from stdin"

to this:

psql -h $SOURCE_HOST ... -c "copy binary $SOURCE_SCHEMA.$SOURCE_TABLE to stdout" |\
psql ... -c "copy binary $TARGET_SCHEMA.$TARGET_TABLE from stdin"



Why will you keep copying data back and forth?
Not possible to setup a new 8.3, migrate to it and then upgrade the other
7.4 machine to 8.3?


We're migrating a cluster from 7.4 to 8.3. To maintain availability, we need the 
7.4 and 8.3 databases up at the same time. We're copying data across versions 
for this reason.


Jack



Re: [GENERAL] COPY Performance

2008-05-05 Thread Hans Zaunere
   We're using a statement like this to dump between 500K and 5
  million rows.
 
   COPY(SELECT SomeID FROM SomeTable WHERE SomeColumn >= '0')
TO '/dev/shm/SomeFile.csv'
 
   Upon first run, this operation can take several minutes.  Upon
   second run, it will be complete in generally well under a minute.
 
 
 Almost certainly a buffering issue.  First time it's reading the file
 into memory WHILE also doing other things, file system wise.  Second
 time it's in memory (kernel cache) and zips right by.

 What can you do? First you need to see what's really happening, which
 means learning how to drive vmstat, iostat, top, etc to see what's
 happening on your machine.  You'll likely want to look into doing
 something that will reduce contention on the database partition set
 for starters.  Table spaces, big RAID arrays (big meaning a lot of
 spindles), battery backed RAID controller.

We do have some pretty fair hardware:

-- PowerEdge R900 with 132gb and 16 cores

-- PowerVault MD1000 with 1.6 TB

I am getting more exact information from the NOC.

The contention might not be an issue.  I ran a test last night with zero
activity on the server, and the results were the same.  In fact, even the
second run took the same amount of time as the first run.

The queries, however, are always very simple - a single column equal to or
greater than a single value.  And typically the column being either an
integer or a one byte char.

Would a cluster index or ordered table help here - partitioning - something
to get sequential reads from the disk?  I could experiment, but alas we only
have this one production environment right now, so I'm looking for low
hanging fruit from prior experiences.

The table is about 600+ columns wide with a total of 32 million rows.

Would SELECTing into a temporary table first, then doing the COPY(), make
any difference?

Regards,

H




Re: [GENERAL] COPY Performance

2008-05-05 Thread Hans Zaunere
  We're using a statement like this to dump between 500K and 5 million
  rows.
 
  COPY(SELECT SomeID FROM SomeTable WHERE SomeColumn >= '0')
TO '/dev/shm/SomeFile.csv'
 
  Upon first run, this operation can take several minutes.  Upon second
  run, it will be complete in generally well under a minute.
 
 Hmmm ... define first versus second.  What do you do to return it
 to the slow state?

Interesting that you ask.  I haven't found a very reliable way to reproduce
this.

Typically, just waiting a while to run the same query the second time will
reproduce this behavior.  I restarted postgresql and it was reproduced as
well.  However, I can't find a way to flush buffers/etc, to reproduce the
problem on-demand.  The first time vs second time is usually simply defined
as the temp. CSV file being created slowly once, and then fast, although
frankly sometimes it's always slow, and sometimes seemingly a bit quicker,
depending on the query.

As I mentioned in my other post, the query is always of the simple form
above - single column greater than or equal to a single value (int or single
byte char).

We had set shared_buffers to 1028M, which at first seemed to make a
significant difference.  With some testing, various queries ran under 20
seconds, the first time, and the second time at the same speed or a bit
quicker.

However, operations then restarted the system (and downgraded to Red Hat
5.0) and now we're back where we started.  Even with the increased
shared_buffers, things are slow - perhaps I'll try to increase it more.

Upon first starting postgresql, and executing the above statement, the
process begins to grow to 1gb and then stops.  When it stops, the CSV file
starts to trickle out to disk.

Best,

H





Re: [GENERAL] COPY Performance

2008-05-05 Thread Scott Marlowe
On Mon, May 5, 2008 at 6:18 AM, Hans Zaunere [EMAIL PROTECTED] wrote:
   We're using a statement like this to dump between 500K and 5 million
rows.
  
 COPY(SELECT SomeID FROM SomeTable WHERE SomeColumn >= '0')
  TO '/dev/shm/SomeFile.csv'
  
Upon first run, this operation can take several minutes.  Upon second
run, it will be complete in generally well under a minute.
  
   Hmmm ... define first versus second.  What do you do to return it
   to the slow state?

  Interesting that you ask.  I haven't found a very reliable way to reproduce
  this.

  Typically, just waiting a while to run the same query the second time will
  reproduce this behavior.  I restarted postgresql and it was reproduced as
  well.  However, I can't find a way to flush buffers/etc, to reproduce the

what happens if you do something like:

select count(*) from (select ...);

i.e. don't make the .csv file each time.  How's the performance
without making the csv versus making it?



Re: [GENERAL] COPY Performance

2008-05-05 Thread Scott Marlowe
On Sun, May 4, 2008 at 5:11 PM, Hans Zaunere [EMAIL PROTECTED] wrote:
 Hello,

  We're using a statement like this to dump between 500K and 5 million rows.

  COPY(SELECT SomeID FROM SomeTable WHERE SomeColumn >= '0')
   TO '/dev/shm/SomeFile.csv'

Wait, are you really creating a .csv file in shared memory?  Can such
a thing even work?

If you're not creating a file in /dev/shm, where is it being built?
On the main OS drive?  the battery backed cached RAID array?



Re: [GENERAL] COPY Performance

2008-05-05 Thread Scott Marlowe
On Mon, May 5, 2008 at 9:03 AM, Scott Marlowe [EMAIL PROTECTED] wrote:
 On Sun, May 4, 2008 at 5:11 PM, Hans Zaunere [EMAIL PROTECTED] wrote:
   Hello,
  
We're using a statement like this to dump between 500K and 5 million 
 rows.
  
    COPY(SELECT SomeID FROM SomeTable WHERE SomeColumn >= '0')
 TO '/dev/shm/SomeFile.csv'

  Wait, are you really creating a .csv file in shared memory?  Can such
  a thing even work?

  If you're not creating a file in /dev/shm, where is it being built?
  On the main OS drive?  the battery backed cached RAID array?

OK, looked it up.  You're making your .csv file in a ramdisk?  If it
gets big it's gonna make the machine start swapping.  I'd suggest
storing only small things in a ram disk really.  Other than that, I'm
not sure what the problem is.



Re: [GENERAL] COPY Performance

2008-05-05 Thread Hans Zaunere
We're using a statement like this to dump between 500K and 5
million rows.
   
    COPY(SELECT SomeID FROM SomeTable WHERE SomeColumn >= '0')
  TO '/dev/shm/SomeFile.csv'
   
Upon first run, this operation can take several minutes.  Upon
second run, it will be complete in generally well under a minute.
   
   Hmmm ... define first versus second.  What do you do to return
   it to the slow state?
 
   Interesting that you ask.  I haven't found a very reliable way to
   reproduce this.
 
   Typically, just waiting a while to run the same query the second
   time will reproduce this behavior.  I restarted postgresql and it
   was reproduced as well.  However, I can't find a way to flush
   buffers/etc, to reproduce the
 
 what happens if you do something like:
 
 select count(*) from (select ...);
 
 i.e. don't make the .csv file each time.  How's the performance
 without making the csv versus making it?

It's the same.

And regarding the /dev/shm, we do watch that memory doesn't become
contentious.  We've also done the dump to another set of disk spindles, and
we've seen the same performance.

So at the end of the day, it certainly does seem like a read-bottleneck off
of the disks.  Unfortunately, from a hardware perspective, there's not much
we can do about it currently.

Does anyone have any experiences they can share about using partitioning or
index tricks to speed up what should be basically large contiguous rows from
a table, based on a single column WHERE constraint?

H





Re: [GENERAL] COPY Performance

2008-05-04 Thread Tom Lane
Hans Zaunere [EMAIL PROTECTED] writes:
 We're using a statement like this to dump between 500K and 5 million rows.

 COPY(SELECT SomeID FROM SomeTable WHERE SomeColumn >= '0')
   TO '/dev/shm/SomeFile.csv'

 Upon first run, this operation can take several minutes.  Upon second run,
 it will be complete in generally well under a minute.

Hmmm ... define first versus second.  What do you do to return it to
the slow state?

regards, tom lane



Re: [GENERAL] COPY Performance

2008-05-04 Thread Scott Marlowe
On Sun, May 4, 2008 at 5:11 PM, Hans Zaunere [EMAIL PROTECTED] wrote:
 Hello,

  We're using a statement like this to dump between 500K and 5 million rows.

  COPY(SELECT SomeID FROM SomeTable WHERE SomeColumn >= '0')
   TO '/dev/shm/SomeFile.csv'

  Upon first run, this operation can take several minutes.  Upon second run,
  it will be complete in generally well under a minute.


Almost certainly a buffering issue.  First time it's reading the file
into memory WHILE also doing other things, file system wise.  Second
time it's in memory (kernel cache) and zips right by.

What can you do? First you need to see what's really happening, which
means learning how to drive vmstat, iostat, top, etc to see what's
happening on your machine.  You'll likely want to look into doing
something that will reduce contention on the database partition set
for starters.  Table spaces, big RAID arrays (big meaning a lot of
spindles), battery backed RAID controller.



Re: [GENERAL] COPY to STDOUT and pipes

2008-04-15 Thread Craig Ringer
kevin kempter wrote:

 Any thoughts on what I'm doing wrong?

I suspect that pg_dump is going to do a better job than using psql to
generate the input for the remote load. pg_dump can dump single tables
and can use COPY style data formatting.

As for why your current command isn't working ... You omitted the SQL in
`file2.sql' that you use for the restore, which makes things harder. At
a guess I'd say the stdin the input copy is seeing is expected to be
the text directly following the COPY ... FROM command in the input file,
rather than the psql command's stdin.

I do have one suggestion that's ugly but may work if you can't figure
out what's going wrong with the existing method and you're not happy
with using pg_dump for some reason:

You could potentially insert psql \echo commands into the first psql
command, so the command you're using to extract the data produces a
valid sequence of SQL commands that the second psql can read from stdin
(instead of using -f to read a command file). So if `file1.sql' becomes:

\echo 'COPY tablename FROM STDIN;'
copy (
select
  cust_id,
  cust_name,
  last_update_dt
from sl_cust
)
to STDOUT
with delimiter '|'
\echo '\\\.'

then you might be able to use a command line like:

psql -f file1.sql | psql -h newhost

(note that the second psql is reading the first one's stdout as its stdin).

--
Craig Ringer



Re: [GENERAL] COPY to STDOUT and pipes

2008-04-15 Thread Klint Gore

kevin kempter wrote:

Hi List;


I want to run a copy (based on a select) to STDOUT and pipe it to a 
psql copy from STDIN on a different host.


here's what I have:

1) a .sql file that looks like this:

copy (
select
  cust_id,
  cust_name,
  last_update_dt
from sl_cust
)
to STDOUT
with delimiter '|'


This works.

However I want to pipe the resulting data into a psql statement that 
does a copy IN to a table on another host. I can't seem to get it 
right. I tried this:


psql -f file1.sql | psql -h newhost -f file2.sql

where file1.sql is the copy statement above and file2.sql does a copy 
table from STDIN with delimiter '|'


Any thoughts on what I'm doing wrong?
no database unless you're logged in as someone with the same name as the
databases on both servers?

C:\psql -d gpdms_nunit -c "copy (select * from pg_class) to stdout with
delimiter '|'" | psql -d gpdms -c "create table foo (like pg_class);
copy foo from stdin with delimiter '|';"


works for me on 8.3.0 win32

klint.

--
Klint Gore
Database Manager
Sheep CRC
A.G.B.U.
University of New England
Armidale NSW 2350

Ph: 02 6773 3789  
Fax: 02 6773 3266

EMail: [EMAIL PROTECTED]




Re: [GENERAL] copy with escape

2008-02-25 Thread Tom Lane
blackwater dev [EMAIL PROTECTED] writes:
 I have data that I'm running through pg_escape_string in php and then adding
 to stdin for a copy command.  The problem is O'reilly is being changed to
 O''Reilly in the string and then in the db.

pg_escape_string is designed to produce a string properly quoted for use
as a literal in a SQL command.  It is completely wrong for data that is
to go into COPY input.  I kinda doubt that PHP has anything built-in
that's suitable for COPY, though I could be wrong ...
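
For illustration, COPY's text format needs no quote doubling at all: a single
quote passes through literally, and only backslash, tab, and newline need
escaping. A hand-written sketch (table and column names invented, columns
separated by a literal tab):

```sql
COPY authors (name, note) FROM STDIN;
O'Reilly	the quote needs no escaping in COPY text format
back\\slash	a literal backslash is written doubled
\.
```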

regards, tom lane



Re: [GENERAL] copy with escape

2008-02-25 Thread brian



blackwater dev wrote:

I have data that I'm running through pg_escape_string in php and then adding
to stdin for a copy command.  The problem is O'reilly is being changed to
O''Reilly in the string and then in the db.  I saw with the copy command I
can specify the escape but it isn't working for me.  Should this command fix
this double 'single' quote issue when I put it in the db?  And what is the
proper syntax?

COPY mytable FROM stdin with escape





COPY mytable (...) FROM STDIN WITH CSV ESCAPE "'";

http://www.postgresql.org/docs/8.3/static/sql-copy.html

But CSV comes with a lot of baggage. You'd be far better off doing 
tab-delimited, unquoted fields (if you have no tabs in your data).


COPY mytable (...) FROM STDIN;
...
\.

I can't remember precisely all of what pg_escape_string() does, but if 
you need it for something else you could always do this afterward ;-)


implode("\t", str_replace("''", "'", $row))

b



Re: [GENERAL] copy question - fixed width?

2008-02-11 Thread Dimitri Fontaine
Le lundi 11 février 2008, Klint Gore a écrit :
 Is there any way to make copy work with fixed width files?

I'll try to see about implementing this in pgloader, shouldn't be complex. But 
we have some other things on the TODO (which could get formalized by now...).

So at the moment the preprocessing sed script is a better idea than waiting 
for the pgloader release that will be able to process fixed-width input 
files with no separators.

Regards,
-- 
dim




Re: [GENERAL] copy question - fixed width?

2008-02-10 Thread Tom Lane
Klint Gore [EMAIL PROTECTED] writes:
 Is there any way to make copy work with fixed width files?

I'd suggest using a simple sed script to convert the data into the
format COPY understands.
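
A self-contained sketch of such a sed script, assuming two fields of widths 10
and 3 (the sample data and field widths are invented; GNU sed is assumed for
-E and the \t escape):

```shell
# create a tiny fixed-width sample: 10-char name field, 3-char code field
printf 'John      042\nMary      007\n' > fixed.txt
# insert a tab between the two fields, then trim the padding spaces before it
sed -E 's/^(.{10})(.{3})$/\1\t\2/; s/ +\t/\t/' fixed.txt > delim.txt
cat delim.txt
```

The resulting tab-delimited file can then be loaded with COPY ... FROM in the
default text format.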

regards, tom lane



Re: [GENERAL] COPY with composite type column

2007-12-26 Thread Reg Me Please
Il Wednesday 26 December 2007 12:58:34 Reg Me Please ha scritto:
 Hi all.

 I have this composite type:

 create type ct as (
   ct1 text,
   ct2 int
 );

 Then I have this table

 create table atable (
   somedata numeric,
   otherdata text,
   compo ct
 );

 when I try to COPY data to that table and use the following query
 I get a syntax error message:

 COPY atable( somedata,(ct).ct1 ) from stdin;
 ERROR:  syntax error at or near (
 LINE 1: COPY atable( somedata,(ct).ct1 ) from stdin;
   ^

 The caret points to the open parenthesis in (ct).
 Same error is reported on the first open parenthesis if I write
 ((ct.).ct1).

 Any hint on how to write this COPY?

OK.
I've managed to walk the first step.
The correct grammar is

COPY atable( somedata,ct ) FROM STDIN;

That is, you have to treat the composite-type column as a whole,
as stated in the COPY manual.
The data entered for the composite column must be enclosed within
parentheses, with the individual sub-columns separated by commas (!).
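
For example, with the atable definition above, the STDIN data might look like
this (the values are invented; columns are separated by a tab, and the third
column uses the composite literal syntax). In the second row the empty element
before the comma makes the ct1 sub-column NULL:

```sql
COPY atable (somedata, compo) FROM STDIN;
1.5	(hello,42)
2.0	(,7)
\.
```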

Is there a way to enter only some of the composite type's component columns?

-- 
Reg me, please!



Re: [GENERAL] COPY with composite type column

2007-12-26 Thread Reg Me Please
Il Wednesday 26 December 2007 14:31:04 Reg Me Please ha scritto:
 Il Wednesday 26 December 2007 12:58:34 Reg Me Please ha scritto:
  Hi all.
 
  I have this composite type:
 
  create type ct as (
ct1 text,
ct2 int
  );
 
  Then I have this table
 
  create table atable (
somedata numeric,
otherdata text,
compo ct
  );
 
  when I try to COPY data to that table and use the following query
  I get a syntax error message:
 
  COPY atable( somedata,(ct).ct1 ) from stdin;
  ERROR:  syntax error at or near (
  LINE 1: COPY atable( somedata,(ct).ct1 ) from stdin;
^
 
  The caret points to the open parenthesis in (ct).
  Same error is reported on the first open parenthesis if I write
  ((ct.).ct1).
 
  Any hint on how to write this COPY?

 OK.
 I've managed to walk the first step.
 The correct grammar is

 COPY atable( somedata,ct ) FROM STDIN;

 That is you have to consider the full composed type column.
 As stated into the COPY manual.
 The data to be entered for the composed type column is to be enclosed
 within parentheses. The single sub-columns are to be comma separated (!).

 Is there a way to just enter some of the composed types composing columns?

The answer is YES, by leaving empty values between the commas.
I found it by test-and-try: I've been too lazy to dig very deeply into
the 8.2 manual.

What seems not to be possible, though, is defining default values
for every single sub-column of a composite type.
Right?

Sorry for this autoanswering, though!

-- 
Reg me, please!



Re: [GENERAL] copy database by copying datafiles ?

2007-12-24 Thread Thomas Hart

I expect that it is not quite as easy as that.

My advice (as a non-expert) would be to install the same version of pg 
onto the target machine, and use etl 
(http://en.wikipedia.org/wiki/Extract,_transform,_load) to transfer the 
data. Basically you just need a small script (I like PHP myself, but 
it's definitely not always the best choice) to read the data from the 
source db, and insert it into the target db. Of course with 85gb of 
data, this could be a challenge, but I suspect your method of copying 
the data directory is not sufficient.


Perhaps you could instead write a script that pg_dump's a table at a 
time, and loads it into the target db (you could use pg_dumpall with 
appropriate flags to export the db/table structure only, and load from 
there).
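
A sketch of that table-at-a-time approach from the command line (host,
database, and table names are placeholders, not from the post):

```shell
pg_dumpall --schema-only > schema.sql          # db/table structure only
psql -h target -f schema.sql postgres          # load the structure
pg_dump --data-only -t mytable sourcedb | psql -h target targetdb
```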


In the end, as I said, I'm definitely not an expert. However, this is 
how I would go about this task. Hopefully I was at least able to give 
you some ideas.


Alex Vinogradovs wrote:

Guys,

I've created a copy of my database to run on a different server
by copying entire data directory while database instance was stopped.
Database generally works on the target machine, but I'm getting
following error when trying to create a plpgsql function :

ERROR: could not open relation 1664/0/1214: No such file or directory
SQL state: 58P01

I've checked, all the datafiles were copied correctly. I'm not familiar
with storage internals; maybe there are some ties to inode numbers,
and thus such procedures aren't allowed at all?

P.S. datafiles are 85GB in size, I couldn't really dump and restore...

Thank you!


Best regards,
Alex Vinogradovs




--
Tom Hart
IT Specialist
Cooperative Federal
723 Westcott St.
Syracuse, NY 13210
(315) 471-1116 ext. 202
(315) 476-0567 (fax)




Re: [GENERAL] copy database by copying datafiles ?

2007-12-24 Thread Richard Broersma Jr
--- On Mon, 12/24/07, Alex Vinogradovs [EMAIL PROTECTED] wrote:

 P.S. datafiles are 85GB in size, I couldn't really dump
 and restore...

Don't forget the steps of compressing and uncompressing between dump and 
restore. ;)  If the file is still too big, you can always use tar to split the 
file up to make the transition a bit more manageable.
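
split(1) is the usual tool for chopping a large dump into pieces; a
self-contained sketch of the round trip on a throwaway stand-in file
(file names and chunk size are invented):

```shell
# stand-in for a compressed dump file
head -c 100000 /dev/urandom > dump.gz
# cut it into 16 KB chunks named dump.gz.part_aa, dump.gz.part_ab, ...
split -b 16k dump.gz dump.gz.part_
# reassemble on the target side and verify it matches
cat dump.gz.part_* > dump_rejoined.gz
cmp dump.gz dump_rejoined.gz && echo "round trip OK"
```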

Regards,
Richard Broersma Jr.



Re: [GENERAL] copy database by copying datafiles ?

2007-12-24 Thread Alex Vinogradovs
Sorry guys, was my mistake... I found one file missing in global
tablespace. Copying it there fixed the problem.

Thanks everyone!



On Mon, 2007-12-24 at 14:07 -0800, Richard Broersma Jr wrote:
 --- On Mon, 12/24/07, Alex Vinogradovs [EMAIL PROTECTED] wrote:
 
  P.S. datafiles are 85GB in size, I couldn't really dump
  and restore...
 
  Don't forget the steps of compressing and uncompressing between dump and 
  restore. ;)  If the file is still too big, you can always use tar to split the 
 file up to make the transition a bit more manageable.
 
 Regards,
 Richard Broersma Jr.



Re: [GENERAL] copy a large table raises out of memory exception

2007-12-13 Thread Tomasz Ostrowski
On Mon, 10 Dec 2007, A. Ozen Akyurek wrote:

 We have a large table (about 9,000,000 rows and total size is about 2.8 GB)
 which is exported to a binary file.

How was it exported? With COPY tablename TO 'filename' WITH BINARY?

The BINARY key word causes all data to be stored/read as binary
format rather than as text. It is somewhat faster than the normal
text mode, but a binary-format file is less portable across machine
architectures and PostgreSQL versions.
http://www.postgresql.org/docs/8.2/static/sql-copy.html
Maybe you are bitten by this "less portable".

 When we run copy tablename from filepath command, (...) and
 postgre raises exception out of memory.

I'd try to use pg_dump/pg_restore in custom format, like this:
pg_dump -a -Fc -Z1 -f [filename] -t [tablename] [olddatabasename]
pg_restore -1 -a -d [newdatabasename] [filename]

Regards
Tometzky
-- 
...although Eating Honey was a very good thing to do, there was a
moment just before you began to eat it which was better than when you
were...
  Winnie the Pooh



Re: [GENERAL] COPY speedup

2007-12-13 Thread Tom Lane
Reg Me Please [EMAIL PROTECTED] writes:
 In order to speed up the COPY ... FROM ... command, I've
 disabled everything (primary key, not null, references, default and indexes)
 in the table definition before doing the actual COPY.
 Later I can restore them with ALTER TABLE ... and CREATE INDEX ...

 My question is: is all this necessary, or could I save some of them (maybe
 just the DEFAULT) with no speed cost?

Indexes and foreign key references are the only things that benefit
from this treatment.  DEFAULTs are irrelevant to a COPY, and simple
constraints (NOT NULL and CHECK) are not any faster to verify later
--- which makes dropping them slower, since you'll need an additional
table scan to verify them when they're re-added.
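
A sketch of that workflow (all object, column, and file names are invented):

```sql
ALTER TABLE big DROP CONSTRAINT big_other_fk;   -- drop the FK before the load
DROP INDEX big_col_idx;                         -- drop secondary indexes
COPY big FROM '/tmp/big.dat';                   -- bulk load
CREATE INDEX big_col_idx ON big (col);          -- rebuild in one pass
ALTER TABLE big ADD CONSTRAINT big_other_fk
  FOREIGN KEY (other_id) REFERENCES other (id); -- re-verify in one scan
```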

regards, tom lane



Re: [GENERAL] COPY speedup

2007-12-13 Thread Reg Me Please
Il Thursday 13 December 2007 19:56:02 Tom Lane ha scritto:
 Reg Me Please [EMAIL PROTECTED] writes:
  In order to speed up the COPY ... FROM ... command, I've
  disabled everything (primary key, not null, references, default and
  indexes) in the table definition before doing the actual COPY.
  Later I can restore them with ALTER TABLE ... and CREATE INDEX ...
 
  My question is: is all this necessary, or could I save some of them
  (maybe just the DEFAULT) with no speed cost?

 Indexes and foreign key references are the only things that benefit
 from this treatment.  DEFAULTs are irrelevant to a COPY, and simple
 constraints (NOT NULL and CHECK) are not any faster to verify later
 --- which makes dropping them slower, since you'll need an additional
 table scan to verify them when they're re-added.

   regards, tom lane

I'd suppose that foreign keys are to be disabled in order to speed things
up. Right?

-- 
Reg me, please!



Re: [GENERAL] COPY speedup

2007-12-13 Thread Merlin Moncure
On Dec 13, 2007 4:31 PM, Reg Me Please [EMAIL PROTECTED] wrote:
 Il Thursday 13 December 2007 19:56:02 Tom Lane ha scritto:
  Reg Me Please [EMAIL PROTECTED] writes:
   In order to speed up the COPY ... FROM ... command, I've
   disabled everything (primary key, not null, references, default and
   indexes) in the table definition before doing the actual COPY.
   Later I can restore them with ALTER TABLE ... and CREATE INDEX ...
  
   My question is: is all this necessary, or could I save some of them
   (maybe just the DEFAULT) with no speed cost?
 
  Indexes and foreign key references are the only things that benefit
  from this treatment.  DEFAULTs are irrelevant to a COPY, and simple
  constraints (NOT NULL and CHECK) are not any faster to verify later
  --- which makes dropping them slower, since you'll need an additional
  table scan to verify them when they're re-added.
 
regards, tom lane

 I'd suppose that foreign keys are to be disabled in order to speed things
 up. Right?

pg_restore has a --disable-triggers option which you can use to do
this in some cases.  otherwise you can make a simple function wrapper
to do this with some dynamic sql which disables the triggers for
you...
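
Outside of pg_restore, the same effect can be had by hand; note this requires
superuser rights, since it also disables the foreign-key enforcement triggers
(table and file names are invented):

```sql
ALTER TABLE mytable DISABLE TRIGGER ALL;
COPY mytable FROM '/tmp/mytable.dat';
ALTER TABLE mytable ENABLE TRIGGER ALL;
```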

merlin



Re: [GENERAL] \copy ... with null as '' csv doesn't get nulls

2007-11-29 Thread Lew

Lew wrote:

Try eliminating the double quotes in the CSV file.  Wannabe NULL
would then be ,, (consecutive commas)
in the CSV.  From the docs, you don't even need the NULL AS
clause in your COPY statement.


Ivan Sergio Borgonovo wrote:

Exactly what I did because fortunately there weren't too many chances
of weird stuff in 2000 records (sed -e 's/,""/,/g').


And this worked, right?


Anyway with NULL AS '' and without it I can't still import NULL
without pre-processing.


What pre-processing is that?


I thought it may be missing total support of UTF-8 or if I did
something wrong or it is actually a feature.


This clearly has nothing to do with UTF-8, and everything to do with comma-comma 
representing a NULL and comma-quote-quote-comma representing an empty string.


--
Lew
This post contained two requests for responses.



Re: [GENERAL] \copy ... with null as '' csv doesn't get nulls

2007-11-29 Thread Ivan Sergio Borgonovo
On Tue, 27 Nov 2007 21:12:00 -0500
Lew [EMAIL PROTECTED] wrote:

 Lew wrote:
  Try eliminating the double quotes in the CSV file.  Wannabe
  NULL would then be ,, (consecutive commas)
  in the CSV.  From the docs, you don't even need the NULL AS
  clause in your COPY statement.
 
 Ivan Sergio Borgonovo wrote:
  Exactly what I did because fortunately there weren't too many
  chances of weird stuff in 2000 records (sed -e 's/,/,/g').
 
 And this worked, right?

right and I call it pre-processing.

  I thought it may be missing total support of UTF-8 or if I did
  something wrong or it is actually a feature.

  This clearly has nothing to do with UTF-8, and everything to do with
  comma-comma representing a NULL and comma-quote-quote-comma
 representing an empty string.

OK... misinterpreted.

I thought that NULL AS '' means ,'',[1] so that empty strings could be
imported as NULL if necessary and as empty string if not.

So as I understand it, there is no way to use \copy and insert NULL
when it encounters an empty string, and NULL AS '' doesn't do anything
in CSV mode, since when I have ,, it actually imports NULL and when I
have ,'', it imports empty strings, which is the same behaviour I get
without NULL AS ''.

Correct?
If it is I found the documentation a bit misleading. I admit it could
be due to not being English mother tongue.

thx

[1] I did try with '', '', '\\'...

-- 
Ivan Sergio Borgonovo
http://www.webthatworks.it




Re: [GENERAL] \copy ... with null as '' csv doesn't get nulls

2007-11-29 Thread Adrian Klaver
On Thursday 29 November 2007 2:40 pm, Ivan Sergio Borgonovo wrote:
 On Tue, 27 Nov 2007 21:12:00 -0500

 Lew [EMAIL PROTECTED] wrote:
  Lew wrote:
   Try eliminating the double quotes in the CSV file.  Wannabe
   NULL would then be ,, (consecutive commas)
   in the CSV.  From the docs, you don't even need the NULL AS
   clause in your COPY statement.
 
  Ivan Sergio Borgonovo wrote:
   Exactly what I did because fortunately there weren't too many
   chances of weird stuff in 2000 records (sed -e 's/,""/,/g').
 
  And this worked, right?

 right and I call it pre-processing.

   I thought it may be missing total support of UTF-8 or if I did
   something wrong or it is actually a feature.
 
   This clearly has nothing to do with UTF-8, and everything to do with
   comma-comma representing a NULL and comma-quote-quote-comma
  representing an empty string.

 OK... misinterpreted.

 I thought that NULL AS '' means ,'',[1] so that empty strings could be
 imported as NULL if necessary and as empty string if not.

 So at my understanding there is no way to use \copy and insert NULL
 when it encounter an empty string and NULL AS '' doesn't do anything
 in CSV mode since when I have ,, it actually imports NULL and when I
 have ,'', it imports empty strings that is the same behaviour I get
 without NULL AS ''.

 Correct?
 If it is I found the documentation a bit misleading. I admit it could
 be due to not being English mother tongue.

 thx

 [1] I did try with '', '', '\\'...

I tried this with 8.2 on a test case. To get it to work I needed to escape the 
quotes:
\copy tablename from 'filename.csv' WITH NULL as E'\'\'' CSV HEADER
This uses the new escape string syntax in 8.2. With earlier versions I believe 
the following would work:
\copy tablename from 'filename.csv' WITH NULL as '\'\'' CSV HEADER

-- 
Adrian Klaver
[EMAIL PROTECTED]



Re: [GENERAL] \copy ... with null as '' csv doesn't get nulls

2007-11-27 Thread Lew

Ivan Sergio Borgonovo wrote:

I'd expect this:

\copy tablename from 'filename.csv' WITH NULL as '""' CSV HEADER

would import "" as NULL.

The input file is UTF-8 (not Unicode/UTF-16).
I checked the hexdump and the wannabe NULLs are actually
2c 22 22 2c - ,"",

all fields are varchar that admit NULL

but still I get empty strings ('') in spite of NULL.

Am I missing something or it is a well known feature?


I went to the docs for COPY and they say,

The default is \N (backslash-N) in text mode, and an empty value with no quotes 
in CSV mode.


That "with no quotes" phrase caught my attention.

Try eliminating the double quotes in the CSV file.  Wannabe NULL would then be
  ,, (consecutive commas)
in the CSV.  From the docs, you don't even need the NULL AS clause in your 
COPY statement.
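
A self-contained sketch of that preprocessing step (the sample rows are
invented; note the expression only catches quoted-empty fields that follow a
comma, so a leading "" field would need a second expression):

```shell
# rows where the quoted-empty fields should become NULLs
printf 'a,"",c\nd,"",f\n' > in.csv
# strip the quoted-empty fields so they become consecutive commas
sed -e 's/,""/,/g' in.csv > out.csv
cat out.csv
```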


--
Lew



Re: [GENERAL] \copy ... with null as '' csv doesn't get nulls

2007-11-27 Thread Ivan Sergio Borgonovo
On Sun, 25 Nov 2007 13:22:48 -0500
Lew [EMAIL PROTECTED] wrote:

 I went to the docs for COPY and they say,
  The default is \N (backslash-N) in text mode, and an empty value
  with no quotes in CSV mode.

 That "with no quotes" phrase caught my attention.

 Try eliminating the double quotes in the CSV file.  Wannabe NULL
 would then be ,, (consecutive commas)
 in the CSV.  From the docs, you don't even need the NULL AS
 clause in your COPY statement.

Exactly what I did because fortunately there weren't too many chances
of weird stuff in 2000 records (sed -e 's/,/,/g').

Anyway with NULL AS '' and without it I can't still import NULL
without pre-processing.

I thought it might be due to incomplete UTF-8 support, or that I did
something wrong, or that it is actually a feature.

thx

-- 
Ivan Sergio Borgonovo
http://www.webthatworks.it




Re: [GENERAL] Copy the database..

2007-11-06 Thread Sascha Bohnenkamp
if you dump/undump using a pipe there is no temp file ...

Abandoned schrieb:
 On Nov 2, 3:49 pm, Sascha Bohnenkamp [EMAIL PROTECTED] wrote:
 maybe pg_dump | pg_undump could work?

 I mean pg_dump with the appropriate parameters and undump directly to
 the other database?

  This may be one of the fastest ways to do it, I think.

 Abandoned schrieb:

 Hi..
 I want to copy my database..
 I have a database which is name db01 and i want to copy it as name
 db01copy ?
 How can i do this ?
 This database is very big 200GB so i want to the fastest way. Also if
 you suggest pg_dump how can i remove the dump data after copying ?
 Note: I have a root account with ssh if it is necessary i use ssh.
 I'm sorry for my bad english.
 Kind regards...
 
 Thank you..
 Are there any way to copy database without dump or any temp files ?
 (If there is a temp my harddisk not enough for this operation :(  )
 



Re: [GENERAL] Copy the database..

2007-11-06 Thread Sascha Bohnenkamp
maybe pg_dump | pg_undump could work?

I mean pg_dump with the appropriate parameters and undump directly to
the other database?

This may be one of the fastest ways to do it, I think.
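
There is no pg_undump program as such; the usual way to spell this pipe is
with psql on the receiving end of a plain-format dump (database names here
are from the post, the commands are a sketch):

```shell
createdb db01copy
pg_dump db01 | psql db01copy
```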


Abandoned schrieb:
 Hi..
 I want to copy my database..
 I have a database which is name db01 and i want to copy it as name
 db01copy ?
 How can i do this ?
 This database is very big 200GB so i want to the fastest way. Also if
 you suggest pg_dump how can i remove the dump data after copying ?
 Note: I have a root account with ssh if it is necessary i use ssh.
 I'm sorry for my bad english.
 Kind regards...
 



Re: [GENERAL] Copy the database..

2007-11-06 Thread Abandoned
On Nov 2, 3:49 pm, Sascha Bohnenkamp [EMAIL PROTECTED] wrote:
 maybe pg_dump | pg_undump could work?

 I mean pg_dump with the appropriate parameters and undump directly to
 the other database?

 This may one of the fastest ways to do it I think.

 Abandoned schrieb:

  Hi..
  I want to copy my database..
  I have a database which is name db01 and i want to copy it as name
  db01copy ?
  How can i do this ?
  This database is very big 200GB so i want to the fastest way. Also if
  you suggest pg_dump how can i remove the dump data after copying ?
  Note: I have a root account with ssh if it is necessary i use ssh.
  I'm sorry for my bad english.
  Kind regards...

Thank you..
Is there any way to copy a database without a dump or any temp files?
(If there is a temp file, my hard disk is not enough for this operation :(  )




Re: [GENERAL] Copy the database..

2007-11-06 Thread Abandoned
On Nov 2, 5:30 pm, Sascha Bohnenkamp [EMAIL PROTECTED] wrote:
 if you dump/undump using a pipe there is no temp file ...

 Abandoned schrieb:



  On Nov 2, 3:49 pm, Sascha Bohnenkamp [EMAIL PROTECTED] wrote:
  maybe pg_dump | pg_undump could work?

  I mean pg_dump with the appropriate parameters and undump directly to
  the other database?

  This may one of the fastest ways to do it I think.

  Abandoned schrieb:

  Hi..
  I want to copy my database..
  I have a database which is name db01 and i want to copy it as name
  db01copy ?
  How can i do this ?
  This database is very big 200GB so i want to the fastest way. Also if
  you suggest pg_dump how can i remove the dump data after copying ?
  Note: I have a root account with ssh if it is necessary i use ssh.
  I'm sorry for my bad english.
  Kind regards...

  Thank you..
  Are there any way to copy database without dump or any temp files ?
  (If there is a temp my harddisk not enough for this operation :(  )

I tried pg_dump but it is very slow. Is there any faster way to
copy a database?




Re: [GENERAL] Copy the database..

2007-11-06 Thread Martijn van Oosterhout
On Sat, Nov 03, 2007 at 01:42:09PM -, Abandoned wrote:
 I tryed pg_dump but it is very slowly. Are there any faster way to
 copy database?

Have you tried CREATE DATABASE ... TEMPLATE ? (See manual for syntax)

Have a nice day,
-- 
Martijn van Oosterhout   [EMAIL PROTECTED]   http://svana.org/kleptog/
 Those who make peaceful revolution impossible will make violent revolution 
 inevitable.
  -- John F Kennedy




Re: [GENERAL] Copy the database..

2007-11-06 Thread Scott Marlowe
On 11/3/07, Abandoned [EMAIL PROTECTED] wrote:

 I tryed pg_dump but it is very slowly. Are there any faster way to
 copy database?

Assuming it's all happening on the same db server, yes:

psql template1
create database newdb template olddb;



Re: [GENERAL] Copy the database..

2007-11-05 Thread Rainer Bauer
Tom Lane wrote:

Rainer Bauer [EMAIL PROTECTED] writes:
 Wouldn't it be possible to copy the database folder and somehow instruct the
 postmaster to include the copied data after a restart?

See CREATE DATABASE's TEMPLATE option.  It's a bit crude but I think
it'll help.

Thanks, Tom. Works like a charm.

Rainer



Re: [GENERAL] COPY ... FROM and index usage

2007-11-05 Thread Lincoln Yeoh

Hi,

Anyone have comparisons/benchmarks to give some 
idea of the potential performance gains?


Say compared to doing the stuff here:
http://www.postgresql.org/docs/8.2/static/populate.html

Regards,
Link.

At 09:35 AM 11/5/2007, Toru SHIMOGAKI wrote:

Dimitri, thank you for your quoting. I'm a pg_bulkload author.

pg_bulkload is optimized especially for appending data to table with indexes.
If you use it, you don't need to drop index 
before loading data. But you have to consider 
conditions carefully as Dimitri said below. See also pg_bulkload README:


http://pgfoundry.org/docman/view.php/1000261/473/README.pg_bulkload-2.2.0.txt

Best regards,

Dimitri Fontaine wrote:

Hi,
Le Sunday 04 November 2007 11:22:19 Reg Me Please, vous avez écrit :

That is, should I drop all indexes during a COPY ... FROM in order to
gain the maximum speed to load data?
When looking for a way to speed up data loading, you may want to
consider pg_bulkload, a project which optimizes index creation while
loading data and bypasses constraints: it's useful when you know you
can trust the input.

As I've never used it myself, I can only provides following links:
  http://pgfoundry.org/projects/pgbulkload
  http://pgfoundry.org/docman/view.php/1000261/456/20060709_pg_bulkload.pdf
Regards,


--
Toru SHIMOGAKI[EMAIL PROTECTED]
NTT Open Source Software Center








Re: [GENERAL] Copy the database..

2007-11-04 Thread Julio Cesar Sánchez González

El dom, 04-11-2007 a las 02:16 +0100, Rainer Bauer escribió:
 Abandoned wrote:
 
 I tryed pg_dump but it is very slowly. Are there any faster way to
 copy database?
 
 Actually, I was looking for something along the same line.
 
 I often want to test some functionality in my program based on the same
 dataset. However, dump/restore takes too long to be of any use.
 
 Wouldn't it be possible to copy the database folder and somehow instruct the
 postmaster to include the copied data after a restart?
 
 Rainer
 

Hi for all,

Try PITR
(http://www.postgresql.org/docs/8.1/static/backup-online.html); it should
help you...


-- 
Regards,

Julio Cesar Sánchez González.

--
Ahora me he convertido en la muerte, destructora de mundos.
Soy la Muerte que se lleva todo, la fuente de las cosas que vendran.

www.sistemasyconectividad.com.mxhttp://darkavngr.blogspot.com/





Re: [GENERAL] COPY ... FROM and index usage

2007-11-04 Thread Josh Tolley
On 11/4/07, Reg Me Please [EMAIL PROTECTED] wrote:
 Hi all.

 I'd like to know whether the indexes on a table are updated or not during
 a COPY ... FROM request.

 That is, should I drop all indexes during a COPY ... FROM in order to gain
 the maximum speed to load data?

 Thanks.

Although questions of which is faster often depend very heavily on
the data involved, the database schema, the hardware, etc., typically
people find it best to drop all indexes during a large import and
recreate them afterward.
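
In psql terms that typically looks like the following (object and file names
invented); an ANALYZE at the end is worthwhile so the planner sees the new
data:

```sql
DROP INDEX mytable_col_idx;
COPY mytable FROM '/tmp/mytable.dat';
CREATE INDEX mytable_col_idx ON mytable (col);
ANALYZE mytable;
```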

- Josh/eggyknap



Re: [GENERAL] COPY ... FROM and index usage

2007-11-04 Thread Reg Me Please
Il Sunday 04 November 2007 14:59:10 Josh Tolley ha scritto:
 On 11/4/07, Reg Me Please [EMAIL PROTECTED] wrote:
  Hi all.
 
  I'd like to know whether the indexes on a table are updated or not during
  a COPY ... FROM request.
 
  That is, should I drop all indexes during a COPY ... FROM in order to
  gain the maximum speed to load data?
 
  Thanks.

 Although questions of which is faster often depend very heavily on
 the data involved, the database schema, the hardware, etc., typically
 people find it best to drop all indexes during a large import and
 recreate them afterward.

 - Josh/eggyknap

This sounds very reasonable to me.
But the first question remains unanswered:

Are the indexes updated during the COPY ... FROM ?

Thanks again.

-- 
Reg me Please



Re: [GENERAL] COPY ... FROM and index usage

2007-11-04 Thread Erik Jones

On Nov 4, 2007, at 9:15 AM, Reg Me Please wrote:


On Sunday 04 November 2007 14:59:10, Josh Tolley wrote:

On 11/4/07, Reg Me Please [EMAIL PROTECTED] wrote:

Hi all.

I'd like to know whether the indexes on a table are updated or
not during a COPY ... FROM request.

That is, should I drop all indexes during a COPY ... FROM in
order to gain the maximum speed to load data?

Thanks.


Although questions of which is faster often depend very heavily on
the data involved, the database schema, the hardware, etc., typically
people find it best to drop all indexes during a large import and
recreate them afterward.

- Josh/eggyknap


This sounds very reasonable to me.
But the first question remains unanswered:

Are the indexes updated during the COPY ... FROM ?


Of course.  Why would you think that data could be inserted into a
table by any means without updating the table's indexes?  That would
make the indexes worthless.


Erik Jones

Software Developer | Emma®
[EMAIL PROTECTED]
800.595.4401 or 615.292.5888
615.292.0777 (fax)

Emma helps organizations everywhere communicate & market in style.
Visit us online at http://www.myemma.com





Re: [GENERAL] COPY ... FROM and index usage

2007-11-04 Thread Reg Me Please
On Sunday 04 November 2007 16:21:41, Erik Jones wrote:
 On Nov 4, 2007, at 9:15 AM, Reg Me Please wrote:
  On Sunday 04 November 2007 14:59:10, Josh Tolley wrote:
  On 11/4/07, Reg Me Please [EMAIL PROTECTED] wrote:
  Hi all.
 
  I'd like to know whether the indexes on a table are updated or
  not during
  a COPY ... FROM request.
 
  That is, should I drop all indexes during a COPY ... FROM in
  order to
  gain the maximum speed to load data?
 
  Thanks.
 
  Although questions of which is faster often depend very heavily on
  the data involved, the database schema, the hardware, etc., typically
  people find it best to drop all indexes during a large import and
  recreate them afterward.
 
  - Josh/eggyknap
 
  This sounds very reasonable to me.
  But the first question remains unanswered:
 
  Are the indexes updated during the COPY ... FROM ?

 Of course.  Why would you think that data could be inserted into a table
 by any means without updating the table's indexes?  That would
 make the indexes worthless.

 Erik Jones

 Software Developer | Emma®
 [EMAIL PROTECTED]
 800.595.4401 or 615.292.5888
 615.292.0777 (fax)

 Emma helps organizations everywhere communicate & market in style.
 Visit us online at http://www.myemma.com

I do understand your remarks and would agree.
But I was thinking of the COPY ... FROM request as something other than
a normal INSERT INTO; if the indexes were not maintained during the load,
I could have run a REINDEX TABLE afterwards.

-- 
Reg me Please



Re: [GENERAL] COPY ... FROM and index usage

2007-11-04 Thread Tom Lane
Josh Tolley [EMAIL PROTECTED] writes:
 Although questions of which is faster often depend very heavily on
 the data involved, the database schema, the hardware, etc., typically
 people find it best to drop all indexes during a large import and
 recreate them afterward.

See also the extensive discussion of this topic at

http://www.postgresql.org/docs/8.2/static/populate.html

regards, tom lane



Re: [GENERAL] COPY ... FROM and index usage

2007-11-04 Thread Dimitri Fontaine
Hi,

On Sunday 04 November 2007 11:22:19, Reg Me Please wrote:
 That is, should I drop all indexes during a COPY ... FROM in order to
 gain the maximum speed to load data?

When looking for a way to speed up data loading, you may want to consider 
pg_bulkload, a project which optimizes index creation while loading data and 
bypasses constraints: it's useful when you know you can trust the input.
As I've never used it myself, I can only provide the following links:

  http://pgfoundry.org/projects/pgbulkload
  http://pgfoundry.org/docman/view.php/1000261/456/20060709_pg_bulkload.pdf

Regards,
-- 
dim



Re: [GENERAL] COPY ... FROM and index usage

2007-11-04 Thread Toru SHIMOGAKI

Dimitri, thank you for the mention. I'm one of the pg_bulkload authors.

pg_bulkload is optimized especially for appending data to a table with indexes.
If you use it, you don't need to drop indexes before loading data. But you have to 
consider the conditions carefully, as Dimitri said below. See also the pg_bulkload README:


http://pgfoundry.org/docman/view.php/1000261/473/README.pg_bulkload-2.2.0.txt

Best regards,

Dimitri Fontaine wrote:

Hi,

On Sunday 04 November 2007 11:22:19, Reg Me Please wrote:

That is, should I drop all indexes during a COPY ... FROM in order to
gain the maximum speed to load data?


When looking for a way to speed up data loading, you may want to consider 
pg_bulkload, a project which optimizes index creation while loading data and 
bypasses constraints: it's useful when you know you can trust the input.

As I've never used it myself, I can only provide the following links:

  http://pgfoundry.org/projects/pgbulkload
  http://pgfoundry.org/docman/view.php/1000261/456/20060709_pg_bulkload.pdf

Regards,


--
Toru SHIMOGAKI[EMAIL PROTECTED]
NTT Open Source Software Center




Re: [GENERAL] Copy the database..

2007-11-03 Thread Rainer Bauer
Abandoned wrote:

I tried pg_dump but it is very slow. Is there any faster way to
copy a database?

Actually, I was looking for something along the same line.

I often want to test some functionality in my program based on the same
dataset. However, dump/restore takes too long to be of any use.

Wouldn't it be possible to copy the database folder and somehow instruct the
postmaster to include the copied data after a restart?

Rainer



Re: [GENERAL] Copy the database..

2007-11-03 Thread Tom Lane
Rainer Bauer [EMAIL PROTECTED] writes:
 Wouldn't it be possible to copy the database folder and somehow instruct the
 postmaster to include the copied data after a restart?

See CREATE DATABASE's TEMPLATE option.  It's a bit crude but I think
it'll help.
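
For example (database names here are hypothetical; CREATE DATABASE will
refuse to run while anyone else is connected to the template database):

    CREATE DATABASE test_copy TEMPLATE my_db;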

regards, tom lane



Re: [GENERAL] COPY for .CSV files problem

2007-10-01 Thread Jorge Godoy
On Monday 01 October 2007 05:20:52 pere roca wrote:
   Hi everybody,
   I want to load a .CSV file using the COPY command and plpgsql. It contains
 lat, lon and some data. In the CSV data there is no field (such as
 user_name or current_time) that would allow distinguishing future queries for
 different users (e.g. select x,y from table where user_name=z; after
 entering lat,lon I would like to update the_geom in another table using
 the lat,lon of ONLY this user).

   I could add this kind of data (user_name) using PHP, but then the next user
 could not load his data by COPY (it only lets you fill the destination table
 if ALL the fields are the same as in the CSV or text file).
   So, I have a problem. I thought that maybe I can reference/select the data
 of a SINGLE USER using parameters like default_time, without needing to insert
 this data as a field in the table (otherwise, problems with COPY). Maybe
 I can use the catalog?

COPY won't allow you to process your data.  You either write something to 
insert record by record or you change your data to include the needed 
information before using COPY.

I would change the input file to include the user name.


-- 
Jorge Godoy  [EMAIL PROTECTED]




Re: [GENERAL] \copy only select rows

2007-08-30 Thread A. Kretschmer
On Thu, 30.08.2007 at 14:59:06 +0800, Ow Mun Heng wrote the following:
 Is there a way to do a dump of a database using a select statement?

A complete database or just a simple table?


 
 eg: \copy trd to 'file' select * from table limit 10

Since 8.2 you can use COPY (select * from table) TO 'file'.

Another solution: use a regular UNIX shell and psql, a la 

echo "select * from table limit 10;" | psql database > file
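
A sketch of the 8.2 query form, with a hypothetical file path (the
server-side COPY writes the file with the server process's permissions, and
the query form can also emit CSV directly):

    COPY (SELECT * FROM trd LIMIT 10) TO '/tmp/trd_sample.csv' WITH CSV;

From 8.2 on, psql's \copy should accept the same query form and write the
file on the client side instead.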


Andreas
-- 
Andreas Kretschmer
Kontakt:  Heynitz: 035242/47150,   D1: 0160/7141639 (mehr: - Header)
GnuPG-ID:   0x3FFF606C, privat 0x7F4584DA   http://wwwkeys.de.pgp.net



Re: [GENERAL] \copy only select rows

2007-08-30 Thread Ow Mun Heng
On Thu, 2007-08-30 at 09:14 +0200, A. Kretschmer wrote:
 On Thu, 30.08.2007 at 14:59:06 +0800, Ow Mun Heng wrote the following:
  Is there a way to do a dump of a database using a select statement?
 
 A complete database or just a simple table?

a simple table.. a couple million records, want some of them for testing..
say ~100 rows

(right now, using: insert into foo select * from table limit 10)

 
  
  eg: \copy trd to 'file' select * from table limit 10
 
 Since 8.2 you can use COPY (select * from table) TO 'file'.

superuser privilege only, right?

 Another solution: use a regular UNIX shell and psql, a la 
 
 echo "select * from table limit 10;" | psql database > file

I want them to be in comma-delimited format.. will the output be
importable again using \copy into pg?



Re: [GENERAL] copy command - date

2007-08-17 Thread Raj A
Thanks again guys =)
I've managed to use a temp table to load the data and create the new table(s).
Now, how do I convert a text field in 'YY/MM/DD' form to a 'DD/MM/YY' date field?
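
One way to do that conversion, as a sketch with hypothetical column and
table names, is with to_date()/to_char():

    -- parse the YY/MM/DD text, then render it as DD/MM/YY
    SELECT to_char(to_date(raw_date, 'YY/MM/DD'), 'DD/MM/YY') FROM staging;

Storing the value in a real date (or timestamp) column and formatting it
only on output is usually the safer choice.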

On 13/08/07, Tom Lane [EMAIL PROTECTED] wrote:
 Paul Lambert [EMAIL PROTECTED] writes:
  novice wrote:
  db5=  \copy maintenance FROM test.txt

  I don't think copy allows you to leave columns out of your input file -
  even if they belong to a sequence.

 Well, it does, but you have to specify which ones are being provided,
 eg \copy tab(col1,col4,col7, ...

 But the long and the short of it is that COPY doesn't see any column
 delimiters at all in this file.  We're guessing as to what the OP
 intends the columns to be, but whatever he wants, he needs something
 other than an uncertain number of spaces to separate them ...
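
With the maintenance table shown elsewhere in the thread, that might look
something like this (assuming the input supplies every column except the
serial maintenance_id, and that a usable delimiter exists):

    \copy maintenance (meter_id, status, inspector, inspection_date) FROM test.txt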

 regards, tom lane





Re: [GENERAL] Copy command and duplicate items (Support Replace?)

2007-08-15 Thread Decibel!
On Tue, Aug 14, 2007 at 10:50:33AM +0800, Ow Mun Heng wrote:
 Hi,
 
 Writing a script to pull data from SQL server into a flat-file (or just
 piped in directly to PG using Perl DBI)
 
 Just wondering if the copy command is able to do a replace if there are
 existing data in the Db already. (This is usually in the case of updates
 to specific rows and there be a timestamp indicating it has been changed
 etc.)
 
 In MySQL, the mysqlimport util has the --replace function which will
 replace the data if there is any event of a duplicate.
 
 Does PG support this?

No; you'll need to COPY into a temporary or staging table and then
proceed from there. Alternatively, you could use
http://pgfoundry.org/projects/pgloader/.
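
The staging-table approach might be sketched like this (table and column
names are hypothetical; assumes id is the key to match on):

    CREATE TEMP TABLE staging (LIKE target);
    COPY staging FROM '/tmp/new_data.csv';
    -- replace matching rows, then add the rest
    DELETE FROM target USING staging WHERE target.id = staging.id;
    INSERT INTO target SELECT * FROM staging;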
-- 
Decibel!, aka Jim Nasby[EMAIL PROTECTED]
EnterpriseDB  http://enterprisedb.com  512.569.9461 (cell)




Re: [GENERAL] copy command - date

2007-08-13 Thread Scott Marlowe
On 8/12/07, novice [EMAIL PROTECTED] wrote:
 I resolved it by doing this - is there a more efficient method?
 And yes, the text file I am working with doesn't have any TABs

 5162   OK   SM 06/12/04 06:12

 substr(data, 30, 2)||'-'||substr(data, 27,
 2)||'-20'||substr(data, 24, 2)||substr(data, 32, 6) as
 inspection_date
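
An alternative sketch using to_timestamp() with a format mask, assuming the
date text always starts at column 24 as in the substr() version above
(raw_data is a hypothetical staging table):

    SELECT to_timestamp(substr(data, 24, 14), 'YY/MM/DD HH24:MI')
           AS inspection_date
    FROM raw_data;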

I didn't have to do anything special, just copied it in:

create table g (ts timestamp);
set datestyle=ISO, MDY;
copy g (ts) from stdin;
06/12/04 12:00:00
\.
select * from g;
         ts
---------------------
 2004-06-12 12:00:00
delete from g;
set datestyle=ISO, DMY;
copy g (ts) from stdin;
06/12/04 12:00:00
\.
select * from g;
         ts
---------------------
 2004-12-06 12:00:00



Re: [GENERAL] copy command - date

2007-08-12 Thread Tom Lane
novice [EMAIL PROTECTED] writes:
 I'm having trouble loading the date field.  Should I convert it first
 or should I be using a text processor before loading the data in?

 3665   OK   SM 07/07/13 06:09
 5162   OK   SM 07/02/12 06:10
 3665   OK   SM 07/06/19 06:10

What sort of trouble, exactly?

I'm guessing that you might need to set DateStyle to tell Postgres what
the date field ordering is, but without seeing any error messages that's
strictly a guess.

regards, tom lane



Re: [GENERAL] copy command - date

2007-08-12 Thread novice
I'm using pg version 8.2.4.  What is the best method to load this data?
I have just a little over 55,000 entries.

db5=  \copy maintenance FROM test.txt
ERROR:  invalid input syntax for integer: "3665   OK   SM 07/07/13 06:09"
CONTEXT:  COPY maintenance, line 1, column maintenance_id: "3665   OK   SM 07/07/13 06:09"

                            Table "pm.maintenance"
     Column      |           Type           |                      Modifiers
-----------------+--------------------------+------------------------------------------------------
 maintenance_id  | integer                  | not null default nextval('maintenance_maintenance_id_seq'::regclass)
 meter_id        | integer                  |
 status          | character(3)             |
 inspector       | character(2)             |
 inspection_date | timestamp with time zone |


On 13/08/07, Tom Lane [EMAIL PROTECTED] wrote:
 novice [EMAIL PROTECTED] writes:
  I'm having trouble loading the date field.  Should I convert it first
  or should I be using a text processor before loading the data in?

  3665   OK   SM 07/07/13 06:09
  5162   OK   SM 07/02/12 06:10
  3665   OK   SM 07/06/19 06:10

 What sort of trouble, exactly?

 I'm guessing that you might need to set DateStyle to tell Postgres what
 the date field ordering is, but without seeing any error messages that's
 strictly a guess.

 regards, tom lane


