Re: [GENERAL] COPY vs \COPY FROM PROGRAM $$ quoting difference?

2017-09-29 Thread Alexander Stoddard
On Fri, Sep 29, 2017 at 11:54 AM, David G. Johnston < david.g.johns...@gmail.com> wrote: > On Fri, Sep 29, 2017 at 9:27 AM, Alexander Stoddard < > alexander.stodd...@gmail.com> wrote: > >> I found what seems to be an odd difference between COPY and \copy parsing. >> > ​[...] > ​ > > >> COPY

Re: [GENERAL] COPY vs \COPY FROM PROGRAM $$ quoting difference?

2017-09-29 Thread David G. Johnston
On Fri, Sep 29, 2017 at 9:27 AM, Alexander Stoddard < alexander.stodd...@gmail.com> wrote: > I found what seems to be an odd difference between COPY and \copy parsing. > ​[...] ​ > COPY dest_table FROM PROGRAM $$ sed 's/x/y/' | etc... $$ > > To my surprise this worked with COPY but not \COPY

Re: [GENERAL] COPY: row is too big

2017-05-27 Thread doganmeh
Yes, csvkit is what I decided to go with. Thank you all! -- View this message in context: http://www.postgresql-archive.org/COPY-row-is-too-big-tp5936997p5963559.html Sent from the PostgreSQL - general mailing list archive at Nabble.com. -- Sent via pgsql-general mailing list

Re: [GENERAL] COPY: row is too big

2017-05-27 Thread doganmeh
Yes, the delimiter was indeed ",". I fixed my original post. Seems I carelessly copy/pasted from excel.

Re: [GENERAL] COPY: row is too big

2017-05-26 Thread Tom Lane
doganmeh writes: > I tried varchar(12) also, nothing changed. My questions is 1) I have > 672x12=8,064 characters in the first row (which are actually the headers), > why would it complain that it is 8760. No, you have 672*13, because each varchar value will require a length
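Tom's arithmetic (truncated above) can be checked directly: a short varchar value carries a 1-byte length header, so 672 columns of 12 characters cost 672 * 13 bytes, and a fixed per-tuple header brings the total to the 8760 bytes reported in the error. A quick sketch (the 24-byte tuple header is an assumption for illustration; alignment padding and the null bitmap can add more):

```python
# Back-of-the-envelope row-size check for 672 varchar(12) columns.
# Assumes a 1-byte varlena header per short value and a 24-byte tuple
# header; real on-disk sizes also depend on alignment and null bitmap.
N_COLS = 672
CHARS_PER_VALUE = 12
VARLENA_HEADER = 1   # short-format length header for small values
TUPLE_HEADER = 24    # fixed per-row overhead (illustrative)

row_size = N_COLS * (CHARS_PER_VALUE + VARLENA_HEADER) + TUPLE_HEADER
print(row_size)      # 8760, the size reported in the error

PAGE_LIMIT = 8160    # "maximum size" from the error message
print(row_size > PAGE_LIMIT)  # True: the row does not fit on a page
```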

Re: [GENERAL] COPY: row is too big

2017-05-26 Thread Andreas Kretschmer
Am 26.05.2017 um 14:07 schrieb doganmeh: I tried varchar(12) also, nothing changed. My questions is 1) I have 672x12=8,064 characters in the first row (which are actually the headers), why would it complain that it is 8760. I am assuming here type `text` occupies 1 byte for a character.

Re: [GENERAL] COPY: row is too big

2017-05-26 Thread Charles Clavadetscher
Hello > -Original Message- > From: pgsql-general-ow...@postgresql.org > [mailto:pgsql-general-ow...@postgresql.org] On Behalf Of doganmeh > Sent: Friday, 26 May 2017 14:08 > To: pgsql-general@postgresql.org > Subject: Re: [GENERAL] COPY: row is too big >

Re: [GENERAL] COPY: row is too big

2017-05-26 Thread Adrian Klaver
On 05/26/2017 05:07 AM, doganmeh wrote: I am piggy-backing in this thread because I have the same issue as well. I need to import a csv file that is 672 columns long and each column consists of 12 alpha-numeric characters. Such as: SA03ARE1015DSA03ARE1S15NSB03ARE1015D ... 356412

Re: [GENERAL] COPY: row is too big

2017-05-26 Thread doganmeh
BTW, we have pg9.5 running on Ubuntu.

Re: [GENERAL] COPY: row is too big

2017-05-26 Thread doganmeh
I am piggy-backing in this thread because I have the same issue as well. I need to import a csv file that is 672 columns long and each column consists of 12 alpha-numeric characters. Such as: SA03ARE1015DSA03ARE1S15NSB03ARE1015D ... 356412 275812 43106 ... I am aware
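If the wide file does have to be cut into narrower pieces (the approach the poster hoped to avoid), the slicing itself is mechanical with Python's stdlib csv module. A sketch; the chunk count, key column, and data are illustrative, not anything from the thread:

```python
import csv
import io

def split_columns(src, n_chunks, key_col=0):
    """Split a wide CSV into n_chunks narrower CSVs, repeating the key
    column in each chunk so the pieces can be re-joined after loading.
    `src` is any iterable of CSV lines; returns a list of CSV strings."""
    rows = list(csv.reader(src))
    data_cols = [i for i in range(len(rows[0])) if i != key_col]
    size = -(-len(data_cols) // n_chunks)  # ceiling division
    outputs = []
    for start in range(0, len(data_cols), size):
        cols = [key_col] + data_cols[start:start + size]
        buf = io.StringIO()
        writer = csv.writer(buf)
        for row in rows:
            writer.writerow([row[i] for i in cols])
        outputs.append(buf.getvalue())
    return outputs

# Toy example: one key column plus four data columns, split in two.
wide = ["id,a,b,c,d", "1,w,x,y,z"]
parts = split_columns(wide, 2)
```

Each resulting file can then be loaded with a separate COPY into a narrower table.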

Re: [GENERAL] COPY ... FROM stdin WITH FORMAT csv

2017-03-22 Thread Alexander Farber
Hi David, On Tue, Mar 21, 2017 at 8:21 PM, David G. Johnston < david.g.johns...@gmail.com> wrote: > > On Tuesday, March 21, 2017, Alexander Farber wrote: >> >> words=> COPY words_reviews (uid, author, nice, review, updated) FROM stdin FORMAT csv; > > > What did you

Re: [GENERAL] COPY ... FROM stdin WITH FORMAT csv

2017-03-21 Thread David G. Johnston
On Tue, Mar 21, 2017 at 1:45 PM, Adrian Klaver wrote: > On 03/21/2017 12:11 PM, Alexander Farber wrote: > >> Thank you - this has worked: >> >> COPY words_reviews (uid, author, nice, review, updated) FROM stdin WITH >> (FORMAT csv); >> 1,2,1,'1 is nice by

Re: [GENERAL] COPY ... FROM stdin WITH FORMAT csv

2017-03-21 Thread Adrian Klaver
On 03/21/2017 12:11 PM, Alexander Farber wrote: Thank you - this has worked: COPY words_reviews (uid, author, nice, review, updated) FROM stdin WITH (FORMAT csv); 1,2,1,'1 is nice by 2','2017-03-01' 1,3,1,'1 is nice by 3','2017-03-02' 1,4,1,'1 is nice by 4','2017-03-03' 2,1,1,'2 is nice by

Re: [GENERAL] COPY ... FROM stdin WITH FORMAT csv

2017-03-21 Thread David G. Johnston
On Tue, Mar 21, 2017 at 12:45 PM, Paul Jungwirth < p...@illuminatedcomputing.com> wrote: > On 03/21/2017 12:21 PM, David G. Johnston wrote: > >> > words=> COPY words_reviews (uid, author, nice, review, updated) FROM >> > stdin FORMAT csv; >> >> What did you read that led you to think the

Re: [GENERAL] COPY ... FROM stdin WITH FORMAT csv

2017-03-21 Thread Paul Jungwirth
On 03/21/2017 12:21 PM, David G. Johnston wrote: > words=> COPY words_reviews (uid, author, nice, review, updated) FROM > stdin FORMAT csv; What did you read that led you to think the above should work? I don't know about COPY FROM, but COPY TO works without parens (or FORMAT), like

Re: [GENERAL] COPY ... FROM stdin WITH FORMAT csv

2017-03-21 Thread David G. Johnston
On Tuesday, March 21, 2017, Alexander Farber wrote: > > words=> COPY words_reviews (uid, author, nice, review, updated) FROM stdin > FORMAT csv; > What did you read that led you to think the above should work? David J.

Re: [GENERAL] COPY ... FROM stdin WITH FORMAT csv

2017-03-21 Thread Alexander Farber
Thank you - this has worked: COPY words_reviews (uid, author, nice, review, updated) FROM stdin WITH (FORMAT csv); 1,2,1,'1 is nice by 2','2017-03-01' 1,3,1,'1 is nice by 3','2017-03-02' 1,4,1,'1 is nice by 4','2017-03-03' 2,1,1,'2 is nice by 1','2017-03-01' 2,3,1,'2 is nice by 3','2017-03-02'
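A side note on the data itself: CSV quoting uses double quotes, so the single-quoted strings above are loaded with their apostrophes intact. Python's csv module is used here only to illustrate standard CSV semantics:

```python
import csv

# Single quotes are ordinary data characters in CSV.
line = "1,2,1,'1 is nice by 2','2017-03-01'"
row = next(csv.reader([line]))
print(row[3])      # "'1 is nice by 2'" -- apostrophes kept as data

# Standard CSV quoting uses double quotes, which are stripped on parse.
quoted = next(csv.reader(['1,2,1,"1 is nice by 2",2017-03-01']))
print(quoted[3])   # "1 is nice by 2"
```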

Re: [GENERAL] COPY ... FROM stdin WITH FORMAT csv

2017-03-21 Thread Francisco Olarte
Alexander: On Tue, Mar 21, 2017 at 6:31 PM, Alexander Farber wrote: > I keep rereading https://www.postgresql.org/docs/9.6/static/sql-copy.html > but just can't figure the proper syntax to put some records into the table: It's not that complex, let's see >

Re: [GENERAL] COPY ... FROM stdin WITH FORMAT csv

2017-03-21 Thread John R Pierce
On 3/21/2017 10:31 AM, Alexander Farber wrote: words=> COPY words_reviews (uid, author, nice, review, updated) FROM stdin WITH FORMAT 'csv'; ERROR: syntax error at or near "FORMAT" LINE 1: ...d, author, nice, review, updated) FROM stdin WITH FORMAT 'cs... it's just csv, not 'csv' ... And I

Re: [GENERAL] COPY ... FROM stdin WITH FORMAT csv

2017-03-21 Thread David G. Johnston
On Tue, Mar 21, 2017 at 10:31 AM, Alexander Farber < alexander.far...@gmail.com> wrote: > Good evening, > > I keep rereading https://www.postgresql.org/docs/9.6/static/sql-copy.html > but just can't figure the proper syntax to put some records into the table: > ​[...]​ > > words=> COPY

Re: [GENERAL] Copy database to another host without data from specific tables

2017-03-09 Thread Panagiotis Atmatzidis
Thanks for the replies, pg_dump --exclude-table will do for now. Panagiotis (atmosx) Atmatzidis email: a...@convalesco.org URL:http://www.convalesco.org GnuPG ID: 0x1A7BFEC5 gpg --keyserver pgp.mit.edu --recv-keys 1A7BFEC5 "Everyone thinks of changing the world, but no one thinks of
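For the archives: --exclude-table omits both the table definition and its rows, while --exclude-table-data (pg_dump 9.2 and later) keeps the schema and skips only the data, which matches the original request. A sketch of the invocation assembled in Python; the database and table names are placeholders:

```python
# Build a pg_dump command that dumps every table's schema but omits the
# rows of a few large tables. Names below are placeholders, not from
# the thread.
db = "database1"
big_tables = ["big_log", "big_metrics"]  # tables whose data is skipped

cmd = ["pg_dump", "--dbname", db]
for t in big_tables:
    cmd += ["--exclude-table-data", t]

print(" ".join(cmd))
# pg_dump --dbname database1 --exclude-table-data big_log \
#   --exclude-table-data big_metrics
```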

Re: [GENERAL] Copy database to another host without data from specific tables

2017-03-07 Thread Vick Khera
On Tue, Mar 7, 2017 at 2:02 AM, Panagiotis Atmatzidis wrote: > I want to make a clone of database1 which belongs to user1, to database2 > which belongs to user2. Database1 has 20+ tables. I want to avoid copying > the DATA sitting on 5 tables on database1 (many Gigs). > >

Re: [GENERAL] Copy database to another host without data from specific tables

2017-03-07 Thread Achilleas Mantzios
On 07/03/2017 09:02, Panagiotis Atmatzidis wrote: Hello, I have 2 RDS instances on AWS running PSQL 9.4.7. I want to make a clone of database1 which belongs to user1, to database2 which belongs to user2. Database1 has 20+ tables. I want to avoid copying the DATA sitting on 5 tables on

Re: [GENERAL] Copy database to another host without data from specific tables

2017-03-06 Thread Condor
On 07-03-2017 09:02, Panagiotis Atmatzidis wrote: Hello, I have 2 RDS instances on AWS running PSQL 9.4.7. I want to make a clone of database1 which belongs to user1, to database2 which belongs to user2. Database1 has 20+ tables. I want to avoid copying the DATA sitting on 5 tables on

Re: [GENERAL] Copy database to another host without data from specific tables

2017-03-06 Thread Condor
On 07-03-2017 09:02, Panagiotis Atmatzidis wrote: Hello, I have 2 RDS instances on AWS running PSQL 9.4.7. I want to make a clone of database1 which belongs to user1, to database2 which belongs to user2. Database1 has 20+ tables. I want to avoid copying the DATA sitting on 5 tables on

Re: [GENERAL] COPY to question

2017-01-17 Thread Steve Crawford
On Tue, Jan 17, 2017 at 10:23 AM, Rich Shepard wrote: > Running -9.6.1. I have a database created and owned by me, but cannot > copy > a table to my home directory. Postgres tells me it cannot write to that > directory. The only way to copy tables to files is by doing

Re: [GENERAL] COPY to question

2017-01-17 Thread David G. Johnston
On Tue, Jan 17, 2017 at 11:23 AM, Rich Shepard wrote: > Running -9.6.1. I have a database created and owned by me, but cannot > copy > a table to my home directory. Postgres tells me it cannot write to that > directory. The only way to copy tables to files is by doing

Re: [GENERAL] COPY to question

2017-01-17 Thread Steve Atkins
> On Jan 17, 2017, at 10:23 AM, Rich Shepard wrote: > > Running -9.6.1. I have a database created and owned by me, but cannot copy > a table to my home directory. Postgres tells me it cannot write to that > directory. The only way to copy tables to files is by doing

Re: [GENERAL] COPY to question

2017-01-17 Thread Pavel Stehule
2017-01-17 19:23 GMT+01:00 Rich Shepard : > Running -9.6.1. I have a database created and owned by me, but cannot > copy > a table to my home directory. Postgres tells me it cannot write to that > directory. The only way to copy tables to files is by doing so as the >

Re: [GENERAL] COPY to question [ANSWERED]

2017-01-17 Thread Rich Shepard
On Tue, 17 Jan 2017, Tom Lane wrote: Use psql's \copy instead. Thanks, Tom. Rich

Re: [GENERAL] COPY to question

2017-01-17 Thread Tom Lane
Rich Shepard writes: >Running -9.6.1. I have a database created and owned by me, but cannot copy > a table to my home directory. Postgres tells me it cannot write to that > directory. The only way to copy tables to files is by doing so as the > superuser (postgres).

Re: [GENERAL] COPY value TO STDOUT

2017-01-14 Thread Tom Lane
Denisa Cirstescu writes: > I want to COPY a value to STDOUT from PL/pgSQL language. You can't. Maybe RAISE NOTICE would serve the purpose? > I saw that the STDOUT is not accessible from PL/pgSQL, but it is from SQL. > This is why I am trying to create an

Re: [GENERAL] COPY value TO STDOUT

2017-01-14 Thread Pavel Stehule
Hi 2017-01-13 16:45 GMT+01:00 Denisa Cirstescu : > I am not sure if this is the correct mailing list or if this is how you > submit a question, but I am going to give it a try. > > > > I want to COPY a value to STDOUT from PL/pgSQL language. > > > > I saw that

Re: [GENERAL] COPY: row is too big

2017-01-05 Thread Rob Sargent
On 01/05/2017 11:46 AM, Adrian Klaver wrote: On 01/05/2017 08:31 AM, Rob Sargent wrote: On 01/05/2017 05:44 AM, vod vos wrote: I finally figured it out as follows: 1. modified the corresponding data type of the columns to the csv file 2. if null values existed, defined the data type to

Re: [GENERAL] COPY: row is too big

2017-01-05 Thread Adrian Klaver
On 01/05/2017 08:31 AM, Rob Sargent wrote: On 01/05/2017 05:44 AM, vod vos wrote: I finally figured it out as follows: 1. modified the corresponding data type of the columns to the csv file 2. if null values existed, defined the data type to varchar. The null values cause problem too. so

Re: [GENERAL] COPY: row is too big

2017-01-05 Thread Rob Sargent
On 01/05/2017 05:44 AM, vod vos wrote: I finally figured it out as follows: 1. modified the corresponding data type of the columns to the csv file 2. if null values existed, defined the data type to varchar. The null values cause problem too. so 1100 columns work well now. This problem

Re: [GENERAL] COPY: row is too big

2017-01-05 Thread Adrian Klaver
On 01/05/2017 04:44 AM, vod vos wrote: I finally figured it out as follows: 1. modified the corresponding data type of the columns to the csv file 2. if null values existed, defined the data type to varchar. The null values cause problem too. Did you change the NULLs to something else? As

Re: [GENERAL] COPY: row is too big

2017-01-05 Thread Pavel Stehule
2017-01-05 13:44 GMT+01:00 vod vos : > I finally figured it out as follows: > > 1. modified the corresponding data type of the columns to the csv file > > 2. if null values existed, defined the data type to varchar. The null > values cause problem too. > int, float, double can

Re: [GENERAL] COPY: row is too big

2017-01-05 Thread vod vos
I finally figured it out as follows: 1. modified the corresponding data type of the columns to the csv file 2. if null values existed, defined the data type to varchar. The null values cause problem too. so 1100 columns work well now. This problem wasted me three days. I have lots of

Re: [GENERAL] COPY: row is too big

2017-01-04 Thread Adrian Klaver
On 01/04/2017 08:32 AM, Steve Crawford wrote: ... Numeric is expensive type - try to use float instead, maybe double. If I am following the OP correctly the table itself has all the columns declared as varchar. The data in the CSV file is a mix of text, date and numeric,

Re: [GENERAL] COPY: row is too big

2017-01-04 Thread Steve Crawford
... > Numeric is expensive type - try to use float instead, maybe double. >> > > If I am following the OP correctly the table itself has all the columns > declared as varchar. The data in the CSV file is a mix of text, date and > numeric, presumably cast to text on entry into the table. > But a

Re: [GENERAL] COPY: row is too big

2017-01-04 Thread Adrian Klaver
On 01/04/2017 08:00 AM, rob stone wrote: Hello, On Wed, 2017-01-04 at 07:11 -0800, Adrian Klaver wrote: On 01/04/2017 06:54 AM, Pavel Stehule wrote: Hi 2017-01-04 14:00 GMT+01:00 vod vos >: __ Now I am confused about I can create 1100 columns

Re: [GENERAL] COPY: row is too big

2017-01-04 Thread Peter J. Holzer
On 2017-01-04 06:53:31 -0800, Adrian Klaver wrote: > On 01/04/2017 05:00 AM, vod vos wrote: > >Now I am confused about I can create 1100 columns in a table in > >postgresql, but I can't copy 1100 values into the table. And I really > > As pointed out previously: > >

Re: [GENERAL] COPY: row is too big

2017-01-04 Thread rob stone
Hello, On Wed, 2017-01-04 at 07:11 -0800, Adrian Klaver wrote: > On 01/04/2017 06:54 AM, Pavel Stehule wrote: > > Hi > > > > 2017-01-04 14:00 GMT+01:00 vod vos > >: > > > > __ > > Now I am confused about I can create 1100 columns in a table in >

Re: [GENERAL] COPY: row is too big

2017-01-04 Thread Pavel Stehule
2017-01-04 16:11 GMT+01:00 Adrian Klaver : > On 01/04/2017 06:54 AM, Pavel Stehule wrote: > >> Hi >> >> 2017-01-04 14:00 GMT+01:00 vod vos > >: >> >> __ >> Now I am confused about I can create 1100 columns in a table in

Re: [GENERAL] COPY: row is too big

2017-01-04 Thread vod vos
OK, maybe the final solution is to split it in half. On Wednesday, 04 January 2017 06:53:31 -0800 Adrian Klaver adrian.kla...@aklaver.com wrote On 01/04/2017 05:00 AM, vod vos wrote: Now I am confused about I can create 1100 columns in a table in postgresql, but I can't copy 1100

Re: [GENERAL] COPY: row is too big

2017-01-04 Thread Adrian Klaver
On 01/04/2017 06:54 AM, Pavel Stehule wrote: Hi 2017-01-04 14:00 GMT+01:00 vod vos >: __ Now I am confused about I can create 1100 columns in a table in postgresql, but I can't copy 1100 values into the table. And I really dont want to

Re: [GENERAL] COPY: row is too big

2017-01-04 Thread Pavel Stehule
Hi 2017-01-04 14:00 GMT+01:00 vod vos : > Now I am confused about I can create 1100 columns in a table in > postgresql, but I can't copy 1100 values into the table. And I really dont > want to split the csv file to pieces to avoid mistakes after this action. > The PostgreSQL

Re: [GENERAL] COPY: row is too big

2017-01-04 Thread Adrian Klaver
On 01/04/2017 05:00 AM, vod vos wrote: Now I am confused about I can create 1100 columns in a table in postgresql, but I can't copy 1100 values into the table. And I really As pointed out previously: https://www.postgresql.org/about/ Maximum Columns per Table 250 - 1600 depending on

Re: [GENERAL] COPY: row is too big

2017-01-04 Thread vod vos
Now I am confused about I can create 1100 columns in a table in postgresql, but I can't copy 1100 values into the table. And I really dont want to split the csv file to pieces to avoid mistakes after this action. I create a table with 1100 columns with data type of varchar, and hope the COPY

Re: [GENERAL] COPY: row is too big

2017-01-03 Thread John McKown
On Mon, Jan 2, 2017 at 2:57 PM, Rob Sargent wrote: > Perhaps this is your opportunity to correct someone else's mistake. You > need to show the table definition to convince us that it cannot be > improved. That it may be hard work really doesn't mean it's not the right >

Re: [GENERAL] COPY: row is too big

2017-01-02 Thread Rob Sargent
> On Jan 2, 2017, at 10:13 AM, Adrian Klaver wrote: > >> On 01/02/2017 09:03 AM, vod vos wrote: >> You know, the csv file was exported from other database of a machine, so >> I really dont want to break it for it is a hard work. Every csv file >> contains headers and

Re: [GENERAL] COPY: row is too big

2017-01-02 Thread Adrian Klaver
On 01/02/2017 09:03 AM, vod vos wrote: > You know, the csv file was exported from other database of a machine, so > I really dont want to break it for it is a hard work. Every csv file > contains headers and values. If I redesign the table, then I have to cut > all the csv files into pieces one by

Re: [GENERAL] COPY: row is too big

2017-01-02 Thread vod vos
You know, the csv file was exported from other database of a machine, so I really dont want to break it for it is a hard work. Every csv file contains headers and values. If I redesign the table, then I have to cut all the csv files into pieces one by one. On Monday, 02 January 2017 08:21:29

Re: [GENERAL] COPY: row is too big

2017-01-02 Thread Tom Lane
vod vos writes: > When I copy data from csv file, a very long values for many columns (about > 1100 columns). The errors appears: > ERROR: row is too big: size 11808, maximum size 8160 You need to rethink your table schema so you have fewer columns. Perhaps you can combine

Re: [GENERAL] COPY: row is too big

2017-01-02 Thread vod vos
Most of the data types are text or varchar, and I use: COPY rius FROM "/var/www/test/aa.csv" WITH DELIMITER ';' ; And some of the values in the csv file are null; do these null values matter? Thanks. On Monday, 02 January 2017 03:11:14 -0800 vod vos vod...@zoho.com wrote

Re: [GENERAL] COPY: row is too big

2017-01-02 Thread Adrian Klaver
On 01/02/2017 03:11 AM, vod vos wrote: Hi everyone, My postgresql is 9.61. When I copy data from csv file, a very long values for many columns (about 1100 columns). The errors appears: My guess is you are tripping this: https://www.postgresql.org/about/ Maximum Columns per Table 250 -

Re: [GENERAL] COPY: row is too big

2017-01-02 Thread John McKown
On Mon, Jan 2, 2017 at 5:11 AM, vod vos wrote: > Hi everyone, > > My postgresql is 9.61. > > When I copy data from csv file, a very long values for many columns (about > 1100 columns). The errors appears: > > > ERROR: row is too big: size 11808, maximum size 8160 CONTEXT: > >

Re: [GENERAL] COPY command & binary format

2016-05-10 Thread Pujol Mathieu
On 10/05/2016 at 12:56, Nicolas Paris wrote: Hello, What is the way to build a binary format (instead of a csv) ? Is there specification for this file ? http://www.postgresql.org/docs/9.5/static/sql-copy.html Could I create such format from java ? I guess this would be far faster, and
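The binary COPY layout is specified on the sql-copy page linked above: an 11-byte signature, two int32 header fields, then per tuple an int16 field count and, per field, an int32 byte length (-1 for NULL) followed by the value in the column type's binary send format, ended by an int16 -1 trailer. A minimal sketch in Python rather than Java; raw bytes are valid only for text-like columns, other types need their own binary representations:

```python
import struct

def copy_binary(rows):
    """Serialize rows (lists of bytes-or-None) into PostgreSQL's binary
    COPY format. Values must already be in each column type's binary
    send format; raw UTF-8 bytes work for text columns."""
    out = bytearray()
    out += b"PGCOPY\n\xff\r\n\x00"   # 11-byte signature
    out += struct.pack("!ii", 0, 0)  # flags, header-extension length
    for fields in rows:
        out += struct.pack("!h", len(fields))  # field count per tuple
        for value in fields:
            if value is None:
                out += struct.pack("!i", -1)   # NULL marker
            else:
                out += struct.pack("!i", len(value)) + value
    out += struct.pack("!h", -1)     # file trailer
    return bytes(out)

# One row with a text value and a NULL.
blob = copy_binary([[b"hi", None]])
```

The result can be fed to COPY table FROM ... WITH (FORMAT binary), or to the equivalent copy API of a client driver.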

Re: [GENERAL] COPY command & binary format

2016-05-10 Thread Sameer Kumar
On Tue, May 10, 2016 at 4:36 PM Sameer Kumar wrote: > On Tue, May 10, 2016 at 4:26 PM Nicolas Paris wrote: > >> Hello, >> >> What is the way to build a binary format (instead of a csv) ? Is there >> specification for this file ? >>

Re: [GENERAL] COPY command & binary format

2016-05-10 Thread Sameer Kumar
On Tue, May 10, 2016 at 4:26 PM Nicolas Paris wrote: > Hello, > > What is the way to build a binary format (instead of a csv) ? Is there > specification for this file ? > http://www.postgresql.org/docs/9.5/static/sql-copy.html > > > Could I create such format from java ? >

Re: [GENERAL] COPY FROM STDIN

2016-01-10 Thread Jim Nasby
On 1/8/16 10:37 AM, Luke Coldiron wrote: On 1/6/16 9:45 PM, Luke Coldiron wrote: In the example above I'm not sure if I can use some sub struct of the SPIPlanPtr and hand it off to the DoCopy function as the CopyStmt or if I need to go about this entirely different. Any advice on the matter

Re: [GENERAL] COPY FROM STDIN

2016-01-08 Thread Luke Coldiron
> On 1/6/16 9:45 PM, Luke Coldiron wrote: > > In the example above I'm not sure if I can use some sub struct of the > > SPIPlanPtr and hand it off to the DoCopy function as the CopyStmt or > > if I need to go about this entirely different. Any advice on the > > matter would be much appreciated.

Re: [GENERAL] COPY FROM STDIN

2016-01-06 Thread Jim Nasby
On 1/6/16 9:45 PM, Luke Coldiron wrote: In the example above I'm not sure if I can use some sub struct of the SPIPlanPtr and hand it off to the DoCopy function as the CopyStmt or if I need to go about this entirely different. Any advice on the matter would be much appreciated. I don't know

Re: [GENERAL] COPY FROM STDIN

2016-01-06 Thread Luke Coldiron
> On 1/4/16 12:18 PM, Luke Coldiron wrote: > > Is there a way to achieve the performance of the COPY FROM STDIN command > > within a C extension function connected to the db connection that called > > the C function? I have text that I would like to receive as input to a C > > function that

Re: [GENERAL] COPY FROM STDIN

2016-01-05 Thread Jim Nasby
On 1/4/16 12:18 PM, Luke Coldiron wrote: Is there a way to achieve the performance of the COPY FROM STDIN command within a C extension function connected to the db connection that called the C function? I have text that I would like to receive as input to a C function that contains many COPY

Re: [GENERAL] COPY command file name encoding issue (UTF8/WIN1252)

2015-03-23 Thread Pujol Mathieu
Maybe a new option could be added to let the caller specify the file name encoding; the caller may know it because it creates the source/destination file. I tried to give it a WIN1252 text by doing COPY test TO convert_from(convert_to('C:/tmp/é.bin','UTF8'),'WIN1252') WITH BINARY but this call is not

Re: [GENERAL] COPY command file name encoding issue (UTF8/WIN1252)

2015-03-23 Thread Albe Laurenz
Pujol Mathieu wrote: I have a problem using COPY command with a file name containing non ASCII characters. I use Postgres 9.3.5 x64 on a Windows 7. OS local encoding is WIN1252. My database is encoded in UTF8. I initiate client connection with libpq, connection encoding is set to UTF8. I

Re: [GENERAL] Copy Data between different databases

2015-03-05 Thread Tim Semmelhaack
] Sent: Wednesday, 4 March 2015 15:48 To: Adrian Klaver Cc: Tim Semmelhaack; pgsql-general@postgresql.org Subject: Re: [GENERAL] Copy Data between different databases Hi Adrian: On Wed, Mar 4, 2015 at 1:03 AM, Adrian Klaver adrian.kla...@aklaver.com wrote

Re: [GENERAL] Copy Data between different databases

2015-03-05 Thread Jim Nasby
On 3/3/15 8:18 AM, Tim Semmelhaack wrote: When I run a much simpler version of the query with the -c Select .. option it works. Because the sql-scripts are quite long, I don't to do it without the -f option. When you say quite long... are you trying to do multiple commands in q1 or q2? As in,

Re: [GENERAL] Copy Data between different databases

2015-03-04 Thread Francisco Olarte
Hi Adrian: On Wed, Mar 4, 2015 at 1:03 AM, Adrian Klaver adrian.kla...@aklaver.com wrote: ​As you pointed, my bet is in the -f case COPY FROM STDIN expects the data on the file ( otherwise pg_dumps would not work ), but your sugestion seems to have a problem of double redirection, let me

Re: [GENERAL] Copy Data between different databases

2015-03-03 Thread Adrian Klaver
On 03/03/2015 06:18 AM, Tim Semmelhaack wrote: Hi, I want to copy data between two servers (Version 9.1 and 9.4) I've tried psql -h host1 -U user1 -d db1 -f /q1.sql | psql -h host2 -U user2 -d db2 -f /q2.sql Both sql-scripts include the COPY (SELECT ...) TO STDOUT or COPY (SELECT ...) TO

Re: [GENERAL] Copy Data between different databases

2015-03-03 Thread Francisco Olarte
Hi Adrian: On Tue, Mar 3, 2015 at 4:44 PM, Adrian Klaver adrian.kla...@aklaver.com wrote: On 03/03/2015 06:18 AM, Tim Semmelhaack wrote: Hi, I want to copy data between two servers (Version 9.1 and 9.4) I've tried ​​ psql -h host1 -U user1 -d db1 -f /q1.sql | psql -h host2 -U user2 -d

Re: [GENERAL] Copy Data between different databases

2015-03-03 Thread Ryan King
Have you considered using dblink() or foreign data wrappers to transfer the data? You can do a select from source, insert into target using one of these methods. RC On Mar 3, 2015, at 12:09 PM, Francisco Olarte fola...@peoplecall.com wrote: Hi Adrian: On Tue, Mar 3, 2015 at 4:44 PM,

Re: [GENERAL] Copy Data between different databases

2015-03-03 Thread Adrian Klaver
On 03/03/2015 10:09 AM, Francisco Olarte wrote: Hi Adrian: On Tue, Mar 3, 2015 at 4:44 PM, Adrian Klaver adrian.kla...@aklaver.com mailto:adrian.kla...@aklaver.com wrote: On 03/03/2015 06:18 AM, Tim Semmelhaack wrote: Hi, I want to copy data between two servers (Version

Re: [GENERAL] COPY data into a table with a SERIAL column?

2014-10-16 Thread Rob Sargent
On 10/16/2014 10:33 AM, Steve Wampler wrote: Hi, This is with Postgresql 9.3.5. I'm looking at using a COPY command (via jdbc) to do bulk inserts into a table that includes a BIGSERIAL column. Is there a way to mark the data in that column so it gets assigned a new value on entry - akin

Re: [GENERAL] COPY data into a table with a SERIAL column?

2014-10-16 Thread Steve Wampler
On 10/16/2014 09:42 AM, Rob Sargent wrote: On 10/16/2014 10:33 AM, Steve Wampler wrote: This is with Postgresql 9.3.5. I'm looking at using a COPY command (via jdbc) to do bulk inserts into a table that includes a BIGSERIAL column. Is there a way to mark the data in that column so it gets

Re: [GENERAL] COPY data into a table with a SERIAL column?

2014-10-16 Thread David G Johnston
Steve Wampler wrote Let me generalize the problem a bit: How can I specify that the default value of a column is to be used with a COPY command when some rows have values for that column and some don't? If you provide a value for a column, including NULL, the default expression is not
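Building on that: to have the sequence supply every value, one option is to strip the serial column from the input and name only the remaining columns in COPY's column list, so the DEFAULT fires for the omitted column. A hedged sketch with Python's csv module; the column name and data are hypothetical, and it assumes the incoming values for that column can be discarded:

```python
import csv
import io

def drop_column(lines, name):
    """Remove the column called `name` from CSV `lines` (header row
    first), so that COPY with an explicit column list lets the table's
    DEFAULT -- e.g. a bigserial's sequence -- fill it in."""
    rows = csv.reader(lines)
    header = next(rows)
    keep = [i for i, col in enumerate(header) if col != name]
    buf = io.StringIO()
    writer = csv.writer(buf)
    writer.writerow([header[i] for i in keep])
    for row in rows:
        writer.writerow([row[i] for i in keep])
    return buf.getvalue()

# Toy input: "id" is the serial column being dropped before COPY.
slim = drop_column(["id,name,score", "7,alice,10"], "id")
```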

Re: [GENERAL] COPY data into a table with a SERIAL column?

2014-10-16 Thread Rob Sargent
On 10/16/2014 11:04 AM, Steve Wampler wrote: On 10/16/2014 09:42 AM, Rob Sargent wrote: On 10/16/2014 10:33 AM, Steve Wampler wrote: This is with Postgresql 9.3.5. I'm looking at using a COPY command (via jdbc) to do bulk inserts into a table that includes a BIGSERIAL column. Is there a

Re: [GENERAL] COPY data into a table with a SERIAL column?

2014-10-16 Thread Rob Sargent
On 10/16/2014 11:38 AM, David G Johnston wrote: Steve Wampler wrote Let me generalize the problem a bit: How can I specify that the default value of a column is to be used with a COPY command when some rows have values for that column and some don't? If you provide a value for a column,

Re: [GENERAL] COPY data into a table with a SERIAL column?

2014-10-16 Thread David G Johnston
On Thu, Oct 16, 2014 at 11:44 AM, lup [via PostgreSQL] wrote: I appreciate the vastness of bigserial but I think it starts at 1. Are negative numbers even allowed? http://www.postgresql.org/docs/9.3/static/sql-createsequence.html A DEFAULT

Re: [GENERAL] COPY data into a table with a SERIAL column?

2014-10-16 Thread Rob Sargent
On 10/16/2014 11:52 AM, David G Johnston wrote: On Thu, Oct 16, 2014 at 11:44 AM, lup [via PostgreSQL] wrote: I appreciate the vastness of bigserial but I think it starts at 1. Are negative numbers even allowed?

Re: [GENERAL] COPY data into a table with a SERIAL column?

2014-10-16 Thread Steve Wampler
On 10/16/2014 10:44 AM, Rob Sargent wrote: On 10/16/2014 11:38 AM, David G Johnston wrote: COPY is dumb but fast. If you need logic you need to add it yourself. Either before the copy or copy into a temporary UNLOGGED table and write smart SQL to migrate from that to the live table. You can

Re: [GENERAL] COPY data into a table with a SERIAL column?

2014-10-16 Thread Adrian Klaver
On 10/16/2014 11:17 AM, Steve Wampler wrote: On 10/16/2014 10:44 AM, Rob Sargent wrote: On 10/16/2014 11:38 AM, David G Johnston wrote: COPY is dumb but fast. If you need logic you need to add it yourself. Either before the copy or copy into a temporary UNLOGGED table and write smart SQL to

Re: [GENERAL] copy/dump database to text/csv files

2014-07-25 Thread Francisco Olarte
Hi William: On Thu, Jul 24, 2014 at 9:04 PM, William Nolf bn...@xceleratesolutions.com wrote: We have a postgres database that was used for an application we no longer use. However, we would like to copy/dump the tables to files, text or csv so we can post them to sharepoint. How BIG

Re: [GENERAL] copy/dump database to text/csv files

2014-07-25 Thread Marc Mamin
This is probably an easy one for most sql users but I don't use it very often. We have a postgres database that was used for an application we no longer use. However, we would like to copy/dump the tables to files, text or csv so we can post them to sharepoint. Copy seems to be what I

Re: [GENERAL] copy/dump database to text/csv files

2014-07-24 Thread John R Pierce
On 7/24/2014 12:04 PM, William Nolf wrote: This is probably an easy one for most sql users but I don't use it very often. We have a postgres database that was used for an application we no longer use. However, we would like to copy/dump the tables to files, text or csv so we can post

Re: [GENERAL] copy/dump database to text/csv files

2014-07-24 Thread Thomas Kellerer
William Nolf wrote on 24.07.2014 21:04: This is probably an easy one for most sql users but I don't use it very often. We have a postgres database that was used for an application we no longer use. However, we would like to copy/dump the tables to files, text or csv so we can post them to

Re: [GENERAL] \COPY from CSV ERROR: unterminated CSV quoted field

2014-06-19 Thread Tom Lane
ogromm alex.schiller1...@web.de writes: I get the error unterminated CSV quoted field when I try to copy text with new line \. new line For example: CREATE TABLE test (text TEXT); \COPY test FROM 'test.csv' WITH DELIMITER ',' CSV HEADER; test.csv: Text some text \. more text Yeah,
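For what it's worth, CSV itself is fine with the embedded newline: Python's csv module round-trips such a value, which is consistent with the (truncated) reply above that the failure lies in psql treating a line consisting of \. as end-of-data, not in the CSV format:

```python
import csv
import io

# A field containing a newline, a literal "\." line, and another newline.
value = "some text\n\\.\nmore text"

# Write it: QUOTE_MINIMAL quotes the field because it contains newlines.
buf = io.StringIO()
csv.writer(buf).writerow([value])

# Read it back: the reader handles quoted multi-line fields intact.
back = next(csv.reader(io.StringIO(buf.getvalue())))
print(back[0] == value)  # True: standard CSV round-trips the value
```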

Re: [GENERAL] COPY TO returns ERROR: could not open file for writing: No such file or directory

2014-05-24 Thread Alban Hertroys
On 24 May 2014, at 8:21, David Noel david.i.n...@gmail.com wrote: COPY (SELECT * FROM page WHERE PublishDate between '2014-03-01' and '2014-04-01') TO '/home/ygg/sql/backup/pagedump.2014-03-01.to.2014-04-01.copy'; gives me: ERROR: could not open file

Re: [GENERAL] copy expensive local view to an RDS instance

2014-05-06 Thread bricklen
On Tue, May 6, 2014 at 5:52 AM, Marcus Engene meng...@engene.se wrote: Hi, I have a local db behind a firewall etc. Basically, I'd like to do what I'd locally would... create table abc as select * from local_expensive_view; abc - on RDS local_expensive_view - on local

Re: [GENERAL] copy expensive local view to an RDS instance

2014-05-06 Thread Marcus Engene
On 06/05/14 16:58, bricklen wrote: A very quick search shows that rds supports dblink, so perhaps that would work. http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/CHAP_PostgreSQL.html Then I'd need to open our servers to external visits. It would be lovely if dblink_exec could push a

Re: [GENERAL] copy expensive local view to an RDS instance

2014-05-06 Thread bricklen
On Tue, May 6, 2014 at 8:07 AM, Marcus Engene meng...@engene.se wrote: On 06/05/14 16:58, bricklen wrote: A very quick search shows that rds supports dblink, so perhaps that would work. http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/ CHAP_PostgreSQL.html Then I'd need to open our

Re: [GENERAL] copy expensive local view to an RDS instance

2014-05-06 Thread Marcus Engene
On 06/05/14 17:15, bricklen wrote: On Tue, May 6, 2014 at 8:07 AM, Marcus Engene meng...@engene.se mailto:meng...@engene.se wrote: On 06/05/14 16:58, bricklen wrote: A very quick search shows that rds supports dblink, so perhaps that would work.

Re: [GENERAL] copy expensive local view to an RDS instance

2014-05-06 Thread Paul Jungwirth
A very quick search shows that rds supports dblink Then I'd need to open our servers to external visits. This is sort of getting away from Postgres, but if the RDS instance is in a VPC, you could put a VPN on the VPC so dblink wouldn't have to go over the open Internet. Paul On Tue, May 6,

Re: [GENERAL] COPY v. java performance comparison

2014-04-03 Thread Rob Sargent
On 04/02/2014 08:40 PM, Adrian Klaver wrote: On 04/02/2014 05:30 PM, Rob Sargent wrote: On 04/02/2014 06:06 PM, Adrian Klaver wrote: On 04/02/2014 02:27 PM, Rob Sargent wrote: On 04/02/2014 03:11 PM, Adrian Klaver wrote: On 04/02/2014 02:04 PM, Rob Sargent wrote: On 04/02/2014 02:36 PM,

Re: [GENERAL] COPY v. java performance comparison

2014-04-03 Thread Thomas Kellerer
Rob Sargent, 02.04.2014 21:37: I loaded 37M+ records using jOOQ (batching every 1000 lines) in 12+ hours (800+ records/sec). Then I tried COPY and killed that after 11.25 hours when I realised that I had added a non-unique index on the name fields after the first load. By that point it was on

Re: [GENERAL] COPY v. java performance comparison

2014-04-03 Thread Andy Colson
On 4/2/2014 7:30 PM, Rob Sargent wrote: Well things slow down over time, and lots of too frequents: Have done 500 batches in 24219 ms Have done 1000 batches in 52362 ms Have done 1500 batches in 82256 ms Have done 2000 batches in 113754 ms Have done 2500 batches in

Re: [GENERAL] COPY v. java performance comparison

2014-04-03 Thread Rob Sargent
On 04/03/2014 09:01 AM, Thomas Kellerer wrote: Rob Sargent, 02.04.2014 21:37: I loaded 37M+ records using jOOQ (batching every 1000 lines) in 12+ hours (800+ records/sec). Then I tried COPY and killed that after 11.25 hours when I realised that I had added a non-unique index on the name fields

Re: [GENERAL] COPY v. java performance comparison

2014-04-03 Thread Rob Sargent
Is the java app cpu bound? Also watch vmstat 3 for a minute or two. The last two numbers (wa id) (some vmstat's have a steal, ignore that) will tell you if you are io bound. -Andy During COPY, with autovaccume off (server restarted, manual vacuum to get things going). Immediately
