Re: [GENERAL] COPY problem on -- strings

2008-11-20 Thread Sabin Coanda
> Is that the *first* error message you got?


Yes it is.

In fact I made a mistake in the first email; instead of:
INSERT INTO A ( Col1, Col2 )
VALUES (2, '-- any text' );

please read:
INSERT INTO A ( Col1, Col2 )
VALUES (1, '-- any text' );

However, I suppose this doesn't change the problem :(.

Regards,
Sabin 



-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] COPY problem on -- strings

2008-11-20 Thread Richard Huxton
Sabin Coanda wrote:
> Hi,
>
> I have PostgreSQL 8.3.5, compiled by Visual C++ build 1400, on Windows.
>
> I'm trying to use the COPY command to optimize backup/restore
> performance, but I found a problem, which I reproduce below.

I can't reproduce it here on 8.3 on Linux.

> I back up the database in plain format with the command:
>   pg_dump.exe -U postgres -F p -v -f backup_plain.sql DemoDB
>
> I create a new database and run the script, but it raises the error:
>   ERROR:  syntax error at or near "1"
>   LINE 49: 1 -- any text
>
> I looked at the error line and saw how pg_dump created the script
> statement:
>   COPY A (Col1, Col2) FROM stdin;
>   1 -- any text
>   \.

That's what I see too, and it's fine here.

Try trimming the file down to just those lines, manually create the
database and table and see if you can run the copy then. If so, then
Tom's right and there's an error before the COPY. If not, then you've
got something odd in the file (bad line-ending, invalid high-bit
character or some such).

-- 
  Richard Huxton
  Archonet Ltd



Re: [GENERAL] COPY problem on -- strings

2008-11-20 Thread Sabin Coanda
Sorry, my fault: I ran the script in the query window of pgAdmin, not in
the system console. I checked it again in the system console and it works
fine.
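For anyone who hits the same symptom: a plain-format dump restores data with `COPY ... FROM stdin;` followed by the rows inline, and only psql knows to switch into copy-data mode at that point; pgAdmin's query window parses the rows as SQL. Restoring from the console might look like this (the target database name is illustrative):

```
C:\> createdb -U postgres NewDemoDB
C:\> psql -U postgres -d NewDemoDB -f backup_plain.sql
```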

Thanks,
Sabin 





Re: [GENERAL] COPY problem on -- strings

2008-11-19 Thread Tom Lane
Sabin Coanda [EMAIL PROTECTED] writes:
> I back up the database in plain format with the command:
>   pg_dump.exe -U postgres -F p -v -f backup_plain.sql DemoDB
>
> I create a new database and run the script, but it raises the error:
>   ERROR:  syntax error at or near "1"
>   LINE 49: 1 -- any text
>
> I looked at the error line and saw how pg_dump created the script
> statement:
>   COPY A (Col1, Col2) FROM stdin;
>   1 -- any text
>   \.

Is that the *first* error message you got?

My guess is that something went wrong with the COPY command, so that
psql failed to switch into copy-data mode and is trying to interpret
the following stuff as SQL commands.

regards, tom lane
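As a sketch of that failure mode (hypothetical, not taken from the actual dump): suppose the COPY statement itself fails, for instance because table A does not exist in the target database. psql then stays in SQL mode, and the next line of the dump is parsed as a new SQL statement:

```sql
COPY A (Col1, Col2) FROM stdin;
1	-- any text
\.
```

Executing the data row `1	-- any text` as SQL yields exactly `syntax error at or near "1"`, matching the report above, so the real error to look for is whatever made the COPY itself fail.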



Re: [GENERAL] COPY problem

2003-07-16 Thread Andrew Gould
--- Ron [EMAIL PROTECTED] wrote:
> I am creating a new database on a brand new server (P4, 1GB RAM,
> postgres 7.3.3, debian 3.0) and trying to populate one of the tables
> with the COPY command. I split a large file with 20 million records
> into 20 files, but when I run COPY I usually get the following message:
>
> analytics=# COPY tbl555 FROM '/usr/local/pgsql/xaa' WITH NULL AS '';
> ERROR:  copy: line 167641, Query was cancelled.
>
> The line number varies each time I run it, and it occasionally
> succeeds. I split the data into even smaller files (100,000 rows) with
> the same results. Then I tried it on one of our older debian boxes
> (version ?) with postgres 7.3 and the COPYs succeed. I turned up
> logging on the new server, but the messages returned are:
>
> Jul 16 09:04:43 imp postgres[31180]: [72-2] Free/Avail. Space 0/0;
> EndEmpty/Avail. Pages 0/0.
> Jul 16 09:04:43 imp postgres[31180]: [72-3] ^ICPU 0.00s/0.00u sec
> elapsed 0.00 sec.
> Jul 16 09:04:43 imp postgres[31180]: [73-1] DEBUG:  Index
> pg_toast_4070343_index: Pages 1; Tuples 0.
> Jul 16 09:04:43 imp postgres[31180]: [73-2] ^ICPU 0.00s/0.00u sec
> elapsed 0.00 sec.
> Jul 16 09:04:43 imp postgres[31180]: [74] DEBUG: CommitTransactionCommand
> Jul 16 09:04:57 imp postgres[31180]: [75] DEBUG: StartTransactionCommand
> Jul 16 09:04:57 imp postgres[31180]: [76] DEBUG: ProcessUtility
>
> analytics=# COPY tc555 FROM '/usr/local/pgsql/xaa' WITH NULL AS '';
> DEBUG:  StartTransactionCommand
> DEBUG:  ProcessUtility
> ERROR:  copy: line 218765, Query was cancelled.
> DEBUG:  AbortCurrentTransaction
> ERROR:  copy: line 218765, Query was cancelled.
>
> I set the following values in postgresql.conf:
> tcpip_socket = true
> shared_buffers = 1000           # min max_connections*2 or 16, 8KB each
> max_fsm_relations = 1000        # min 10, fsm is free space map, ~40 bytes
> sort_mem = 32168                # min 64, size in KB
> vacuum_mem = 64336              # min 1024, size in KB
> fsync = false
> effective_cache_size = 32768    # typically 8KB each
> geqo_threshold = 25
> server_min_messages = debug5    # Values, in order of decreasing detail:
> syslog = 1                      # range 0-2
> LC_MESSAGES = 'C'
> LC_MONETARY = 'C'
> LC_NUMERIC = 'C'
> LC_TIME = 'C'
>
> If someone has any ideas what the problem may be, please let me know.
> I am thinking that it may involve one of the following, but am not
> sure which:
> - my configuration file changes
> - ram
> - driver issues - motherboard = asus p4 p800
> - driver issues - hard drive (IDE)
> - ?
>
> Thanks in advance
> Ron

Does your postgresql partition have enough space?

I was testing a Linux distro recently and gave /usr
lots of space, forgetting that Linux distros often
install postgresql and mysql to /var/db (or
/var/lib/db --  I forget).

If this is your problem (and you don't want to
reinstall Debian), stop the database server, move the
postgresql directory to a spacious partition, and
create a link to it in the original location.
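If space does turn out to be the problem, the check and the move-and-symlink fix might look roughly like this (paths are illustrative and vary by distro; stop the server first so the data directory isn't moved out from under it):

```
$ df -h /var/lib/postgres            # is the partition full?
$ /etc/init.d/postgresql stop
$ mv /var/lib/postgres /bigdisk/postgres
$ ln -s /bigdisk/postgres /var/lib/postgres
$ /etc/init.d/postgresql start
```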

Best of luck,

Andrew Gould

---(end of broadcast)---
TIP 6: Have you searched our list archives?

   http://archives.postgresql.org


Re: [GENERAL] COPY problem

2001-03-14 Thread Tom Lane

"Creager, Robert S" [EMAIL PROTECTED] writes:
> I'm working with 2.5 million records btw.  I've narrowed the situation to
> occur when copying to any one of the 5 referring tables (COPY u FROM stdin).
> The backend process which handles the db connection decides that it needs a
> whole lot of memory, although in a nice controlled manner.  The backend
> starts with using 6.5Mb, and at 25000 records copied, it's taken 10Mb and
> has slowed down substantially.  Needless to say, this COPY will not finish
> before running out of memory (estimated 300Mb).  When executing the COPY to
> the loc table, this problem does not occur.  Am I going to have to resort to
> inserts for the referring tables?

It turns out that the main bug here is not that you see a memory leak
for the referring tables, but that you fail to see one for the referred
table :-(.  We implement foreign key constraints via AFTER EVENT
triggers, and the current implementation of such triggers requires
saving information about each insert/update/delete event occurring
during the current transaction.  So that's where the memory is going.

I noticed that the trigger code was being a little bit stupid about
calculating the amount of memory it needed to allocate, so I modified
that, for a net savings of perhaps a third of the per-tuple memory.
But that's about as much as we can do about the issue for 7.1.

I think the code could be made smarter --- in particular, I doubt that
it's really necessary to save info about INSERT events when you have
only AFTER UPDATE and/or AFTER DELETE triggers.  But this seems too
delicate a change to risk making at this point in the 7.1 cycle.  We'll
just have to live with it for awhile longer.

In the meantime I'd suggest limiting the number of tuples that you
insert per transaction.
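A rough sketch of that advice, assuming the load file has already been cut into pieces (file and table names are illustrative): run each COPY in its own transaction, so the saved AFTER-trigger event memory is released at every commit rather than accumulating across all 2.5M rows.

```sql
BEGIN;
COPY u FROM '/data/pieces/u_part_aa';
COMMIT;   -- trigger-event memory for this batch is released here

BEGIN;
COPY u FROM '/data/pieces/u_part_ab';
COMMIT;
```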

regards, tom lane




Re: [GENERAL] COPY problem

2001-03-12 Thread Gordon A. Runkle

In article [EMAIL PROTECTED],
"Creager, Robert S" [EMAIL PROTECTED] wrote:

> I think this is a question regarding the backend, but...

[snip]

> (COPY u FROM stdin). The backend process which handles the db connection
> decides that it needs a whole lot of memory, although in a nice
> controlled manner.  The backend starts with using 6.5Mb, and at 25000
> records copied, it's taken 10Mb and has slowed down substantially.
> Needless to say, this COPY will not finish before running out of memory
> (estimated 300Mb).  When executing the COPY to the loc table, this
> problem does not occur.  Am I going to have to resort to inserts for the
> referring tables?

I can't answer the backend question, but how about running
'split' on the big file, then COPYing these smaller files?
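A concrete sketch of that (file, table, and database names are illustrative; the actual load step is commented out because it needs a live server):

```shell
# Split a large load file into 100,000-line pieces, one COPY per piece.
seq 1 250000 > bigfile.dat            # stand-in for the real data file
split -l 100000 bigfile.dat chunk_    # produces chunk_aa, chunk_ab, chunk_ac
ls chunk_* | wc -l                    # 3 pieces
# Then load each piece separately (requires a running server), e.g.:
#   for f in chunk_*; do psql -d mydb -c "COPY u FROM '$PWD/$f'"; done
```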

Gordon.

-- 
It doesn't get any easier, you just go faster.
   -- Greg LeMond




RE: [GENERAL] COPY problem

2001-03-12 Thread Creager, Robert S


Tom believes there may be a memory leak, which would explain the
(strangely enough) memory problem.  I didn't think about reducing the
import size.  What I might try in that case is to re-connect to the db
periodically, rather than splitting the file.  The problem becomes
unmanageable after around 15000 entries, so splitting 2.5M lines wouldn't
be pleasant (167 files).

Thanks,
Rob

Robert Creager
Senior Software Engineer
Client Server Library
303.673.2365 V
303.661.5379 F
888.912.4458 P
StorageTek
INFORMATION made POWERFUL



> -----Original Message-----
> From: Gordon A. Runkle [mailto:[EMAIL PROTECTED]]
> Sent: Friday, March 09, 2001 7:18 PM
> To: [EMAIL PROTECTED]
> Subject: Re: [GENERAL] COPY problem
>
> I can't answer the backend question, but how about running
> 'split' on the big file, then COPYing these smaller files?
>
> Gordon.
