RE: [GENERAL] Copy and serial type problem

2001-07-23 Thread Creager, Robert S


You don't indicate how you're executing the copy.  If you're using a
script/program, just add your own counter as the first field in the copy.  I
do this 'all the time' from Perl scripts, and it works fine.  If you're
using psql, I haven't a clue unless you massage the input data before doing
the copy.
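
The rough idea, with made-up data values, and the table/sequence names taken
from your message below:

COPY table FROM stdin;
1	first_file.dat
2	second_file.dat
\.
-- afterwards, resync the sequence so future inserts get fresh ids
SELECT setval('table_idd_seq', max(id)) FROM table;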

Rob

 -----Original Message-----
 From: Przemyslaw 'Morp][ik' Berlinski
 [mailto:[EMAIL PROTECTED]]
 Sent: Sunday, July 22, 2001 1:13 PM
 To: [EMAIL PROTECTED]
 Subject: [GENERAL] Copy and serial type problem
 
 
 Hi,
 
 I've created a small table:
 
 CREATE TABLE table (
   id int4 DEFAULT nextval('table_idd_seq') NOT NULL,
   file character varying(40) NOT NULL
 );
 
 The sequence 'table_idd_seq' also exists and holds the next id value.
 
 The question is how to use the postgres COPY function with this 'id'.
 Table 'table' is not empty... I've tried to insert nextval('table_idd_seq')
 into the file, but it doesn't work.
 
 Any help would be appreciated.
 
 TIA
 Morp][ik
 
 
 
 
 




RE: [GENERAL] very odd behavior

2001-05-10 Thread Creager, Robert S


desc is a reserved keyword - as in ORDER BY ... DESC (descending) - so it
can't be used unquoted as a column name.
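
If you really need a column named desc, double-quoting the identifier works -
it's standard SQL, you just have to quote it in every query too:

create table bla("desc" text, def text, data text);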

Robert Creager
StorageTek
INFORMATION made POWERFUL

 -Original Message-
 From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]]
 
 I have 7.1
 
 Can someone take a look the following
 and tell me why I'm getting errors?
 I'm completely baffled!
 
 
 what= create table bla(desc text,def text,data text);
 ERROR:  parser: parse error at or near desc
 what= create table bla(desc text,def text,data text);
 CREATE
 what=




[GENERAL] lock in access exclusive and sequence question

2001-04-04 Thread Creager, Robert S


I'm creating a script which will re-claim sequence numbers in a table by
'packing' the existing sequence numbers.  My question is: if I lock the
table in ACCESS EXCLUSIVE mode, and an insert into that table occurs after
the lock, will the insert be blocked before or after its nextval is chosen?
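
For reference, the pattern I have in mind is roughly this (table and sequence
names hypothetical):

BEGIN;
LOCK TABLE t IN ACCESS EXCLUSIVE MODE;
-- pack/renumber the existing ids here
SELECT setval('t_id_seq', max(id)) FROM t;
COMMIT;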

Thanks,
Rob

Robert Creager
Senior Software Engineer
Client Server Library
303.673.2365 V
303.661.5379 F
888.912.4458 P
StorageTek
INFORMATION made POWERFUL






[GENERAL] plpgsql trigger on a cascaded delete

2001-03-16 Thread Creager, Robert S


Hey all,

I have 3 tables - A refers to B and C, and A has ON DELETE CASCADE for
referring columns.  I'm trying to delete from B, and through the CASCADE to
A's BEFORE DELETE TRIGGER, SELECT a value from B, and then UPDATE C.  The
problem is that along this path, when I SELECT from B, the row matching my
selection criteria no longer exists - it's already been deleted (unless the
trigger throws an exception, in which case it's back).  Is there an easy way
around this?  I'm sure
I can gin up another BEFORE DELETE TRIGGER for B, SELECT everything I want
to delete from A, and delete it in a loop in B's trigger, but was hoping for
an 'easier' solution.  Note that the trigger works fine when deleting from A
directly...
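
For concreteness, the trigger is shaped roughly like this - all names invented,
and RETURNS opaque since this is 7.1-era plpgsql:

CREATE FUNCTION a_before_delete() RETURNS opaque AS '
DECLARE
    b_val INTEGER;
BEGIN
    -- reached via the CASCADE from B, this SELECT finds nothing,
    -- because the B row is already gone
    SELECT val INTO b_val FROM b WHERE id = OLD.b_id;
    UPDATE c SET total = total - b_val WHERE id = OLD.c_id;
    RETURN OLD;
END;
' LANGUAGE 'plpgsql';

CREATE TRIGGER a_before_delete BEFORE DELETE ON a
    FOR EACH ROW EXECUTE PROCEDURE a_before_delete();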

Thanks,
Rob

Robert Creager
Senior Software Engineer
Client Server Library
303.673.2365 V
303.661.5379 F
888.912.4458 P
StorageTek
INFORMATION made POWERFUL






RE: [GENERAL] COPY again...

2001-03-12 Thread Creager, Robert S


If the file is truly CSV (comma separated values), you might want to change
DELIMITERS '\t' to DELIMITERS ','...  Otherwise, include a couple of lines
of data...
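
Something like this, if commas really are the separators:

COPY entreprise FROM '/home/pgsql/data/ENTREP_Tony.csv' USING DELIMITERS ',';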

Robert Creager
Senior Software Engineer
Client Server Library
303.673.2365 V
303.661.5379 F
888.912.4458 P
StorageTek
INFORMATION made POWERFUL



 -----Original Message-----
 From: Tony Grant [mailto:[EMAIL PROTECTED]]
 Sent: Wednesday, March 07, 2001 7:34 AM
 To: [EMAIL PROTECTED]
 Subject: [GENERAL] COPY again...
 
 
 Gets me every time this one...
 
 COPY entreprise FROM '/home/pgsql/data/ENTREP_Tony.csv' USING DELIMITERS '\t';
 
 gives
 
 No data returned 
 
 What is wrong with my data file - .csv file from Gnumeric export?
 
 Cheers
 
 Tony Grant
 
 



RE: [GENERAL] COPY problem

2001-03-12 Thread Creager, Robert S


Tom believes there may be a memory leak, which would be causing the
(strangely enough) memory problem.  Didn't think about reducing the import
size.  What I might try in that case would be to re-connect to the db
periodically, rather than splitting the file.  The problem becomes
unmanageable after around 15000 entries, so splitting 2.5M lines wouldn't be
pleasant (167 files).

Thanks,
Rob

Robert Creager
Senior Software Engineer
Client Server Library
303.673.2365 V
303.661.5379 F
888.912.4458 P
StorageTek
INFORMATION made POWERFUL



 -Original Message-
 From: Gordon A. Runkle [mailto:[EMAIL PROTECTED]]
 Sent: Friday, March 09, 2001 7:18 PM
 To: [EMAIL PROTECTED]
 Subject: Re: [GENERAL] COPY problem
 
 I can't answer the backend question, but how about running
 'split' on the big file, then COPYing these smaller files?
 
 Gordon.
 




[GENERAL] COPY problem

2001-03-09 Thread Creager, Robert S


I think this is a question regarding the backend, but...

I'm in the process of changing 1 large table (column wise) into 6 smaller
tables, and ran into a situation.  I'm using Postgresql 7.1beta5, Pg as
included, Perl 5.6, Solaris 2.6 on an Ultra 5.

The new setup is 6 tables: the 'main' table loc with id SERIAL, while the
other 5 (u, b, v, r, i) are identical, each with id INT4 REFERENCES loc.
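
In outline, that's the following - the non-id columns here are invented just
for illustration:

CREATE TABLE loc (id SERIAL PRIMARY KEY, x FLOAT8, y FLOAT8);
CREATE TABLE u (id INT4 REFERENCES loc, mag FLOAT8);
-- b, v, r, and i look just like u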

What I've done is copy the original table into a file, and am now attempting
to copy from stdin, using Perl/Pg to break out the data into the 6 tables.
I'm working with 2.5 million records, btw.  I've narrowed it down: the problem
occurs when copying into any one of the 5 referring tables (COPY u FROM stdin).
The backend process which handles the db connection decides that it needs a
whole lot of memory, although in a nice controlled manner.  The backend
starts with using 6.5Mb, and at 25000 records copied, it's taken 10Mb and
has slowed down substantially.  Needless to say, this COPY will not finish
before running out of memory (estimated 300Mb).  When executing the COPY to
the loc table, this problem does not occur.  Am I going to have to resort to
inserts for the referring tables?  

Robert Creager
Senior Software Engineer
Client Server Library
303.673.2365 V
303.661.5379 F
888.912.4458 P
StorageTek
INFORMATION made POWERFUL






RE: [GENERAL] Slowdown problem when writing 1.7million records

2001-03-02 Thread Creager, Robert S


I just joined this list, so pardon if this has been suggested.

Have you tried 'COPY expafh FROM stdin', rather than inserting each record?
I'm managing a 2.5 million record import, creating a btree index on two
columns, and then vacuuming the db in 36 minutes (on an Ultra 5 - similar to
an AMD K6-2 500).  The data is being read from a 600Mb file.
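
The difference, in psql terms, is between 1.7 million statements like

INSERT INTO expafh VALUES ('a', 1, 'b', 'c');

and a single COPY fed tab-separated rows (values made up here; the column
layout is whatever expafh actually has):

COPY expafh FROM stdin;
a	1	b	c
\.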

I'm also using 7.1beta5 with the -F flag on the backend (prevents db server
from flushing after each transaction - can be dangerous, but the server is
faster).  I've attached a Perl script I use - the key being the putline
command.

Note that when using COPY, default values, sequences, etc. are not used.  If
you have a SERIAL field, you have to put in the incrementing values
yourself, and then use 'setval' to get things correct again.

I apologize for the Perl script - it's not commented.  If you have trouble
understanding it, let me know and I'll spruce it up.

Robert Creager
Senior Software Engineer
Client Server Library
303.673.2365 V
303.661.5379 F
888.912.4458 P
StorageTek
INFORMATION made POWERFUL



 -----Original Message-----
 From: Stephen Livesey [mailto:[EMAIL PROTECTED]]
 Sent: Wednesday, February 28, 2001 2:20 AM
 To: Tom Lane
 Cc: [EMAIL PROTECTED]
 Subject: RE: [GENERAL] Slowdown problem when writing 1.7million records
 
 
 
 
  No, it's not.  Do you have any triggers or rules on this table that
  you haven't shown us?  How about other tables referencing this one
  as foreign keys?  (Probably not, if you're running an identical test
  on MySQL, but I just want to be sure that I'm not missing 
 something.)
 
 I have no triggers or rules.  I have only created 1 table and there are no
 foreign keys.
 
 
  How exactly are you writing the records?
 
 First I read the data from a 'Powerflex' file and hold this 
 in a record set.
  pfxstmt = pfxconn.createStatement();
  pfxrs = pfxstmt.executeQuery("SELECT * from expafh");
 
 I then perform a loop which writes the data to my 'Postgresql' file as
 follows:
   stmt = conn.createStatement();
   while (pfxrs.next()) {
       cmd = "INSERT INTO expafh VALUES ";
       cmd = cmd + "('" + pfxrs.getString(2) + "'," + pfxrs.getString(3)
                 + ",'" + pfxrs.getString(4) + "','" + pfxrs.getString(5) + "')";
       stmt.executeUpdate(cmd);
   }
 
 
  I have a suspicion that the slowdown must be on the client 
 side (perhaps
  some inefficiency in the JDBC code?) but that's only a guess at this
  point.
 
 
 I have used identical code for all of my testing, the only 
 changes being
 which drivers I use to access the data.
 
 
 Thanks
 Stephen Livesey
 
 


 tassivSeed.pl

