Re: [GENERAL] resin-cmp, apache 1.3.20, ultradev4, and postgresql 7.0 problems
On 23 Aug 2001 13:05:51 -0400, tomasz konefal wrote:
> i'm having trouble getting ultradev to make a connection with a postgresql
> database on my server. does anyone have any experience with this? i have a
> test .jsp script which can pull data out of a test db, but ultradev doesn't
> want to make a connection.
>
> when selecting the database connection for postgresql (some info downloaded
> from http://www.animaproductions.com/ultra.html) we get the following error:
>
> "While executing inspectConnection in postgresql_jdbc_conn.htm, a JavaScript
> error occurred."

Where does resin want the postgresql.jar? I have never run resin, so I won't be of much help. However, the error message seems to indicate that the driver is not found. The location has changed in Tomcat 3.3. I am overloaded with work at the present time, so I will wait until Tomcat 3.3 gets out of beta before modifying my production server.

Also, this really should be on the JDBC list.

Cheers

Tony Grant

---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
(send "unregister YourEmailAddressHere" to [EMAIL PROTECTED])
[GENERAL] valid NULL DATE value
Hello,

I am importing via pgaccess a text file from another non-postgres database, and the NULL DATE values are written like 00/00/00.

What I have tried is replacing 00/00/00 with 9/9/1999 and setting the style to European, and I am getting 'can't parse /9/1999' errors.

How do I go about importing DATE?

Cheers

Tony Grant

--
RedHat Linux on Sony Vaio C1XD/S http://www.animaproductions.com/linux2.html
Macromedia UltraDev with PostgreSQL http://www.animaproductions.com/ultra.html
Re: [GENERAL] SELECT FOR UPDATE
Jan Wieck wrote:
>Oliver Elphick wrote:
>
>>Jan Wieck wrote:
>>
>> >But the question itself tells that you're about to implement
>> >a major design error in your application. Holding database
>> >locks during user interaction IS A BAD THING. Never, never
>> >ever do it that way. And anybody telling you something
>> >different is an overpaid idiot.
>>
>>I can see arguments to support this view, but consider this classic
>>scenario:
>>
>>User1: Read data into an interactive program
>>User1: Start to make changes
>>User2: Read data into an interactive program
>>User2: Start to make changes
>>User1: Save changes
>>User2: Save changes
>
>All ERP systems I know deal with that issue by inserting and
>deleting some advisory lock information in another table.
>Let's say you want to change customer 4711's address. Before
>letting you do so on the edit screen, the application tries
>to insert "CUST.4711" into a central lock table. Now this
>thing has a unique index on that field, so if someone else is
>already editing 4711, it'll fail and the application can tell
>you so and won't let you do the same.

Unfortunately this approach has a problem: what if the backend or the application crashes in the middle of editing?

This could also be done by adding a field to the record itself, setting it every time you start editing and unsetting it when the edit is finished. But in that case you need to update the record every time you read it :(

This job should be done by the DB itself; perhaps this is the way it works right now!

>AFAIK it's the only way to deal with that problem. Think
>about scaling as well. No enterprise class software has a DB
>connection per interactive user. They all have some sort of
>DB-middle-tier-presentation model where many users share a few
>DB connections.

Jose Soares
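The lock-table pattern Jan describes can be sketched in SQL like this (table and column names are illustrative, not from any real ERP; the timestamp column is one answer to the crash objection above, since a cleanup job can expire stale locks):

```sql
-- Sketch of the advisory-lock-table pattern (names are illustrative).
CREATE TABLE edit_locks (
    resource   text NOT NULL,                    -- e.g. 'CUST.4711'
    locked_by  text NOT NULL,                    -- who holds the edit lock
    locked_at  timestamp NOT NULL DEFAULT now()  -- lets a cleanup job expire stale locks
);
CREATE UNIQUE INDEX edit_locks_resource_idx ON edit_locks (resource);

-- Before opening the edit screen.  A second session running the same
-- INSERT hits the unique index and fails, so the application can
-- refuse the concurrent edit:
INSERT INTO edit_locks (resource, locked_by) VALUES ('CUST.4711', 'user1');

-- After the edit is saved or cancelled:
DELETE FROM edit_locks WHERE resource = 'CUST.4711' AND locked_by = 'user1';
```

No database lock is held while the user stares at the screen; only the short INSERT and DELETE transactions touch the lock table.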
Re: [GENERAL]
I have done VACUUM ANALYZE too, but the statistics continue to prefer a sequential scan...

Now I'll try a different approach:
- I'll create the empty table with a CREATE TABLE (and a primary key on col1)
- then I'll populate it using an INSERT...SELECT statement
- last, I'll check what the statistics say about the SELECT on the primary-key query.

When I've done that, I'll tell you...

Denis

At 19.03 23/08/01, Doug McNaught wrote:
>Denis Gasparin <[EMAIL PROTECTED]> writes:
>
> > Hi to all!
> > I have created a table using CREATE TABLE new_table
> > (col1,col2,col3) AS SELECT col1,col2,col3 FROM org_table.
> >
> > I created an index on this table using the statement:
> > CREATE UNIQUE INDEX table_idx ON new_table (col1).
> > Then I do a select like this:
> > SELECT * FROM new_table WHERE col1 = 'value'.
> >
> > The problem is that when I do an explain, this is the query plan:
> >
> > Seq Scan on new_table (cost=0.00..1116.38 rows=500 width=44)
> >
> > Can anyone explain to me why it doesn't use the index I have created?
>
>How populated is the table? If it's small, or if you haven't done
>VACUUM ANALYZE, the statistics may end up preferring a sequential
>scan.
>
>-Doug
>--
>Free Dmitry Sklyarov!
>http://www.freesklyarov.org/
>
>We will return to our regularly scheduled signature shortly.
Re: [GENERAL]
Now I have tried creating the table and then inserting... The results are the same... Is it possible that the query planner thinks a sequential scan is best even when an index on the table is present?

I'm using PostgreSQL 7.1.3 on Red Hat 7.1.

Thanks for the help,

Denis

P.S.: I'm sorry for having missed the subject of the mail

At 11.54 24/08/01, Denis Gasparin wrote:
>I have done VACUUM ANALYZE too, but the statistics continue to prefer a
>sequential scan...
>
>Now I'll try a different approach:
>- I'll create the empty table with a CREATE TABLE (and a primary key on col1)
>- then I'll populate it using an INSERT...SELECT statement
>- last, I'll check what the statistics say about the SELECT on the
>primary-key query.
>
>When I've done that, I'll tell you...
>
>Denis
>
>At 19.03 23/08/01, Doug McNaught wrote:
>>Denis Gasparin <[EMAIL PROTECTED]> writes:
>>
>> > Hi to all!
>> > I have created a table using CREATE TABLE new_table
>> > (col1,col2,col3) AS SELECT col1,col2,col3 FROM org_table.
>> >
>> > I created an index on this table using the statement:
>> > CREATE UNIQUE INDEX table_idx ON new_table (col1).
>> > Then I do a select like this:
>> > SELECT * FROM new_table WHERE col1 = 'value'.
>> >
>> > The problem is that when I do an explain, this is the query plan:
>> >
>> > Seq Scan on new_table (cost=0.00..1116.38 rows=500 width=44)
>> >
>> > Can anyone explain to me why it doesn't use the index I have created?
>>
>>How populated is the table? If it's small, or if you haven't done
>>VACUUM ANALYZE, the statistics may end up preferring a sequential
>>scan.
>>
>>-Doug
Re: [GENERAL] valid NULL DATE value
> I am importing via pgaccess a text file from another non-postgres
> database and the NULL DATE values are written like 00/00/00.
>
> What I have tried is replacing 00/00/00 by 9/9/1999 and setting the
> style to european and I am getting 'can't parse /9/1999' errors.

Try replacing 00/00/00 with NULL in the text file. If the text file is organized to be read by the COPY command, then use \N instead.

len morgan
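The replacement len suggests can be scripted; a minimal Python sketch follows. The tab delimiter and the exact-field match are assumptions about the export format, not details from the original post:

```python
# Rewrite a tab-delimited text file so that 00/00/00 "dates" become \N,
# which COPY reads as NULL.
def fix_null_dates(line, delim="\t"):
    fields = line.rstrip("\n").split(delim)
    fields = [r"\N" if f == "00/00/00" else f for f in fields]
    return delim.join(fields) + "\n"

# usage (file names are illustrative):
# with open("export.txt") as src, open("fixed.txt", "w") as dst:
#     for line in src:
#         dst.write(fix_null_dates(line))
```

Matching whole fields only (rather than a blind string replace) avoids touching any non-date column that happens to contain the same digits.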
[GENERAL] BIGINT datatype and Indexes Failure
Hi to all!

Is it possible to define indexes on a column with the BIGINT datatype? See this example:

testdb=# create table a (col1 integer not null primary key);
NOTICE: CREATE TABLE/PRIMARY KEY will create implicit index 'a_pkey' for table 'a'
CREATE
testdb=# create table b (col1 bigint not null primary key);
NOTICE: CREATE TABLE/PRIMARY KEY will create implicit index 'b_pkey' for table 'b'
CREATE
testdb=# explain select * from a where col1=123;
NOTICE: QUERY PLAN:

Index Scan using a_pkey on a (cost=0.00..8.14 rows=10 width=4)

EXPLAIN
testdb=# explain select * from b where col1=123;
NOTICE: QUERY PLAN:

Seq Scan on b (cost=0.00..22.50 rows=10 width=8)

On table a (INTEGER datatype) the search is done using the index. On table b (BIGINT datatype) the search is always done with a seq scan.

Is it a bug?

I use PostgreSQL 7.1.2 on Red Hat 7.1. All the regression tests completed correctly during installation.

Please let me know as soon as possible...

Regards,

Eng. Denis Gasparin: [EMAIL PROTECTED]
---
Programmer & System Administrator - Edistar srl
Re: [GENERAL] problems transfering databases
Miroslav Koncar <[EMAIL PROTECTED]> writes:
> The only thing I can think of is the PostgreSQL versions. My colleague is
> using 7.0.3, and I use 7.1.2. Comments?

That wouldn't produce a carriage-return problem.

It's possible that we're barking up the wrong tree, and that the issue is not carriage returns but something else --- I just jumped to that conclusion based on the way you presented the error message. But maybe the error message was mangled because you were sloppy about cutting-and-pasting it into your email, or some such. Have you looked at the data to try to narrow down exactly which line is being rejected?

regards, tom lane
Re: [GENERAL] number of parameters to a stored procedure
"Steven D. Arnold" <[EMAIL PROTECTED]> writes:
> It is my understanding that 16 parameters are allowed in a stored
> procedure. If this is correct (please advise if not):

See INDEX_MAX_KEYS/FUNC_MAX_ARGS in src/include/config.h (or config.h.in if you haven't yet run configure). There are no plans to change the default value, but feel free to build yourself an installation with a larger value.

regards, tom lane
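For orientation, the relevant lines in src/include/config.h look roughly like this in the 7.1-era sources (a sketch from memory, not a verbatim copy; check your own tree before editing):

```c
/* Sketch of the 7.1-era defaults in src/include/config.h.
 * Both limits move together; raising them means recompiling
 * the server (re-running the regression tests afterwards is
 * a good idea). */
#define INDEX_MAX_KEYS  16
#define FUNC_MAX_ARGS   INDEX_MAX_KEYS
```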
Re: [GENERAL] BIGINT datatype and Indexes Failure
I searched the archive and found that I have to specify the type of the column at the end of the query... so the new query is:

explain select * from b where col1=123::int8;

This way, the explain output is correct. The e-mail I found in the archive says that the problem will be solved in some future release.

At this point, is it safe to use the BIGINT datatype and indexes on those fields?

Thanks for your answers,

Regards,

Eng. Denis Gasparin: [EMAIL PROTECTED]
---
Programmer & System Administrator - Edistar srl

At 14.57 24/08/01, Denis Gasparin wrote:
>Hi to all!
> Is it possible to define indexes on a column with BIGINT
> datatype? See this example:
>
>testdb=# create table a (col1 integer not null primary key);
>NOTICE: CREATE TABLE/PRIMARY KEY will create implicit index 'a_pkey' for
>table 'a'
>CREATE
>testdb=# create table b (col1 bigint not null primary key);
>NOTICE: CREATE TABLE/PRIMARY KEY will create implicit index 'b_pkey' for
>table 'b'
>CREATE
>testdb=# explain select * from a where col1=123;
>NOTICE: QUERY PLAN:
>
>Index Scan using a_pkey on a (cost=0.00..8.14 rows=10 width=4)
>
>EXPLAIN
>testdb=# explain select * from b where col1=123;
>NOTICE: QUERY PLAN:
>
>Seq Scan on b (cost=0.00..22.50 rows=10 width=8)
>
>On table a (INTEGER datatype) the search is done using the index.
>Instead on table b (BIGINT datatype) the search is always done using the
>seq scan.
>
>Is it a bug?
>
>I use Postgresql 7.1.2 on RedHat 7.1.
>
>Regards,
>
>Eng. Denis Gasparin: [EMAIL PROTECTED]
>---
>Programmer & System Administrator - Edistar srl
Re: [GENERAL] BIGINT datatype and Indexes Failure
Denis Gasparin <[EMAIL PROTECTED]> writes:
> On table a (INTEGER datatype) the search is done using the index.
> Instead on table b (BIGINT datatype) the search is always done using the
> seq scan.

Try

select * from b where col1 = 123::int8;

The query planner is not presently very smart about cross-datatype comparisons (int8 vs int4). We have a TODO item to fix this...

regards, tom lane
[GENERAL] libpq
Hi!

Compiling with the following command and then executing result.cgi returns the error:

./result.cgi: can't load library 'libpq.so.2'

gcc -lpq -I/usr/include/mysql -I/usr/local/pgsql/include -L/usr/local/pgsql/lib -L/usr/lib/mysql program.c -o result.cgi -Wall -DUNIX -lmysqlclient -lm

All the files are in their places. I use PostgreSQL 7.1.3.

What am I doing wrong?

Thanx

Steve
[GENERAL] Error Codes, JDBC, SQLExceptions
Hi,

I'm accessing a PostgreSQL db through JDBC, and I would like to make use of the getErrorCode method in the SQLException class as in java.sql.

So far I have been able to retrieve exception messages, but not the error codes, in case of an SQL exception.

I would like to know if PostgreSQL propagates any error codes at all, since I found a post earlier on this list saying it was not implemented, but that post didn't refer to any specific version. If it is implemented, could someone also point out to me where these codes are defined?

I could of course base my exception handling on the messages, but I'd rather use codes.

thanks in advance,
Wieger
--
Wieger Uffink
tel: +31 20 428 6868
fax: +31 20 470 6905
web: http://www.usmedia.nl
Re: [GENERAL] valid NULL DATE value
Peter Eisentraut, Oliver Elphick, len morgan,

Thank you. \N did the trick with pgaccess.

I have just cleaned up over 15000 database records in BBEdit (the data was on a Mac server) and am now trying to remember what I did last time to get my accents right... When will I ever learn to take notes...

Still on schedule with this project

thanks again

Tony
--
RedHat Linux on Sony Vaio C1XD/S http://www.animaproductions.com/linux2.html
Macromedia UltraDev with PostgreSQL http://www.animaproductions.com/ultra.html
Re: [GENERAL] store in bytea
But I'll have to add double slashes and, even more important, I will have to unescape all binary data going out of the server, which in one of my applications means all of the images.

Won't this be much heavier than the way I have used until now with MySQL (blob)? addslashes on insert and an ordinary select for output.

Does anyone here save their website images in the DB and retrieve them for their web pages?

> "Ben-Nes Michael" <[EMAIL PROTECTED]> writes:
> > On the theoretical issue, can I use TEXT field to store binary ?
>
> TEXT will not handle null (zero) bytes. If you are using a multibyte
> character set, it will likely also do the wrong thing with byte
> sequences that are illegal or incomplete multibyte characters.
>
> Use BYTEA if you want to store arbitrary byte sequences --- that's what
> it's for.
>
> regards, tom lane
Re: [GENERAL]
Denis Gasparin <[EMAIL PROTECTED]> writes:
> Is it possible that the query planner thinks that is best a sequential scan
> when an index on the table is present?

Possibly. It all depends on the statistics. You have not shown us the EXPLAIN results obtained after doing VACUUM ANALYZE...

regards, tom lane
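In psql, what Tom is asking for looks like this (table, index and predicate names taken from the earlier messages in the thread):

```sql
-- Refresh the planner's statistics, then re-check the plan:
VACUUM ANALYZE new_table;
EXPLAIN SELECT * FROM new_table WHERE col1 = 'value';
-- With fresh statistics on a reasonably large table, the plan would
-- normally show "Index Scan using table_idx on new_table" rather
-- than a Seq Scan.
```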
[GENERAL] Re: problems transfering databases
I very much doubt that the data wouldn't "dump right". The dump will include any data that is in the table, including CRs if they are in there.

In the current case, the CRs were not already in there, because PostgreSQL would not have recognized that field as null ("\N") if they were. Any transfer of a file via a Windows machine is apt to cause line endings to be silently changed, which would account for this case. The real danger is where the rightmost field is a text type, because the CRs would be silently imported into the new installation, where they could cause real problems with any app that tries to use those fields, and a headache to debug.

- Original Message -
From: Rob Arnold
To: Jeff Eckermann ; Miroslav Koncar
Cc: [EMAIL PROTECTED]
Sent: Thursday, August 23, 2001 6:32 PM
Subject: Re: problems transfering databases

I've had this problem before. The data is stored correctly in PG, it just doesn't dump right (the CR thing I guess. I didn't know the reason at the time). I think this was on 7.0.x. I worked around it by doing a pg_dump that dumped the table as "proper inserts". That makes the load MUCH slower, but it works around the issue. Now that I know what the cause might have been, I guess I better go look at my data . . .

--rob

- Original Message -
From: Jeff Eckermann
To: Miroslav Koncar
Cc: [EMAIL PROTECTED]
Sent: Thursday, August 23, 2001 10:59 AM
Subject: Re: problems transfering databases

Looks like you already have the answer. The error message is a dead giveaway. Some field, in this case having a null value (represented as '\N'), contains a carriage return character, like: '\NCR'. PostgreSQL cannot make sense of this combination, so it returns an error. The CR contained in the error message causes the output to return to the start of the line, outputting the rest of the message from the beginning: see how the final " ' " overwrites the "p" in "psql". Edit out the CR characters, and the problem will go away.
This is what Tom already said; I am expanding on it.

- Original Message -
From: Miroslav Koncar
To: Tom Lane
Cc: [EMAIL PROTECTED]
Sent: Thursday, August 23, 2001 3:18 AM
Subject: Re: [GENERAL] problems transfering databases

Hello Tom,

thanks for the answer. This is the procedure we used to transfer the data from one machine to another. My colleague used the pg_dumpall command, in the following way:

pg_dumpall > backups/2001-06-04db.out

Since it is a simple test database, the 2001-06-04db.out file was around 40kB, so he sent it to me via email. I tried to load the data from the file in two different ways:

psql -f 2001-08-21db.out
cat 2001-06-04db.out | psql

but got the error as stated before. I've tried to do this on Solaris and Linux (I've set PostgreSQL up on my Linux box for testing purposes), but the same error pops up.

Could it have something to do with the versions of the databases, and how they were installed? The version I use is psql (PostgreSQL) 7.1.2 (the output of the psql -V command), but I'm not sure what version my colleague used (I'll find that out, but I think it is 7.x.x).

Regards,
Miroslav

Tom Lane wrote:

Miroslav Koncar <[EMAIL PROTECTED]> writes:
> 'sql:2001-08-21db.out:5: ERROR: copy: line 1, Bad abstime external
> representation '\N
> psql:2001-08-21db.out:5: PQendcopy: resetting connection

Given the weird line-wrapping of the message, I'm going to bet that the problem is that newlines in the dump file have gotten converted to DOS format (ie, \n became \r\n), and COPY is unhappy because it sees the \r as part of the data. When it echoes the data it didn't like, the presence of the \r messes up the format of the error message.

Not sure *how* that happened in a Linux-to-Solaris transfer, though; what tool did you use to transfer the dump file?

regards, tom lane

--
Miroslav Koncar
Software Engineer
Ericsson Nikola Tesla
ETK/D/R
Tel: +385 1 365 3479
Fax: +385 1 365 3548
mailto:[EMAIL PROTECTED]
Re: [GENERAL] problems transfering databases
I think we may be using too many words in this discussion. Perhaps there is a message in this thread that I have not seen.

Miroslav, have you tested for carriage returns in your data, and if so, what did you find? If that is the problem, the carriage returns are easily edited out, and speculation about the cause may use more time than it is worth...

If that is not the problem, we definitely have something weird. The original error message shows the problem occurring at the first line of your data. Perhaps you could pipe the first few lines through some utility that shows nonprinting characters (like "vis -lw"), and see what that shows?

- Original Message -
From: "Tom Lane" <[EMAIL PROTECTED]>
To: "Miroslav Koncar" <[EMAIL PROTECTED]>
Cc: <[EMAIL PROTECTED]>
Sent: Friday, August 24, 2001 8:13 AM
Subject: Re: [GENERAL] problems transfering databases

> Miroslav Koncar <[EMAIL PROTECTED]> writes:
> > The only thing I can think of is the PostgreSQL versions. My colleague is
> > using 7.0.3, and I use 7.1.2. Comments?
>
> That wouldn't produce a carriage-return problem.
>
> It's possible that we're barking up the wrong tree, and that the issue
> is not carriage returns but something else --- I just jumped to that
> conclusion based on the way you presented the error message. But maybe
> the error message was mangled because you were sloppy about cutting-and-
> pasting it into your email, or some such. Have you looked at the data
> to try to narrow down exactly which line is being rejected?
>
> regards, tom lane
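The test Jeff asks for can be done in a few lines; a minimal Python sketch (file names are illustrative; `tr -d '\r'` or dos2unix would do the same cleanup):

```python
# Count DOS line endings in a dump and produce a cleaned copy.
def strip_crs(data):
    return data.replace(b"\r\n", b"\n")

# usage:
# data = open("2001-08-21db.out", "rb").read()
# print(data.count(b"\r\n"), "DOS line endings found")
# open("clean.out", "wb").write(strip_crs(data))
```

Reading in binary mode matters: a text-mode read on some platforms would hide exactly the \r characters being hunted for.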
Re: [GENERAL] libpq
Stephan Bergmann writes:
> Compiling with the following command and then executing result.cgi
> returns the error:
>
> ./result.cgi: can't load library 'libpq.so.2'
>
> gcc -lpq -I/usr/include/mysql -I/usr/local/pgsql/include
> -L/usr/local/pgsql/lib -L/usr/lib/mysql program.c
> -o result.cgi -Wall -DUNIX -lmysqlclient -lm
>
> All the files are in their places. I use PostgreSQL 7.1.3.
>
> What am I doing wrong?

You didn't mention what platform you're on.

--
Peter Eisentraut [EMAIL PROTECTED] http://funkturm.homeip.net/~peter
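A common cause on Linux (an assumption, since the platform was not stated): the build finds libpq via -L at link time, but at run time the dynamic linker does not search /usr/local/pgsql/lib. Two usual fixes, sketched:

```shell
# Option 1: tell the runtime linker where libpq.so.2 lives
export LD_LIBRARY_PATH=/usr/local/pgsql/lib:$LD_LIBRARY_PATH

# Option 2 (as root, permanent): register the directory system-wide
#   echo /usr/local/pgsql/lib >> /etc/ld.so.conf
#   ldconfig
```

For a CGI program, note that the web server's environment, not your login shell's, is what matters, so the system-wide ldconfig route is usually the more reliable one.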
[GENERAL] Re: problems transfering databases
I never transferred the file anywhere; it was all done under Linux. The bottom line is that a pg_dump followed by a psql import failed exactly as described in this thread. When I did a dump with "proper inserts" it worked fine. As I said, I don't know what was causing the problem, just that the CR thing sounded plausible. I only do dumps with "proper inserts" now, since I know that this problem can occur and that "proper inserts" is a workaround. I'm just confirming the behavior that someone else reported.

--rob

Jeff Eckermann <[EMAIL PROTECTED]> said:

> I very much doubt that the data wouldn't "dump right". The dump will include any data that is in the table, including CRs if they are in there.
> In the current case, the CRs were not already in there, because PostgreSQL would not have recognized that field as null ("\N") if they were.
> Any transfer of a file via a Windows machine is apt to cause line endings to be silently changed, which would account for this case. The real danger is where the rightmost field is a text type, because the CRs would be silently imported into the new installation, where they could cause real problems with any app that tries to use those fields, and a headache to debug.
> - Original Message -
> From: Rob Arnold
> To: Jeff Eckermann ; Miroslav Koncar
> Cc: [EMAIL PROTECTED]
> Sent: Thursday, August 23, 2001 6:32 PM
> Subject: Re: problems transfering databases
>
> I've had this problem before. The data is stored correctly in PG, it just doesn't dump right (the CR thing I guess. I didn't know the reason at the time). I think this was on 7.0.x. I worked around it by doing a pg_dump that dumped the table as "proper inserts". That makes the load MUCH slower, but it works around the issue. Now that I know what the cause might have been, I guess I better go look at my data . . .
>
> --rob
> - Original Message -
> From: Jeff Eckermann
> To: Miroslav Koncar
> Cc: [EMAIL PROTECTED]
> Sent: Thursday, August 23, 2001 10:59 AM
> Subject: Re: problems transfering databases
>
> Looks like you already have the answer. The error message is a dead giveaway. Some field, in this case having a null value (represented as '\N'), contains a carriage return character, like: '\NCR'. PostgreSQL cannot make sense of this combination, so it returns an error. The CR contained in the error message causes the output to return to the start of the line, outputting the rest of the message from the beginning: see how the final " ' " overwrites the "p" in "psql".
> Edit out the CR characters, and the problem will go away.
> This is what Tom already said; I am expanding on it.
> - Original Message -
> From: Miroslav Koncar
> To: Tom Lane
> Cc: [EMAIL PROTECTED]
> Sent: Thursday, August 23, 2001 3:18 AM
> Subject: Re: [GENERAL] problems transfering databases
>
> Hello Tom,
> thanks for the answer. This is the procedure we used to transfer the data from one machine to another.
> My colleague used the pg_dumpall command, in the following way:
>
> pg_dumpall > backups/2001-06-04db.out
>
> Since it is a simple test database, the 2001-06-04db.out file was around 40kB, so he sent it to me via email.
> I tried to load the data from the file in two different ways:
>
> psql -f 2001-08-21db.out
> cat 2001-06-04db.out | psql
>
> but got the error as stated before. I've tried to do this on Solaris and Linux (I've set PostgreSQL up on my Linux box for testing purposes), but the same error pops up.
>
> Could it have something to do with the versions of the databases, and how they were installed? The version I use is
> psql (PostgreSQL) 7.1.2 (the output of the psql -V command), but I'm not sure what version my colleague used (I'll find that out, but I think it is 7.x.x).
>
> Regards,
> Miroslav
>
> Tom Lane wrote:
>
> Miroslav Koncar <[EMAIL PROTECTED]> writes:
> > 'sql:2001-08-21db.out:5: ERROR: copy: line 1, Bad abstime external
> > representation '\N
> > psql:2001-08-21db.out:5: PQendcopy: resetting connection
> Given the weird line-wrapping of the message, I'm going to bet that the
> problem is that newlines in the dump file have gotten converted to DOS
> format (ie, \n became \r\n), and COPY is unhappy because it sees the \r
> as part of the data. When it echoes the data it didn't like, the
> presence of the \r messes up the format of the error message.
>
> Not sure *how* that happened in a Linux-to-Solaris transfer, though;
> what tool did you use to transfer the dump file?
>
> regards, tom lane
>
> --
> Miroslav Koncar
> Software Engineer
> Ericsson Nikola Tesla
> ETK/D/R
Re: [GENERAL] valid NULL DATE value
Tony Grant wrote:
>Hello,
>
>I am importing via pgaccess a text file from another non-postgres
>database and the NULL DATE values are written like 00/00/00.
>
>What I have tried is replacing 00/00/00 by 9/9/1999 and setting the
>style to european and I am getting 'can't parse /9/1999' errors.
>
>How do I go about importing DATE?

With INSERT, leave the field out of the list or insert NULL.

With COPY, specify NULL as \N. (You can redefine that - see COPY syntax.)

If you have a flat file with 00/00/00 in it, you can modify it with sed:

sed -e 's|00/00/00|\\N|g' flatfile > newflatfile

--
Oliver Elphick [EMAIL PROTECTED]
Isle of Wight http://www.lfix.co.uk/oliver
PGP: 1024R/32B8FAA1: 97 EA 1D 47 72 3F 28 47 6B 7E 39 CC 56 E4 C1 47
GPG: 1024D/3E1D0C1C: CA12 09E0 E8D5 8870 5839 932A 614D 4C34 3E1D 0C1C

"I saw in the night visions, and, behold, one like the Son of man came with the clouds of heaven, and came to the Ancient of days, and they brought him near before him. And there was given him dominion, and glory, and a kingdom, that all people, nations, and languages, should serve him; his dominion is an everlasting dominion, which shall not pass away, and his kingdom that which shall not be destroyed." Daniel 7:13,14
Re: [GENERAL] Error Codes, JDBC, SQLExceptions
Wieger Uffink writes:
> I would like to know if postgreSQL propegates any errorcodes at all,

No.

--
Peter Eisentraut [EMAIL PROTECTED] http://funkturm.homeip.net/~peter
Re: [GENERAL] Error Codes, JDBC, SQLExceptions
hehe, thank you for your elaborate though very clear reply :)

thanks really

Wieger

Peter Eisentraut wrote:
>
> Wieger Uffink writes:
>
> > I would like to know if postgreSQL propegates any errorcodes at all,
>
> No.
>
> --
> Peter Eisentraut [EMAIL PROTECTED] http://funkturm.homeip.net/~peter

--
Wieger Uffink
tel: +31 20 428 6868
fax: +31 20 470 6905
web: http://www.usmedia.nl
Re: [GENERAL] Error Codes, JDBC, SQLExceptions
Wieger,

The server does not currently have a concept of error codes (it is on the TODO list). Therefore the JDBC driver has no error code to report, since it doesn't get one from the backend. When the server supports error codes, the JDBC driver will as well.

thanks,
--Barry

Wieger Uffink wrote:
> Hi,
>
> I'm accessing a PostgreSQL db through JDBC, and I would like to make use
> of the getErrorCode method in the SQLException class as in java.sql.
>
> So far I have been able to retrieve exception messages but not the error
> codes in case of an SQL exception.
>
> I would like to know if PostgreSQL propagates any error codes at all,
> since I found a post earlier on this list saying it was not
> implemented, but that post didn't refer to any specific version. If it is
> implemented, could someone also point out to me where these codes are
> defined?
>
> I could of course base my exception handling on the messages but I'd
> rather use codes.
>
> thanks in advance,
> Wieger
Re: [GENERAL] embedded SQL cursos declare fails
OK, I have applied the following patch. Interestingly, the SGML code marked this block as outdated, but didn't remove it for some reason.

> On Tue, 19 Jun 2001, Michael Meskes wrote:
>
> > On Mon, Jun 18, 2001 at 03:59:56PM -0400, Thalis A. Kalfigopoulos wrote:
> > > EXEC SQL DECLARE my_cursor CURSOR FOR SELECT a,b FROM lala WHERE a= :i;
> > > EXEC SQL FETCH FORWARD NEXT FROM my_cursor INTO :tmpa,:tmpb;
> >
> > Is there an EXEC SQL OPEN CURSOR my_cursor in between these lines?
> >
> > > I checked the .c file generated by ecpg and the cursor declaration is commented out :-(
> >
> > That's correct. It's placed at the OPEN spot.
>
> Correct, it works. But I was confused because the documentation (http://www.postgresql.org/idocs/index.php?ecpg-develop.html) mentions:
>
> Open cursor statement
> An open cursor statement looks like:
> exec sql open cursor;
> and is ignore and not copied from the output.
>
> This gave me the impression that no explicit OPEN is necessary to operate on the cursor (normal Pg behavior). If it's not just my misunderstanding, someone should rephrase this in the docs.
>
> cheers,
> thalis
>
> > Michael
> > --
> > Michael Meskes
> > [EMAIL PROTECTED]
> > Go SF 49ers! Go Rhein Fire!
> > Use Debian GNU/Linux! Use PostgreSQL!
--
Bruce Momjian | http://candle.pha.pa.us
[EMAIL PROTECTED] | (610) 853-3000
+ If your life is a hard drive,  | 830 Blythe Avenue
+ Christ can be your backup.     | Drexel Hill, Pennsylvania 19026

Index: doc/src/sgml/ecpg.sgml
===================================================================
RCS file: /home/projects/pgsql/cvsroot/pgsql/doc/src/sgml/ecpg.sgml,v
retrieving revision 1.18
diff -c -r1.18 ecpg.sgml
*** doc/src/sgml/ecpg.sgml 2000/12/22 21:51:57 1.18
--- doc/src/sgml/ecpg.sgml 2001/08/24 17:52:43
***************
*** 805,857 ****

[The removed hunk deletes the outdated "Open cursor statement", "Commit statement" and "Rollback statement" subsections; the SGML markup of the hunk did not survive the archive. The removed text read: "An open cursor statement looks like: exec sql open cursor; and is ignore and not copied from the output." / "A commit statement looks like exec sql commit; and is translated on the output to ECPGcommit(__LINE__);" / "A rollback statement looks like exec sql rollback; and is translated on the output to ECPGrollback(__LINE__);" followed by "Other statements".]

--- 805,810 ----
Re: [GENERAL] valid NULL DATE value
Oliver Elphick writes:
> Tony Grant wrote:
> >Hello,
> >
> >I am importing via pgaccess a text file from another non-postgres
> >database and the NULL DATE values are written like 00/00/00.
> >
> >What I have tried is replacing 00/00/00 by 9/9/1999 and setting the
> >style to european and I am getting 'can't parse /9/1999' errors.
> >
> >How do I go about importing DATE?
>
> With INSERT, leave the field out of the list or insert NULL.
>
> With COPY, specify NULL as \N. (You can redefine that - see COPY syntax.)

Hmm, you could use COPY ... WITH NULL AS '00/00/00', though that may spell trouble if any non-date fields usefully contain that value. (Probably unlikely.)

--
Peter Eisentraut [EMAIL PROTECTED] http://funkturm.homeip.net/~peter
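Spelled out, Peter's suggestion looks like this (table and file names are illustrative, not from the original posts):

```sql
-- Let COPY itself map the legacy token to NULL, so the flat file
-- needs no editing at all:
COPY imported_table FROM '/tmp/flatfile.txt' WITH NULL AS '00/00/00';
```

The caveat in the message above applies: every field equal to 00/00/00, in any column, becomes NULL, so this is only safe when no non-date column can legitimately hold that string.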
Re: [GENERAL] store in bytea
"Ben-Nes Michael" <[EMAIL PROTECTED]> writes:
> On the theoretical issue, can I use TEXT field to store binary ?

TEXT will not handle null (zero) bytes. If you are using a multibyte character set, it will likely also do the wrong thing with byte sequences that are illegal or incomplete multibyte characters.

Use BYTEA if you want to store arbitrary byte sequences --- that's what it's for.

regards, tom lane
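The escaping the earlier message worries about can be sketched in Python. This is a hedged sketch of the 7.x bytea escape format: the doubled backslashes assume the result is spliced into a single-quoted SQL literal, one level of escaping for the string parser and one for the bytea input routine; verify against your server version before relying on it:

```python
# Escape arbitrary bytes for a 7.x-era bytea literal: quotes,
# backslashes and non-printable bytes become \\ooo octal escapes.
def escape_bytea(data):
    out = []
    for b in data:
        if b in (0x27, 0x5C) or b < 0x20 or b > 0x7E:
            out.append("\\\\%03o" % b)  # octal escape, backslash doubled
        else:
            out.append(chr(b))          # printable ASCII passes through
    return "".join(out)

# usage (names are illustrative):
# sql = "INSERT INTO images (img) VALUES ('%s')" % escape_bytea(blob)
```

The reverse trip (unescaping on SELECT) is the per-image cost Michael is asking about; whether it is heavier than MySQL's addslashes approach depends mostly on image size and request volume.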