Re: [GENERAL] What are the consequences of a bad database design (never seen
Thanks guys for the advice. I think I will have to find some "political" approach in order to restructure the existing database, which I am not so good at, but it is worth trying. Note that even the code is bad (they are using PHP for a big application: no object-oriented design, a lot of code redundancy, ...). However, it seems difficult to fix the database bit by bit because, as far as I have seen, one or more primary table(s) are missing! So instead of using an ID, 3-4 fields are being rewritten in almost every table! So if I have to build the primary tables, I have to change all the other tables, replacing the combined fields with the corresponding ID... and there are many other modifications which could lead to eventual code modification, even if I change the views in order to mask the changes. (Thank God they are using views!) Anyway, it seems I have a major modification that will need time, and they are giving me time for adding modules, not time for fixing the existing code. So basically, what is happening due to the bad database and code design is this: writing a simple task becomes difficult and requires at least 4 times more time than it would with a good design. So development time is wasted, and data corrections are being done almost every day by the staff here... Thanks again. Jinane. ---(end of broadcast)--- TIP 3: if posting/reading through Usenet, please send an appropriate subscribe-nomail command to [EMAIL PROTECTED] so that your message can get through to the mailing list cleanly
Re: [GENERAL] PostGreSQL (7.3?) recovery, Mac OS X (10.3.8)
Alvaro Herrera <[EMAIL PROTECTED]> writes: > What version is this exactly? IIRC there was a version of 7.3 that > would refuse to start if the last XLog record fell at the edge of a > segment. I may be misremembering though (i.e. maybe it was one of the > 7.4 series), plus I can't find the relevant entry in the release notes. If I'm reading the CVS history correctly, the bug existed only in the 7.3.3 release; here's the CVS log entry for the fix: 2003-07-17 12:45 tgl * src/backend/access/transam/xlog.c (REL7_3_STABLE): Repair boundary-case bug introduced by patch of two months ago that fixed incorrect initial setting of StartUpID. The logic in XLogWrite() expects that Write->curridx is advanced to the next page as soon as LogwrtResult points to the end of the current page, but StartupXLOG() failed to make that happen when the old WAL ended exactly on a page boundary. Per trouble report from Hannu Krosing. and this seems to be what Bruce boiled it down to in the 7.3.4 release notes: * Prevent rare possibility of server startup failure (Tom) Personally I always look at the CVS history when searching for bug histories. cvs2cl.pl is a wonderful tool ... regards, tom lane ---(end of broadcast)--- TIP 3: if posting/reading through Usenet, please send an appropriate subscribe-nomail command to [EMAIL PROTECTED] so that your message can get through to the mailing list cleanly
Re: [GENERAL] PostGreSQL (7.3?) recovery, Mac OS X (10.3.8)
"Eric D. Nielsen" <[EMAIL PROTECTED]> writes: > PANIC: XLogWrite: write request 0/2364000 is past end of log 0/2364000 This is a known corner-case bug in some 7.3 releases. If you care about getting the data out of it, you can update-in-place to the latest 7.3 release. If not, well, 7.3 was a long time ago ... regards, tom lane ---(end of broadcast)--- TIP 4: Don't 'kill -9' the postmaster
Re: [GENERAL] PostGreSQL (7.3?) recovery, Mac OS X (10.3.8)
On Tue, Apr 12, 2005 at 08:30:39PM -0400, Eric D. Nielsen wrote: > LOG: ReadRecord: unexpected pageaddr 0/364000 in log file 0, segment > 2, offset 3555328 > LOG: redo is not required > PANIC: XLogWrite: write request 0/2364000 is past end of log 0/2364000 > LOG: startup process (pid 784) was terminated by signal 6 > LOG: aborting startup due to startup process failure > > The startup fails. The timestamp on the first log appears to be during > the boot process of the computer. > > PG_VERSION reports 7.3. This is just a "play" database that I use for > random testing. I believe its the version that came with the > PowerBook, unless its been updated by "Software Update", which I would > doubt. I believe I last used the database back around January. What version is this exactly? IIRC there was a version of 7.3 that would refuse to start if the last XLog record fell at the edge of a segment. I may be misremembering though (i.e. maybe it was one of the 7.4 series), plus I can't find the relevant entry in the release notes. -- Alvaro Herrera (<[EMAIL PROTECTED]>) "Para tener más hay que desear menos" ---(end of broadcast)--- TIP 7: don't forget to increase your free space map settings
[GENERAL] PostGreSQL (7.3?) recovery, Mac OS X (10.3.8)
After an extended vacation from development activities on my computer, I tried to get back to work, but the PostgreSQL backend was no longer running. (Normally it's part of my startup scripts.) I tried su'ing over to postgres and running pg_ctl start. No luck; it seems the path was lost. Typing the full path /usr/local/bin/pg_ctl start complains about missing PGDATA. Looks like all the environment variables are gone. (I had read that 10.3 had some issues with shared memory, but this seems unusual?) Providing the full path to the data directory:

/usr/local/bin/pg_ctl -D /usr/local/pgsql/data start
LOG: database system shutdown was interrupted at 2005-04-12 20:11:05 EDT
LOG: checkpoint record is at 0/2363FC0
LOG: redo record is at 0/2363FC0; undo record is at 0/0; shutdown TRUE
LOG: next transaction id: 38330; next oid: 36038
LOG: database system was not properly shut down; automatic recovery in progress
LOG: ReadRecord: unexpected pageaddr 0/364000 in log file 0, segment 2, offset 3555328
LOG: redo is not required
PANIC: XLogWrite: write request 0/2364000 is past end of log 0/2364000
LOG: startup process (pid 784) was terminated by signal 6
LOG: aborting startup due to startup process failure

The startup fails. The timestamp on the first log entry appears to be from during the boot process of the computer. PG_VERSION reports 7.3. This is just a "play" database that I use for random testing. I believe it's the version that came with the PowerBook, unless it's been updated by "Software Update", which I doubt. I believe I last used the database back around January. I believe everything of "use" was already migrated to a beta server; however, I would like to attempt any sort of recovery first, if possible. What should I do next? Eric
Re: [GENERAL] What are the consequences of a bad database design
But the people I am working with are not considering restructuring the database. They are even thinking of expanding it by adding new modules. Please can someone advise me, or tell me what to do, and what the consequences may be? My advice is to not go to them with the "we need to totally re-engineer the schema for the next 6 months so that we have the same functionality we have now" approach. Instead, figure out what the next module they want to add is and what parts of the system it will touch, and then see about re-engineering those particular parts of the schema. The bit-by-bit approach should get them to the same end game without stalling development for the next few months. Make sure to make use of views and stored procedures to help keep backwards compatibility where you can't convince people to do code modifications. HTH. I second this advice. Remember that (to business people) bad database design is not a critical problem unless it interferes with critical business goals. What I would do is to lay low and do what they tell you for several months, making your own observations about how the database actually works and keeping track of potential data design fixes (i.e., combine these tables, replace this column with a foreign key to that one, etc). If the database design is as bad as you say, before long you will come to a situation that would be best handled with normalized data. Then bring it up, saying, "We could restructure the data to look like this, so that X query could be computed with a single select, with minimal rewriting of module A and a few queries in module B." At that point, your employers will hopefully say, "Oh, yes, that's a very clever solution." Then you implement your fix and take the credit. Ideally, over time, you will build up a reputation as a problem solver and gain more responsibility for database design, and thus have more ability to fix the underlying problems.
Regards, Paul Tillotson
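Both replies lean on views to keep old queries working while the schema is normalized underneath. A minimal sketch of that technique; the table and column names here are invented for illustration, not taken from the thread:

```sql
-- Normalized tables replace the 3-4 combined fields that were being
-- repeated in every table:
CREATE TABLE customer (
    customer_id serial PRIMARY KEY,
    name  text,
    city  text,
    phone text
);
CREATE TABLE orders (
    order_id    serial PRIMARY KEY,
    customer_id integer REFERENCES customer (customer_id),
    amount      numeric
);

-- A view preserves the old flat shape, so legacy queries that expect
-- the combined fields keep working unchanged:
CREATE VIEW orders_legacy AS
    SELECT o.order_id, c.name, c.city, c.phone, o.amount
    FROM orders o JOIN customer c USING (customer_id);
```

Old code can select from orders_legacy while new modules use the normalized tables directly.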
Re: [GENERAL] psql vs perl prepared inserts
Matt Van Mater <[EMAIL PROTECTED]> writes: > I've been experimenting with loading a large amount of data into a > fairly simple database using both psql and perl prepared statements. > Unfortunately I'm seeing no appreciable differences between the two > methods, where I was under the impression that prepared statements > should be much faster (in my case, they are slightly slower). They should be faster ... if the client library is really using server-side prepared statements, and not trying to fake it. Up till 7.4 came out there wasn't any very useful way to use server-side prepared statements in a driver, and I think only the very latest version of DBD::Pg knows anything about it. See this thread for some recent discussion: http://archives.postgresql.org/pgsql-interfaces/2005-04/msg00029.php regards, tom lane ---(end of broadcast)--- TIP 6: Have you searched our list archives? http://archives.postgresql.org
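The server-side mechanism Tom describes can also be exercised directly in SQL with PREPARE/EXECUTE, which is essentially what a driver does under the hood when it really uses server-side prepared statements. A sketch against the timestamps table from the original post; the parameter types are guesses, not taken from the thread:

```sql
-- Parse and plan the INSERT once on the server...
PREPARE ins_ts (integer, text, integer, timestamptz) AS
    INSERT INTO timestamps (scan_id, hostip_or_nul, start_or_end, scan_time)
    VALUES ($1, $2, $3, $4);

-- ...then each execution ships only the parameter values:
EXECUTE ins_ts (1, '10.0.0.1', 0, now());

-- The plan lives until the session ends or it is freed explicitly:
DEALLOCATE ins_ts;
```

The saving is in per-statement parse and plan time, which may be part of why the original poster saw little gain: a single-table INSERT is about the cheapest statement there is to plan.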
[GENERAL] psql vs perl prepared inserts
I've been experimenting with loading a large amount of data into a fairly simple database using both psql and Perl prepared statements. Unfortunately I'm seeing no appreciable differences between the two methods, where I was under the impression that prepared statements should be much faster (in my case, they are slightly slower). I have included a pseudocode/subset of the Perl code I use below. You can see the prepare statement outside the loop and the execute statement inside the loop. Alternatively, you can see that I write every INSERT statement to a text file, which I then load by using `psql dbname -f bulksql_load.sql`. Normally I only have either the prepare or the print-to-file in the loop, but I've included both to make the pseudocode smaller. Using a simple `time` command from my system, it looks like the execution time for loading all the data in both scenarios is about 50 seconds. FYI, the infile has 18k lines, and after parsing and the db enforcing uniqueness there are 15.5k rows in the results table of the db. This works out to ~300 inserts per second on pgsql 7.3.2 with fsync turned off. I think that is decent performance for this old box; I'm just confused as to why the prepared statements don't seem to give any speed boost as advertised. Could the fact that many of my inserts have 15 columns slow down the prepared statements to the point where they're no longer useful as a speed enhancement? Or perhaps it's because I'm explicitly inserting each field/value pair, even if many of them are null (which I think is generally considered a good habit). Any other suggestions you might have would be welcome. Please forgive me if I should have posted this to pgsql-performance or some perl list, but I think the comparison with psql makes it relevant.
Hopefully this won't be too messy:

#!/usr/bin/perl
use DBI;

open (IN, "scan.nbe");
open (OUT, ">bulksql_load.sql");

$dbh = DBI->connect("dbi:Pg:dbname=nessus");
$sth_res = $dbh->prepare("INSERT INTO results (scan_id,subnet,host,service,plugin_id,type,description,see_also,solution,risk_factor,cve_id,bid_id,port,proto,service_name) VALUES (?,?,?,?,?,?,?,?,?,?,?,?,?,?,?)");
$sth_tim = $dbh->prepare("INSERT INTO timestamps (scan_id,hostip_or_nul,start_or_end,scan_time) VALUES (?,?,?,?)");

while (<IN>) {
    if (/^timestamps/) {
        parse_ts();
        $sth_tim->execute($scan_id, $hostip_or_nul, $start_or_end, $scan_time);
        print OUT "INSERT INTO timestamps (scan_id,hostip_or_nul,start_or_end,scan_time) VALUES ($scan_id,$hostip_or_nul,$start_or_end,$scan_time)\n";
    } elsif (/^results/) {
        parse_res();
        $sth_res->execute($scan_id, $subnet, $host, $service, $plugin_id, $type, $description, $see_also, $solution, $risk_factor, $cve_id, $bid_id, $port, $proto, $service_name);
        print OUT "INSERT INTO results (scan_id,subnet,host,service,plugin_id,type,description,see_also,solution,risk_factor,cve_id,bid_id,port,proto,service_name) VALUES ($scan_id,$subnet,$host,$service,$plugin_id,$type,$description,$see_also,$solution,$risk_factor,$cve_id,$bid_id,$port,$proto,$service_name)\n";
    }
}
Re: [GENERAL] Composite type versus Domain constraints.
James Robinson <[EMAIL PROTECTED]> writes: > insert into simple_table values (null, '(43)'); -- GRR works!!! It'll > let any smallint in. What happened to the constraint? The composite-type input routine doesn't check any constraints ... and that includes domains. You can make it work if you don't use a composite literal: regression=# insert into simple_table values (null, row(43)); ERROR: value for domain "simple" violates check constraint "limits" Obviously this whole area needs improvement. Domain constraints in particular fail to be enforced in many places where they should be, such as plpgsql variables. regards, tom lane
[GENERAL] Composite type versus Domain constraints.
I'm trying to experiment with domains and composite types under 8.0.2. It seems that domain constraints don't fire when the domain is embedded within a composite type:

create domain simple as smallint default 0 constraint limits check (VALUE IN (0,1,2,3));
create type comp_simple as (simp_val simple);
create table simple_table (s1 simple, s2 comp_simple);

insert into simple_table values (2, null); -- works fine: 2 is a legal value
insert into simple_table values (43, null); -- errors out correctly: 43 fails the constraint test
insert into simple_table values (null, '(43)'); -- GRR, works!!! It'll let any smallint in. What happened to the constraint?

social=# select * from simple_table;
 s1 |  s2
----+------
  2 |
    | (43)
(2 rows)

The 8.0.2 docs for composite types (doc/html/rowtypes.html) mention using domain types as members of composite types to actually gain constraint-testing capability within composite types used outside of tables. We've also tried inverting the relationship between the domain and the composite type:

social=# create type simple as (val int2);
CREATE TYPE
social=# create domain simple_checked as simple default '(0)' constraint limits check ((VALUE).val IN (0,1,2,3));
ERROR: "simple" is not a valid base type for a domain
social=#

Is there any way I can get a composite type with constraint tests? I need an additional type with a separate oid for object/relational mapping. James Robinson Socialserve.com
Re: [GENERAL] client interfaces
On Tue, Apr 12, 2005 at 10:51:09 -0400, David Parker <[EMAIL PROTECTED]> wrote: > Is there anything like a client library for postgres that does not use > tcp/ip as the protocol? As part of the performance tuning of our > application, the question was posed to me whether there was a more > "direct" sort of API that would not require going through the tcp/ip > stack. I assume the answer to this is "no", and I see no hint of > anything in the source code, but I thought I would ask the question, > just to be sure. You can use domain sockets when on the same host as the server. ---(end of broadcast)--- TIP 2: you can get off all lists at once with the unregister command (send "unregister YourEmailAddressHere" to [EMAIL PROTECTED])
Re: [GENERAL] pg 8.0.1-r3 killls pgadmin3 and phppgadmin
You need a newer pgadmin --- pg_database.datpath went away in 8.0. I'm installing the new version. Thanks. ---(end of broadcast)--- TIP 8: explain analyze is your friend
Re: [GENERAL] pg 8.0.1-r3 killls pgadmin3 and phppgadmin
PFC <[EMAIL PROTECTED]> writes: > ERREUR: 42703: la colonne <> n'existe pas > EMPLACEMENT : transformColumnRef, parse_expr.c:1099 > Do you know if this is normal, should this column exist, is it a > problem > with pgadmin ? You need a newer pgadmin --- pg_database.datpath went away in 8.0. regards, tom lane ---(end of broadcast)--- TIP 3: if posting/reading through Usenet, please send an appropriate subscribe-nomail command to [EMAIL PROTECTED] so that your message can get through to the mailing list cleanly
Re: [GENERAL] pg 8.0.1-r3 killls pgadmin3 and phppgadmin
pgadmin3 does this (from pg's log with level set at debug5): INSTRUCTION : SELECT db.oid, datname, datpath, datallowconn, datconfig, datacl, pg_encoding_to_char(encoding) AS serverencoding, pg_get_userbyid(datdba) AS datowner,has_database_privilege(db.oid, 'CREATE') as cancreate FROM pg_database db ORDER BY datname ERREUR: 42703: la colonne <> n'existe pas EMPLACEMENT : transformColumnRef, parse_expr.c:1099 Do you know if this is normal, should this column exist, is it a problem with pgadmin ? Thanks ! On Mon, 11 Apr 2005 18:42:29 +0200, Tom Lane <[EMAIL PROTECTED]> wrote: PFC <[EMAIL PROTECTED]> writes: I have no idea what to type in gbd to get the trace, though What I usually do is - start a psql session - in another window, find out the PID of the backend attached to the psql session, and do $ gdb /path/to/postgres backend_PID ... gdb> b errfinish gdb> cont - go back to psql session and issue problem command - when gdb stops execution, do gdb> bt ... useful printout is here ... gdb> quit sure you want to exit? y regards, tom lane ---(end of broadcast)--- TIP 5: Have you checked our extensive FAQ? http://www.postgresql.org/docs/faq
Re: [GENERAL] pg 8.0.1-r3 killls pgadmin3 and phppgadmin
Sometimes life has an irony of itself. Today I modified some of my Gentoo USE flags for something totally unrelated to postgres. Tonight I built postgres in debug mode: the offending query worked. I thought "hm." I rebuilt it without debug, and it still works. I don't know what made it not work before; I sure didn't hallucinate. It must be some obscure incompatibility deep inside the Gentoo package and build manager... phppgadmin works, and pgadmin doesn't, telling me the "datpath" column doesn't exist; no idea what this means. I'm going to look into it. Thanks for your help, and sorry about bothering you! On Mon, 11 Apr 2005 18:42:29 +0200, Tom Lane <[EMAIL PROTECTED]> wrote: PFC <[EMAIL PROTECTED]> writes: I have no idea what to type in gdb to get the trace, though What I usually do is - start a psql session - in another window, find out the PID of the backend attached to the psql session, and do $ gdb /path/to/postgres backend_PID ... gdb> b errfinish gdb> cont - go back to psql session and issue problem command - when gdb stops execution, do gdb> bt ... useful printout is here ... gdb> quit sure you want to exit? y regards, tom lane
Re: [GENERAL] Please help to speed up UPDATE statement
On Apr 12, 2005 9:40 AM, Andrus <[EMAIL PROTECTED]> wrote:

> The following statement runs VERY slowly on large tables. Any idea how to speed it up?
>
> UPDATE rid SET dokumnr=NULL WHERE dokumnr NOT IN (SELECT dokumnr FROM dok);
>
> Tables:
>
> CREATE TABLE dok ( dokumnr INTEGER, CONSTRAINT dok_pkey PRIMARY KEY (dokumnr) );
> CREATE TABLE rid ( dokumnr INTEGER );
> CREATE INDEX rid_dokumnr_idx ON rid (dokumnr);
>
> Andrus

Please send your explain plan; everyone will want to see it to tell what is going on.
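Until the plan shows up, one hedged suggestion: on the 7.x planners, NOT IN over a large subquery was often executed as a per-row scan of the subquery result, while an equivalent anti-join formulation can use the index on dok's primary key. A sketch using the schema from the question:

```sql
-- Equivalent rewrite of NOT IN as NOT EXISTS; the IS NOT NULL guard
-- keeps rows whose dokumnr is already NULL from being touched:
UPDATE rid
SET dokumnr = NULL
WHERE dokumnr IS NOT NULL
  AND NOT EXISTS (SELECT 1 FROM dok WHERE dok.dokumnr = rid.dokumnr);
```

The two forms agree here because dok.dokumnr is a primary key and therefore never NULL; with NULLs in the subquery, NOT IN and NOT EXISTS behave differently.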
[GENERAL] generating a parent/child relationship in a trigger
Suppose I'm adding a row to a table, and one of the columns is a pathname. I would like to generate, in a separate table, parent/child relationships based on the path. For example, adding "/foo/bar/baz" and "/foo/bar/bot" would generate the following relationships:

parent      child
------      -----
/foo        /foo/bar
/foo/bar    /foo/bar/baz
/foo/bar    /foo/bar/bot

Is this a reasonable thing to do in a trigger? Does anybody have a PL/pgSQL snippet to do this? Many TIA, Mark -- Mark Harrison Pixar Animation Studios
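For what it's worth, a sketch of such a trigger in PL/pgSQL; the table and column names are invented for illustration, and the parsing assumes absolute paths with '/' separators:

```sql
CREATE TABLE files (path text PRIMARY KEY);
CREATE TABLE path_rel (parent text, child text);

CREATE OR REPLACE FUNCTION gen_path_rels() RETURNS trigger AS $$
DECLARE
    v_child  text := NEW.path;
    v_parent text;
BEGIN
    -- Walk up the path one component at a time:
    -- '/foo/bar/baz' yields ('/foo/bar', '/foo/bar/baz'), then ('/foo', '/foo/bar')
    LOOP
        v_parent := substring(v_child from '^(.*)/[^/]*$');
        EXIT WHEN v_parent IS NULL OR v_parent = '';
        -- Skip pairs already recorded by an earlier insert:
        PERFORM 1 FROM path_rel
         WHERE parent = v_parent AND child = v_child;
        IF NOT FOUND THEN
            INSERT INTO path_rel (parent, child) VALUES (v_parent, v_child);
        END IF;
        v_child := v_parent;
    END LOOP;
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER files_path_rels AFTER INSERT ON files
    FOR EACH ROW EXECUTE PROCEDURE gen_path_rels();
```

With that in place, inserting '/foo/bar/baz' and then '/foo/bar/bot' should leave the three distinct (parent, child) rows shown above in path_rel. (The $$ dollar-quoting is 8.0 syntax; on older releases the function body would need to be a single-quoted string.)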
Re: [GENERAL] Encoding Issue with UNICODE
On Apr 12, 2005, at 6:39 AM, Fritz Bayer wrote: But in which encoding? I guess utf8 or utf16... But why doesn that fail only for äüö? Shouldn't any other letter encoded in utf16 also fail? I mean unicode itself is 16 bit long. So "münchen" should expand to 14 characters. But only ü expands to two characters. PostgreSQL only supports utf-8. There has been discussion of using a label other than "unicode" to make this more apparent. John DeSoi, Ph.D. http://pgedit.com/ Power Tools for PostgreSQL ---(end of broadcast)--- TIP 8: explain analyze is your friend
Re: [GENERAL] Suitable Database version
On Tue, 2005-04-12 at 12:27, Santharam wrote: > Hi All, > I am recently planning to use PostgreSQL to be my backend for > my Java web based application in JBoss application server. This is to > be moved to the production environment. I don't want to change the > postgresql version for next few years. So, I want a stable version of > postgresql to be used for this. Can anybody tell me the most suitable > version for my need. Of course, I am expecting the database to be > huge. I'd go straight to 8.0.2 if you're just getting started. By the time you have anything really ready for production, 8.0 will have gone from stable to very stable. The extra features 8.0 has (table spaces, improved query planner, PITR, and many more) make it a great choice for large databases. Whether you stick to one major version for years or not, make sure to keep up to date on the patch level releases. It's not uncommon for people to show up on the list running a truly old version of postgresql, with known but fixed bugs, that they could have avoided if they'd updated to the latest patch level fixed version (i.e. 7.4.7 where the .7 is the patch or revision number) ---(end of broadcast)--- TIP 9: the planner will ignore your desire to choose an index scan if your joining column's datatypes do not match
Re: [GENERAL] Encoding Issue with UNICODE
[EMAIL PROTECTED] ("Daniel Verite") wrote in message news:<[EMAIL PROTECTED]>... > Fritz Bayer wrote: > > I have a java program, which writes words containing German umlauts > > like äöü into the database. As you probably know, those characters > > belong to the ISO-8859-1 character encoding set. > > > > In my java web application those umlauts (äöü) get displayed correctly. > > So they actually get stored correctly in the database. > > > > However, when I use PostgreSQL's psql client those characters get > > displayed incorrectly. > > > > For example the city name "münchen" gets displayed as "mÃ?nchen". Not > > so in my web application. There the city name in the HTML code appears > > correctly as "münchen". > > > > So why is psql not displaying the unicode characters correctly? Or > > could it be that my xterm cannot handle unicode characters? > > From your description it really looks like the latter. You can issue > \encoding latin1 > inside psql > Thanks for your help. Now I understand. It's true: somehow my terminal does not handle unicode characters. After I entered "\encoding latin1" as you suggested, everything works fine. So the answer is that without that, unicode characters get displayed as raw bytes. But in which encoding? I guess utf8 or utf16... But why does that fail only for äüö? Shouldn't any other letter encoded in utf16 also fail? I mean, unicode itself is 16 bits long, so "münchen" should expand to 14 characters. But only ü expands to two characters. > or you can also set the PGCLIENTENCODING environment variable to latin1 > before launching psql on non-unicode aware terminals. > > > Can somebody help me out here? Should I create the databases as LATIN1 > > instead of UNICODE? And how can I transform my current databases into > > LATIN1 ones? They should be compatible, because all characters I use > > are only äöü, which are downward compatible. > > But then you'll have trouble with your java app if you do that. > Java works with unicode strings, so it makes sense to have the db contents in unicode as well. No, that's OK. Java communicates with the database using unicode only. That's why it also worked...
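As an aside, the per-session client encoding can also be changed with plain SQL, equivalent to psql's \encoding command or the PGCLIENTENCODING variable mentioned above:

```sql
-- Ask the server to convert between its UNICODE (UTF-8) storage
-- encoding and LATIN1 on the wire for this session:
SET client_encoding TO 'LATIN1';
SHOW client_encoding;
```

This only changes the conversion for one connection; the database itself stays UNICODE.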
[GENERAL] Please help to speed up UPDATE statement
The following statement runs VERY slowly on large tables. Any idea how to speed it up ? UPDATE rid SET dokumnr=NULL WHERE dokumnr NOT IN (SELECT dokumnr FROM dok); Tables: CREATE TABLE dok ( dokumnr INTEGER, CONSTRAINT dok_pkey PRIMARY KEY (dokumnr) ); CREATE TABLE rid ( dokumnr INTEGER ); CREATE INDEX rid_dokumnr_idx ON rid (dokumnr); Andrus ---(end of broadcast)--- TIP 5: Have you checked our extensive FAQ? http://www.postgresql.org/docs/faq
Re: [GENERAL] Asyncron replication from wan to lan with PostgreSQL 8
Hi, I have used Daffodil Replicator in a similar situation earlier. It is necessary that the master and slave databases be exposed to each other. The problem might have occurred due to the deployment of a firewall in the network that is preventing the connection from being established. The solution could be to use routers or some other means so that the master and slave servers are exposed to each other. Regards, Kuldeep Kumar
Re: [GENERAL] Encoding Issue with UNICODE
[EMAIL PROTECTED] ("Magnus Naeslund(t)") wrote in message news:<[EMAIL PROTECTED]>... > Fritz Bayer wrote: > > Hello, > > > > I'm using postgresql 7.2.1. According to the following lines, data in > > my database gets encoded as unicode. Server and client communication > > seems to use unicode as well: > > > > woody=# select version(); > > version > > --- > > PostgreSQL 7.2.1 on i686-pc-linux-gnu, compiled by GCC 2.95.4 > > (1 row) > > > > woody=# select getdatabaseencoding(); > > getdatabaseencoding > > - > > UNICODE > > (1 row) > > > > woody=# show client_encoding; > > NOTICE: Current client encoding is 'UNICODE' > > SHOW VARIABLE > > > > I have a java program, which writes words containing German umlauts > > like äöü into the database. As you probably know, those characters > > belong to the ISO-8859-1 character encoding set. > > > > In my java web application those umlauts (äöü) get displayed correctly. > > So they actually get stored correctly in the database. > > > > I know I had to set the charSet option in the connection URL to get > stuff working once: > > "jdbc:postgresql://server/database?charSet=LATIN1" > > Maybe that would work for UNICODE? > As far as I have heard, the charSet property is ignored by the JDBC drivers. However, somebody patched them and introduced this property. > Regards, > Magnus
[GENERAL] Suitable Database version
Hi All, I am planning to use PostgreSQL as the backend for my Java web-based application in the JBoss application server. This is to be moved to the production environment. I don't want to change the postgresql version for the next few years, so I want a stable version of postgresql to use for this. Can anybody tell me the most suitable version for my needs? Of course, I am expecting the database to be huge. Thanks in advance. Regards, Santharam A.B.
Re: [GENERAL] Unique constraint violation on serial column
All, In the pg_dump output is the command: COPY event_tbl (d1, ..., evt_id) FROM stdin; followed by all the data for the table. There are 23040 rows. The last value for evt_id is 23040. So far so good. Then the last statement in the pg_dump output is: SELECT pg_catalog.setval('event_tbl_evt_id_seq', 21232, true); I'm guessing this is my culprit. But this dump is from the client site. Short of somebody mucking with the sequence manually, is there any way that the sequence number could get changed? As I've said, our insert commands are very simple and we do not specify 'evt_id' values directly. Does anybody have any ideas about how this could have gotten out of sync? thanks, Bill --- Alvaro Herrera <[EMAIL PROTECTED]> wrote: > On Tue, Apr 12, 2005 at 08:48:15AM -0700, Bill > Chandler wrote: > > > How does one find out the current sequence value? > Is > > there a way to change it? > > Using the function setval() you can change it. > SELECT * from > sequencename to find out. > > -- > Alvaro Herrera (<[EMAIL PROTECTED]>) > "Si quieres ser creativo, aprende el arte de perder > el tiempo"
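A hedged sketch of getting the sequence back in step with the table, using setval() as Alvaro suggests and the names from the thread:

```sql
-- Point the sequence at the highest evt_id actually present, so the
-- next nextval() returns max(evt_id) + 1 instead of an already-used value:
SELECT setval('event_tbl_evt_id_seq',
              (SELECT max(evt_id) FROM event_tbl));
```

After this, inserts that rely on the serial column's default should stop colliding with existing rows.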
Re: [GENERAL] Unique constraint violation on serial column
On Tue, Apr 12, 2005 at 08:58:41AM -0700, Bill Chandler wrote: > I did not intend to put explicit values in the > 'evt_id' column. I thought the six values in the > insert command correspond to the 1st six columns in > the create table command, namely d1, obj_id, d2, val, > correction and delta and 'evt_id' is set to the > nextval() automagically. Is that not correct? Sorry, I didn't count the fields. My mistake. -- Alvaro Herrera (<[EMAIL PROTECTED]>) Tulio: oh, para qué servirá este boton, Juan Carlos? Policarpo: No, aléjense, no toquen la consola! Juan Carlos: Lo apretaré una y otra vez. ---(end of broadcast)--- TIP 6: Have you searched our list archives? http://archives.postgresql.org
Re: [GENERAL] Unique constraint violation on serial column
I did not intend to put explicit values in the 'evt_id' column. I thought the six values in the insert command correspond to the 1st six columns in the create table command, namely d1, obj_id, d2, val, correction and delta and 'evt_id' is set to the nextval() automagically. Is that not correct? Bill --- Alvaro Herrera <[EMAIL PROTECTED]> wrote: > On Tue, Apr 12, 2005 at 07:59:55AM -0700, Bill > Chandler wrote: > > I'm sorry, was working on little sleep yesterday. > You > > are right, the table was created with the columns > in > > the following order: > > > > d1, obj_id, d2, val, correction, delta, evt_id > > > > The insert command looks something like: > > > > INSERT INTO EVENT_TBL VALUES(1039850293991, 145, > > 1039110343000, '10.25', 1, 739950991) > > So you are indeed inserting constant values in your > serial field. (Bad > idea.) Why are you doing that (i.e. why aren't you > using nextval()), > and do you have numbers bigger than the sequence's > current value already > in the table? > > -- > Alvaro Herrera (<[EMAIL PROTECTED]>) > "Uno combate cuando es necesario... ¡no cuando está > de humor! > El humor es para el ganado, o para hacer el amor, o > para tocar el > baliset. No para combatir." (Gurney Halleck)
Re: [GENERAL] Unique constraint violation on serial column
On Tue, Apr 12, 2005 at 08:48:15AM -0700, Bill Chandler wrote:
> How does one find out the current sequence value? Is there a way to
> change it?

SELECT * FROM sequencename to find out; using the function setval() you can change it.

-- 
Alvaro Herrera (<[EMAIL PROTECTED]>)
"If you want to be creative, learn the art of wasting time"
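Concretely, Alvaro's answer might look like this (a sketch; the sequence name assumes the default <table>_<column>_seq naming that a bigserial column produces):

```sql
-- Inspect the sequence's current state:
SELECT last_value FROM event_tbl_evt_id_seq;

-- After restoring a dump that inserted explicit evt_id values,
-- resynchronize the sequence with the table's current maximum:
SELECT setval('event_tbl_evt_id_seq', (SELECT max(evt_id) FROM event_tbl));
```

After the setval(), the next nextval() call returns max(evt_id) + 1, so new inserts no longer collide with restored rows.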
Re: [GENERAL] Unique constraint violation on serial column
I was able to get a pg_dump of the table in question. It has 23040 rows in it. The evt_id column ranges from 1 to 23040. I used the dump to create a new database. All inserts fail with the same error (unique constraint violation). However, I am wondering if this is just a result of the fact that the pg_dump output contains explicit evt_id values for each row added, so after creating the database from the pg_dump output the sequence is still at 1 when it should really be at 23040.

How does one find out the current sequence value? Is there a way to change it?

Bill

--- Bill Chandler <[EMAIL PROTECTED]> wrote:
> I'm sorry, was working on little sleep yesterday. You are right, the
> table was created with the columns in the following order:
>
> d1, obj_id, d2, val, correction, delta, evt_id
>
> The insert command looks something like:
>
> INSERT INTO EVENT_TBL VALUES(1039850293991, 145, 1039110343000, '10.25', 1, 739950991)
>
> Bill
>
> --- Ragnar Hafstað <[EMAIL PROTECTED]> wrote:
> > On Mon, 2005-04-11 at 11:03 -0700, Bill Chandler wrote:
> > > ERROR: duplicate key violates unique constraint event_tbl_evt_id_key
> > >
> > > EVENT_TBL
> > > evt_id     bigserial, unique
> > > d1         numeric(13)
> > > obj_id     numeric(6)
> > > d2         numeric(13)
> > > val        varchar(22)
> > > correction numeric(1)
> > > delta      numeric(13)
> >
> > and a bit later, in response to a question,
> > On Mon, 2005-04-11 at 14:24 -0700, Bill Chandler wrote:
> > > Tom,
> > >
> > > This is not the EXACT command (don't have that since this is a client
> > > site and they did not have logging turned on) but the insert command
> > > would have looked something like:
> > >
> > > INSERT INTO EVENT_TBL VALUES(1039850293991, 'X.Y.Z', 1039110343000, '10.25', 1, 739950991)
> >
> > firstly, the types do not seem to match the table definition.
> >
> > secondly, you seem to be inserting a literal value into your
> > serial column.
> >
> > did you mean to say that the insert was
> > INSERT INTO EVENT_TBL (d1,...) VALUES (...) ?
> >
> > what is the current value of the sequence?
> > are there any rows where evt_id is higher than that?
> >
> > gnari
Re: [GENERAL] Unique constraint violation on serial column
On Tue, Apr 12, 2005 at 07:59:55AM -0700, Bill Chandler wrote:
> I'm sorry, was working on little sleep yesterday. You are right, the
> table was created with the columns in the following order:
>
> d1, obj_id, d2, val, correction, delta, evt_id
>
> The insert command looks something like:
>
> INSERT INTO EVENT_TBL VALUES(1039850293991, 145, 1039110343000, '10.25', 1, 739950991)

So you are indeed inserting constant values in your serial field. (Bad idea.) Why are you doing that (i.e. why aren't you using nextval()), and do you have numbers bigger than the sequence's current value already in the table?

-- 
Alvaro Herrera (<[EMAIL PROTECTED]>)
"One fights when it is necessary... not when one is in the mood!
Moods are for cattle, or for making love, or for playing the
baliset. Not for fighting." (Gurney Halleck)
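A sketch of the safer insert Alvaro and gnari are suggesting, naming the target columns so the bigserial default supplies evt_id (table and column names taken from the thread):

```sql
-- Name the columns explicitly; evt_id is omitted, so its
-- DEFAULT nextval(...) fires and the sequence stays consistent:
INSERT INTO event_tbl (d1, obj_id, d2, val, correction, delta)
VALUES (1039850293991, 145, 1039110343000, '10.25', 1, 739950991);
```

With the column list present, the six supplied values can never silently shift into the serial column if the table's column order changes.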
Re: [GENERAL] Get Number of milliseconds for an intervall
Sorry, I forgot to say: in PL/pgSQL.

---
Patrick Fiche
email : [EMAIL PROTECTED]
tel : 01 69 29 36 18
---

-----Original Message-----
From: FERREIRA William (COFRAMI) [mailto:[EMAIL PROTECTED]]
Sent: Tuesday, April 12, 2005 5:35 PM
To: '[EMAIL PROTECTED]'; 'pgsql-general@postgresql.org'
Subject: RE: [GENERAL] Get Number of milliseconds for an intervall

in which language ?

-----Original Message-----
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]]
Sent: Tuesday, April 12, 2005 3:25 PM
To: pgsql-general@postgresql.org
Subject: [GENERAL] Get Number of milliseconds for an intervall

Hi,

I would like to get the elapsed time in milliseconds between 2 dates... Does anyone have an idea of how to do it ?

Thanks
Patrick
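One common answer (a sketch, not taken from the thread): extract the interval's epoch in seconds and scale to milliseconds. The same expression works unchanged inside a PL/pgSQL function.

```sql
-- Milliseconds between two timestamps, via the interval's epoch:
SELECT EXTRACT(EPOCH FROM (timestamp '2005-04-12 17:35:00'
                         - timestamp '2005-04-12 15:25:00')) * 1000
       AS elapsed_ms;
-- returns 7800000 (2 hours 10 minutes)

-- In PL/pgSQL, assign the same expression to a variable:
--   ms := EXTRACT(EPOCH FROM (t2 - t1)) * 1000;
```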
Re: [GENERAL] Get Number of milliseconds for an intervall
in which language ?

-----Original Message-----
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]]
Sent: Tuesday, April 12, 2005 3:25 PM
To: pgsql-general@postgresql.org
Subject: [GENERAL] Get Number of milliseconds for an intervall

Hi,

I would like to get the elapsed time in milliseconds between 2 dates... Does anyone have an idea of how to do it ?

Thanks
Patrick

---
Patrick Fiche
email : [EMAIL PROTECTED]
tel : 01 69 29 36 18
---
Re: [GENERAL] client interfaces
On Tue, Apr 12, 2005 at 10:51:09AM -0400, David Parker wrote:
> Is there anything like a client library for postgres that does not use
> tcp/ip as the protocol? As part of the performance tuning of our
> application, the question was posed to me whether there was a more
> "direct" sort of API that would not require going through the tcp/ip
> stack. I assume the answer to this is "no", and I see no hint of
> anything in the source code, but I thought I would ask the question,
> just to be sure.

PostgreSQL supports connections over Unix-domain (AF_UNIX) sockets if your system does. They still use the socket interface, but they're typically faster than a TCP connection.

-- 
Michael Fuhr
http://www.fuhr.org/~mfuhr/
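In libpq terms, selecting the Unix-domain socket is just a conninfo setting: pass the socket *directory* as the host. The fragment below is a sketch; /tmp is the default socket directory for source builds, but packagers often use /var/run/postgresql, and the database/user names are invented.

```
# libpq conninfo for a Unix-domain socket connection:
# a "host" beginning with "/" is treated as a socket directory.
host=/tmp port=5432 dbname=mydb user=app
```

psql and any libpq-based client (including the PHP and ODBC drivers layered on it) accept the same form, so no code change is needed beyond the connection string.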
Re: [GENERAL] client interfaces
"David Parker" <[EMAIL PROTECTED]> writes: > Is there anything like a client library for postgres that does not use > tcp/ip as the protocol? Local connections generally go through Unix sockets, which is not the tcp stack. regards, tom lane ---(end of broadcast)--- TIP 4: Don't 'kill -9' the postmaster
Re: [GENERAL] Unique constraint violation on serial column
I'm sorry, was working on little sleep yesterday. You are right, the table was created with the columns in the following order:

d1, obj_id, d2, val, correction, delta, evt_id

The insert command looks something like:

INSERT INTO EVENT_TBL VALUES(1039850293991, 145, 1039110343000, '10.25', 1, 739950991)

Bill

--- Ragnar Hafstað <[EMAIL PROTECTED]> wrote:
> On Mon, 2005-04-11 at 11:03 -0700, Bill Chandler wrote:
> > ERROR: duplicate key violates unique constraint event_tbl_evt_id_key
> >
> > EVENT_TBL
> > evt_id     bigserial, unique
> > d1         numeric(13)
> > obj_id     numeric(6)
> > d2         numeric(13)
> > val        varchar(22)
> > correction numeric(1)
> > delta      numeric(13)
>
> and a bit later, in response to a question,
> On Mon, 2005-04-11 at 14:24 -0700, Bill Chandler wrote:
> > Tom,
> >
> > This is not the EXACT command (don't have that since this is a client
> > site and they did not have logging turned on) but the insert command
> > would have looked something like:
> >
> > INSERT INTO EVENT_TBL VALUES(1039850293991, 'X.Y.Z', 1039110343000, '10.25', 1, 739950991)
>
> firstly, the types do not seem to match the table definition.
>
> secondly, you seem to be inserting a literal value into your serial column.
>
> did you mean to say that the insert was
> INSERT INTO EVENT_TBL (d1,...) VALUES (...) ?
>
> what is the current value of the sequence?
> are there any rows where evt_id is higher than that?
>
> gnari
[GENERAL] client interfaces
Is there anything like a client library for postgres that does not use tcp/ip as the protocol? As part of the performance tuning of our application, the question was posed to me whether there was a more "direct" sort of API that would not require going through the tcp/ip stack. I assume the answer to this is "no", and I see no hint of anything in the source code, but I thought I would ask the question, just to be sure. Thanks.

- DAP
--
David Parker
Tazz Networks
(401) 709-5130
Re: [GENERAL] Crystal reports 9 fails to recognise data on upgrade to 8.0.1
Looks like your ODBC settings might have changed in the switch. In particular, look at the "text as longvarchar" setting. If you cannot find a solution, try posting to the pgsql-odbc list.

"Kristina Magwood" <[EMAIL PROTECTED]> wrote in message news:[EMAIL PROTECTED]
> Hi,
> We are updating from v7.3 to v8.0.1 as well as transferring the database
> to a new machine. We use Crystal Reports 9 to access the data on a
> production basis. Unfortunately, Crystal Reports 9 fails to recognise
> some of the data fields on the new database. In particular, it does not
> recognise VARCHAR 256 and larger. Any ideas?
> Thanks,
> Kristina Magwood
Re: [GENERAL] serial type (text instead of integer) and duplicate keys
Carlos Costa <[EMAIL PROTECTED]> writes:
> SELECT oid,xmin,xmax,ctid,id FROM articles WHERE id=391;
>    oid   |   xmin    |   xmax    |   ctid   | id
> ---------+-----------+-----------+----------+-----
>  3032469 | 152691945 | 152886684 | (104,6)  | 391
>  3032469 | 152886687 | 156995994 | (104,13) | 391
> (2 rows)

Could we see cmin,cmax as well? Since the OID is the same, these are evidently two versions of the same row; somehow one of them didn't get marked dead when the other one was created. What patterns of updating do you use on this table? Any SELECT FOR UPDATE?

regards, tom lane
Re: [GENERAL] What are the consequences of a bad database design (never seen that before !)
On Monday 11 April 2005 05:39, Jinane Haddad wrote:
> Hi everyone,
>
> i just got a new job in a small enterprise and they are using postgres as a
> database for their application. I was stupefied because the database design
> is so bad: we could even say it was done by amateurs. I have observed the
> following problems so far:
>
> 1- redundancy (TOO MUCH)
> 2- Many tables for the same object (stupid ex: a table for female_employees,
> another for males ...) instead of one table (there are cases of 6 tables
> for the same one)
> 3- Some essential tables are nonexistent
> 4- Null values for critical information
> 5- Primary keys of multiple fields (4 or 5 sometimes) due to bad design
> ...
>
> The bottom line is that they have been working on the application
> for 2 years. Queries are becoming bigger and contain a lot of unions and
> "in/not in". The data contained in the database has to be checked often;
> invalid values may be found ...

You need to figure out *why* they brought you in. If they brought you in because their current "database guru" was just too busy to do database work full time, you're going to need to approach things more carefully and make sure not to denigrate any of the previous work. If they brought you in because they recognize that they are starting to have problems, then you can be more straightforward about problems within the schema and better ways to approach things.

> My question is: with such a database, what are the long-term consequences,
> or can we determine them? I know that the queries will become slower, and
> the database will grow more quickly ... And a lot of information will not
> be trustworthy.

The two problems that will crop up are performance issues and bad data.

> But the people i am working with are not considering restructuring the
> database. They are even thinking of expanding it by adding new modules.
> Please can someone advise me, or tell me what may be the consequences

My advice is to not go to them with the "we need to totally reengineer the schema for the next 6 months so that we have the same functionality we have now" approach. Instead, figure out what the next module they want to add is and what parts of the system it will touch, and then see about reengineering those particular parts of the schema. The bit-by-bit approach should get them to the same end game without stalling development for the next few months. Make sure to make use of views and stored procedures to help keep backwards compatibility where you can't convince people to do code modifications.

HTH.

--
Robert Treat
Build A Brighter Lamp :: Linux Apache {middleware} PostgreSQL
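Robert's views-for-compatibility suggestion can be sketched like this (an illustrative example; the table and column names are invented, echoing the female_employees case from the original post):

```sql
-- New consolidated schema replaces per-gender tables:
CREATE TABLE employee (
    id     serial PRIMARY KEY,
    name   text NOT NULL,
    gender char(1) NOT NULL CHECK (gender IN ('F', 'M'))
);

-- Legacy code that still selects from "female_employees" keeps working:
CREATE VIEW female_employees AS
    SELECT id, name FROM employee WHERE gender = 'F';
```

Each old table becomes a view over the consolidated one, so the schema can be fixed underneath while the application code is migrated module by module.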
Re: [GENERAL] Question about Large Objects
"Sergey Karin" <[EMAIL PROTECTED]> writes: > As I understood PostgreSQL allows to store large objects 2GB size maximum. > Are there any plans to increase or removing that limitation? I don't think anyone's really thought about it. To do it without breaking backward compatibility, we'd have to invent a parallel 64-bit LO API and propagate that clear out to the clients ... seems like a pain in the neck for relatively small gain. > If no, are there any abilities to store 10-20GB raster data (aero foto > image) in postgreSQL? You could break it into chunks, but it might be better to keep it in the regular filesystem and just store a pathname in the database. regards, tom lane ---(end of broadcast)--- TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]
[GENERAL] postgres 8 upgrade problem
Hi,

I have just upgraded my (thank God test) database server to Postgres 8.0.1 from 7.4.7. I did a standard dump, shutdown, upgrade, restore procedure. On the server i have 5 databases. All of them work fine except one where I can "select" fine but cannot insert, update or delete!!! I get errors like this...

=# insert into dynaconfig (name, value) values ('a', 'g');
ERROR: invalid memory alloc request size 13077039904

The Postgresql log file has entries like:

LOG: server process (PID 24472) was terminated by signal 11
LOG: terminating any other active server processes
LOG: all server processes terminated; reinitializing
LOG: database system was interrupted at 2005-04-12 14:31:17 BST
LOG: checkpoint record is at 1/45B00648
LOG: redo record is at 1/45B00648; undo record is at 0/0; shutdown TRUE
LOG: next transaction ID: 4827; next OID: 28627425
LOG: database system was not properly shut down; automatic recovery in progress
LOG: record with zero length at 1/45B00690
LOG: redo is not required
LOG: database system is ready

I can dump the database fine and I can reload it again without errors. All other databases on the server work fine. Any ideas?

Regards,
Abdul-Wahid
Re: [GENERAL] PostgreSQL 8.0.2 Now Available
No. Clients on other machines should work fine since nothing changed in the wire protocol.

On Tuesday 12 April 2005 05:24, Michael Ben-Nes wrote:
> I want to be 100% sure.
>
> Currently my server is running 8.0.1 and my HTTP server (another computer)
> is running php which was compiled against PG 7.4.7, and it's working great.
>
> Do I need to recompile all the clients on all the other servers (php /
> psql), including PHP?
>
> Thanks
>
> Marc G. Fournier wrote:
> > For those already running 8.x on your production servers, please note
> > that this upgrade does *NOT* require a dump restore, but due to a bump
> > in the major version number for the client library (libpq), it *WILL*
> > require all client applications to be recompiled at the same time.

--
Robert Treat
Build A Brighter Lamp :: Linux Apache {middleware} PostgreSQL
Re: [GENERAL] Loosing connection with the database
On Tue, 12 Apr 2005, Poul Møller Hansen wrote:
> I have rewritten the application so every client thread is opening a new
> database connection, and yesterday it happened again.
> ---
> 2005-04-11 12:27:54 ERROR: invalid string enlargement request size 1358954492
> 2005-04-11 12:27:54 WARNING: AbortTransaction and not in in-progress state
> 2005-04-11 12:27:54 FATAL: invalid frontend message type 78
> ---
> The application is opening a socket listener, and every client
> connection opens a new connection to the database.
> The clients send a status message every 2nd minute that is written to
> the database.
> I'm using Postgresql version 7.4.7 and jdbc driver version
> pg74.215.jdbc3.jar.
>
> Do you have a clue on what's going on ?

No, I don't. Do you have any more information? What is your code doing when it fails? Just issuing a regular query? Are you using any of the less common driver features: large objects, the fastpath api, a COPY patch? If the driver had a protocol problem I would expect it to be rather repeatable. If the driver had a synchronization problem it should have disappeared when you moved to a single thread model. I've attached the test script I've used to try and beat on the driver.

Kris Jurka

import java.sql.*;
import org.postgresql.*;
import org.postgresql.largeobject.*;

public class Threads {
    public static void main(String args[]) throws Exception {
        Class.forName("org.postgresql.Driver");
        Connection conn = DriverManager.getConnection("jdbc:postgresql://localhost:5432/jurka","jurka","");
        conn.setAutoCommit(false);
        Runner runners[] = new Runner[10];
        setupBlob(conn, runners.length);
        for (int i=0; i
[GENERAL] Get Number of milliseconds for an intervall
Hi, I would like to get the elapsed time in milliseconds between 2 dates... Does anyone have an idea of how to do it ? Thanks Patrick --- Patrick Fiche email : [EMAIL PROTECTED] tél : 01 69 29 36 18 ---
Re: [GENERAL] serial type (text instead of integer) and duplicate keys
Thank you very much. Here is the output:

SELECT oid,xmin,xmax,ctid,id FROM articles WHERE id=391;
   oid   |   xmin    |   xmax    |   ctid   | id
---------+-----------+-----------+----------+-----
 3032469 | 152691945 | 152886684 | (104,6)  | 391
 3032469 | 152886687 | 156995994 | (104,13) | 391
(2 rows)

My version is 7.4.1. And no, I haven't had crashes, and I have been using this database for 4-5 years. Some days ago, the partition where my data lives was full (just for a few seconds), but the whole system continued running fine. (This is the only possible cause of corruption I've detected.)

On Apr 12, 2005 2:00 PM, Richard Huxton wrote:
> Carlos Costa wrote:
> > There is a strange error in my logfile:
> >
> > update articles set online='t' where id = 391 ;
> > ERROR: duplicate key violates unique constraint "articles_pkey"
> >
> > (the error exists only with this id)
> >
> > Here is the origin of my problem, I think: "text". "text"?. The
> > "serial" type generates text instead of integer. Really strange.
>
> No - it's saying that 'articles_id_seq' is text. The sequence is
> returning a number.
>
> > The problem is easy to solve: delete and re-create the rows. But I
> > would like to know the origin of this error. Any tip?
>
> You're seeing two copies here because this query doesn't use the index
> (you're forcing PG to convert id to text). You should be able to
> recreate it using:
>    SET enable_indexscan=false;
>    SELECT * FROM articles WHERE id = 391;
> In fact, you should do:
>    SET enable_indexscan=false;
>    SELECT oid,xmin,xmax,ctid,id FROM articles WHERE id=391;
> This will show some system columns too. If you post the results of this
> query, I'm sure one of the developers will be able to identify the issue.
>
> I'm guessing the unique index has been corrupted somehow. Two questions:
> 1. What version of PostgreSQL are you running?
> 2. Have you had any crashes?
>
> If it is the index, a reindex or drop/recreate will solve it, but let's
> see what's in the system columns first.
> --
>    Richard Huxton
>    Archonet Ltd

--
[ http://www.improveyourweb.com/ ] web.log.about.web.development
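If the unique index does turn out to be corrupted, the repair Richard mentions is straightforward (a sketch; the index name comes from the error message in the thread):

```sql
-- Rebuild the corrupted unique index behind the primary key:
REINDEX INDEX articles_pkey;

-- Or rebuild every index on the table in one go:
REINDEX TABLE articles;
```

Note that rebuilding the index only removes the duplicate *index* entries' visibility problem at the index level; if two live heap tuples genuinely share the key, one of them still has to be deleted first.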
[GENERAL] Crystal reports 9 fails to recognise data on upgrade to 8.0.1
Hi, We are updating from v7.3 to v8.0.1 as well as transferring the database to a new machine. We use Crystal Reports 9 to access the data on a production basis. Unfortunately, Crystal Reports 9 fails to recognise some of the data fields on the new database. In particular, it does not recognise VARCHAR 256 and larger. Any ideas? Thanks, Kristina Magwood
Re: [GENERAL] serial type (text instead of integer) and duplicate keys
Carlos Costa wrote:
> Hello all!
>
> There is a strange error in my logfile:
>
> update articles set online='t' where id = 391 ;
> ERROR: duplicate key violates unique constraint "articles_pkey"
>
> (the error exists only with this id)
>
> I've checked (well, almost impossible) if there was more than one
> article with this id:
>
> select count(*) from articles where id=391 ;
>  count
> -------
>      1
> (1 row)
>
> Then, I checked the table:
>
>           Table "public.articles"
>  Column | Type    | Modifiers
> --------+---------+-----------------------------------------------------
>  id     | integer | not null default nextval('"articles_id_seq"'::text)
>
> Here is the origin of my problem, I think: "text". "text"?. The
> "serial" type generates text instead of integer. Really strange.

No - it's saying that 'articles_id_seq' is text. The sequence is returning a number.

> So, my next query:
>
> select id from articles where id like '%391%' ;
>  id
> -----
>  391
>  391
> (2 rows)
>
> The problem is easy to solve: delete and re-create the rows. But I
> would like to know the origin of this error. Any tip?

You're seeing two copies here because this query doesn't use the index (you're forcing PG to convert id to text). You should be able to recreate it using:

   SET enable_indexscan=false;
   SELECT * FROM articles WHERE id = 391;

In fact, you should do:

   SET enable_indexscan=false;
   SELECT oid,xmin,xmax,ctid,id FROM articles WHERE id=391;

This will show some system columns too. If you post the results of this query, I'm sure one of the developers will be able to identify the issue.

I'm guessing the unique index has been corrupted somehow. Two questions:
1. What version of PostgreSQL are you running?
2. Have you had any crashes?

If it is the index, a reindex or drop/recreate will solve it, but let's see what's in the system columns first.

--
Richard Huxton
Archonet Ltd
[GENERAL] serial type (text instead of integer) and duplicate keys
Hello all!

There is a strange error in my logfile:

update articles set online='t' where id = 391 ;
ERROR: duplicate key violates unique constraint "articles_pkey"

(the error exists only with this id)

I've checked (well, almost impossible) if there was more than one article with this id:

select count(*) from articles where id=391 ;
 count
-------
     1
(1 row)

Then, I checked the table:

          Table "public.articles"
 Column | Type    | Modifiers
--------+---------+-----------------------------------------------------
 id     | integer | not null default nextval('"articles_id_seq"'::text)

Here is the origin of my problem, I think: "text". "text"?. The "serial" type generates text instead of integer. Really strange.

So, my next query:

select id from articles where id like '%391%' ;
 id
-----
 391
 391
(2 rows)

The problem is easy to solve: delete and re-create the rows. But I would like to know the origin of this error. Any tip?

Thanks in advance,
Carlos

--
[ http://www.improveyourweb.com/ ] web.log.about.web.development
Re: [GENERAL] PostgreSQL 8.0.2 Now Available
I want to be 100% sure.

Currently my server is running 8.0.1 and my HTTP server (another computer) is running php which was compiled against PG 7.4.7, and it's working great.

Do I need to recompile all the clients on all the other servers (php / psql), including PHP?

Thanks

Marc G. Fournier wrote:
> For those already running 8.x on your production servers, please note
> that this upgrade does *NOT* require a dump restore, but due to a bump
> in the major version number for the client library (libpq), it *WILL*
> require all client applications to be recompiled at the same time.

--
Canaan Surfing Ltd.
Internet Service Providers
Ben-Nes Michael - Manager
Tel: 972-4-6991122
Cel: 972-52-8555757
Fax: 972-4-6990098
http://www.canaan.net.il
Re: [GENERAL] Connection closing
Clifton Zama wrote:
> Hi
>
> What would cause a connection to automatically close within a java
> program? It gets to a point where I have an inner loop that calls a
> recursive method that creates result sets. But I am closing all my
> result sets, so I do not know why the connection simply closes itself.
> Please help.

Turn logging on at the server (or turn the level of detail up). There will be a line saying why the connection closed. Presumably there is a status code available via JDBC too, but the server-side logs will tell you whether PG or your client is closing the connection.

--
Richard Huxton
Archonet Ltd
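The server-side logging Richard suggests might look like this in postgresql.conf (a sketch; exact parameter names and defaults vary somewhat between PostgreSQL versions):

```
# postgresql.conf - log connection lifecycle and errors
log_connections = true        # one line per accepted connection
log_disconnections = true     # one line, with session duration, per close
log_min_messages = notice     # raise detail if needed (e.g. debug1)
```

With disconnections logged, the line written when the session ends shows whether the backend exited on an error or the client simply closed the socket.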
Re: [GENERAL] Log File Maintainance
On Tue, 12 Apr 2005, Richard Huxton wrote:
> Inpreet Singh wrote:
> > I am working on a live server where we have installed a postgres database
> > as our back end. But now the problem is that due to continuous work on
> > postgres, the size of the log files has become a problem for us, and the
> > partition where our postgres lives is full. The postmaster is not working
> > now. I tried to rotate the log file by adding these lines in postgresql.conf:
> >
> > syslog = 2 # range 0-2
> > syslog_facility = 'LOCAL0'
> > syslog_ident = 'postgres'
>
> You also need to update your syslog.conf and restart syslogd.
> Add a line like:
> local0.* /var/log/pgsql

probably, better

local0.* -/var/log/pgsql

(the leading "-" tells syslogd not to sync the file after every message)

Read the man page(s) for details.

Regards,
Oleg
_____________________________________________________________
Oleg Bartunov, sci.researcher, hostmaster of AstroNet,
Sternberg Astronomical Institute, Moscow University (Russia)
Internet: oleg@sai.msu.su, http://www.sai.msu.su/~megera/
phone: +007(095)939-16-83, +007(095)939-23-83
Re: [GENERAL] Loosing connection with the database
> > > This sort of thing has been seen to occur when multiple client-side
> > > threads try to use the same database connection without proper locking
> > > to ensure only one thread uses it at a time. See for example
> > > http://archives.postgresql.org/pgsql-hackers/2004-09/msg00104.php
> >
> > This is exactly what I am doing. Must admit I haven't considered that as
> > an issue. For performance reasons I suppose one database connection per
> > client is preferred rather than using synchronized on the db class?
>
> The JDBC driver should be doing any synchronization necessary for
> multiple threads. Could you be more clear what you are doing? What
> driver version? Any chance you've got a reproducible example?
>
> Kris Jurka

I have rewritten the application so every client thread is opening a new database connection, and yesterday it happened again.
---
2005-04-11 12:27:54 ERROR: invalid string enlargement request size 1358954492
2005-04-11 12:27:54 WARNING: AbortTransaction and not in in-progress state
2005-04-11 12:27:54 FATAL: invalid frontend message type 78
---
The application is opening a socket listener, and every client connection opens a new connection to the database. The clients send a status message every 2nd minute that is written to the database. I'm using Postgresql version 7.4.7 and jdbc driver version pg74.215.jdbc3.jar. I have tried pg80.310.jdbc3.jar, but the datatype inet can't be used with setString??

Do you have a clue on what's going on?

Poul

I found this: http://jdbc.postgresql.org/documentation/80/thread.html
As you are saying, the jdbc driver should be thread safe, and has been since the first version. The
[GENERAL] Connection closing
Hi

What would cause a connection to automatically close within a java program? It gets to a point where I have an inner loop that calls a recursive method that creates result sets. But I am closing all my result sets, so I do not know why the connection simply closes itself. Please help.

Clifton Zama
[GENERAL] Question about Large Objects
Hi, all

As I understand it, PostgreSQL allows storing large objects of 2GB maximum size. Are there any plans to increase or remove that limitation?

If not, is there any way to store 10-20GB of raster data (aerial photo imagery) in PostgreSQL?

Thanks
Sergey Karin
Re: [GENERAL] Log File Maintainance
Inpreet Singh wrote:
> Hello,
>
> I am working on a live server where we have installed a postgres database
> as our back end. But now the problem is that due to continuous work on
> postgres, the size of the log files has become a problem for us, and the
> partition where our postgres lives is full. The postmaster is not working
> now. I tried to rotate the log file by adding these lines in postgresql.conf:
>
> syslog = 2 # range 0-2
> syslog_facility = 'LOCAL0'
> syslog_ident = 'postgres'

You also need to update your syslog.conf and restart syslogd. Add a line like:

local0.* /var/log/pgsql

Read the man page(s) for details.

--
Richard Huxton
Archonet Ltd