[GENERAL] Postgres crashes, help to recover
I have Postgres 8 running on Windows XP. The size of the data subdirectory is 326 MB. This morning one table in one database, firma1.klient, suddenly became corrupted. When trying to back it up using pgAdmin III I get the log below, and the backup is not created.

I have:

1. A compressed backup of the whole database as of 15.7
2. A compressed backup of the firma1 schema of this database (where the corrupted table klient resides) as of 19.7

How can I get the database back working, either by repairing the firma1.klient table or by restoring it from the schema backup? Why does Postgres crash? I use the default postgres.conf file, which probably has fsync on.

Log when trying to back up the table:

.
pg_dump: restoring data for table "klient"
pg_dump: dumping contents of table klient
pg_dump: ERROR: out of memory
DETAIL: Failed on request of size 544565107.
pg_dump: SQL command to dump the contents of table "klient" failed: PQendcopy() failed.
pg_dump: Error message from server: ERROR: out of memory
DETAIL: Failed on request of size 544565107.
pg_dump: The command was: COPY firma1.klient (kood, nimi, a_a, p_kood, regnr, vatpayno, piirkond, postiindek, tanav, kontaktisi, telefon, faks, email, wwwpage, liik, viitenr, riik, riik2, riigikood, hinnak, erihinnak, myygikood, objekt2, objekt5, objekt7, maksetin, omakseti, krediit, ostukredii, masin, info, maksja, "timestamp", atimestamp, elanikud, pindala, grmaja, apindala, kpindala, idmakett, tulemus, omandisuhe, username, changedby, parool, hinnaale, mitteakt, kontakteer, klikaart, mhprotsent, aadress, grupp, verskp, firma_enne, tegevusala, instkuupae, firmarv, tookohti, versioon, teenlepkp, jur, kasutab, est, rus, miniest, minirus, plakat, keel) TO stdout;
pg_dump: *** aborted because of error

Process returned exit code 1.

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings
[GENERAL] Administration of raster data
Hey List,

thanks to the PostGIS extension, PostgreSQL is THE free DBMS for geospatial data. I know of its capabilities for handling vector data and would like to know whether PostgreSQL can also be used to administer geospatial raster data. I would be thankful for any information and experiences in this field of application.

Regards,
Axel Orth

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate subscribe-nomail command to [EMAIL PROTECTED] so that your message can get through to the mailing list cleanly
[GENERAL] Problems compiling Postgresql 8.0.3 on 10.4
Hi, i have just installed 10.4 on one of our machines and cannot get past this error during make gcc -no-cpp-precomp -O2 -Wall -Wmissing-prototypes -Wpointer-arith -Wendif-labels -fno-strict-aliasing -dynamiclib -install_name /usr/local/pgsql/lib/libpq.4.dylib -compatibility_version 4 -current_version 4.0 -multiply_defined suppress fe-auth.o fe-connect.o fe-exec.o fe-misc.o fe-print.o fe-lobj.o fe-protocol2.o fe-protocol3.o pqexpbuffer.o pqsignal.o fe-secure.o md5.o ip.o wchar.o encnames.o noblock.o pgstrcasecmp.o thread.o -L../../../src/port -lresolv -o libpq.4.0.dylib /usr/bin/libtool: for architecture: cputype (16777234) cpusubtype (0) file: -lSystem is not an object file (not allowed in a library) make[3]: *** [libpq.4.0.dylib] Error 1 make[2]: *** [all] Error 2 make[1]: *** [all] Error 2 make: *** [all] Error 2 ---(end of broadcast)--- TIP 1: if posting/reading through Usenet, please send an appropriate subscribe-nomail command to [EMAIL PROTECTED] so that your message can get through to the mailing list cleanly
Re: [GENERAL] Postgres crashes, help to recover
Andrus wrote:
> I have Postgres 8 running on Windows XP. The size of the data subdirectory
> is 326 MB. This morning one table in one database, firma1.klient, suddenly
> became corrupted. When trying to back it up using pgAdmin III I get the log
> below and the backup is not created.

What caused this? Presumably you had a power/system-failure or similar?

> I have:
> 1. Compressed backup of the whole database as of 15.7
> 2. Compressed backup of the firma1 schema of this database as of 19.7
>    where corrupted table klient resides.

Do you mean file-level backups, or backups taken using pg_dump/pgadmin? You can't take file-backups of anything less than the entire data directory.

> How to get the database back working by repairing firma1.klient table or
> by restoring this from schema backup.

If your backup is recent enough, that's probably the quickest route.

> Why Postgres crashes? I use default postgres.conf file which has probably
> fsync on

*WHEN* did Postgresql crash, originally that is? This error was caused by something - when did something go horribly wrong? And do your disks honour the fsync? Was a power failure the cause of this?

> Log when trying to backup table:
> .
> pg_dump: restoring data for table "klient"
> pg_dump: dumping contents of table klient
> pg_dump: ERROR: out of memory
> DETAIL: Failed on request of size 544565107.

Well - unless you have a piece of data that's 544MB that certainly looks like corruption. It's entirely possible you can identify the row that's causing this problem and dump all the data either side of it. However, if your backup is good, then I'd just restore that.

--
Richard Huxton
Archonet Ltd

---(end of broadcast)---
TIP 4: Have you searched our list archives?
http://archives.postgresql.org
Re: [GENERAL] Postgres crashes, help to recover
>> Today morning suddenly one table in one database, firma1.klient is
>> corrupted. When trying to backup it using pgAdmin III I get the log
>> below and backup is not created.
>
> What caused this? Presumably you had a power/system-failure or similar?

Windows XP did not respond and I pressed the reset key yesterday evening. However, after that the database continued working yesterday. I use the default postgres.conf file created by the installer. To my knowledge this kind of crash should not occur. I'm very interested in the reasons for this crash.

I created a copy of the whole data directory. Running

select * from firma1.klient;

from pgAdmin yields:

server closed the connection unexpectedly
This probably means the server terminated abnormally before or while processing the request.

The Windows event log shows this:

LOG: checkpoint record is at 0/4FD3ECB8
LOG: all server processes terminated; reinitializing
WARNING: terminating connection because of crash of another server process
DETAIL: The postmaster has commanded this server process to roll back the current transaction and exit, because another server process exited abnormally and possibly corrupted shared memory.
HINT: In a moment you should be able to reconnect to the database and repeat your command.
LOG: terminating any other active server processes
LOG: server process (PID 2756) was terminated by signal 5

>> I have:
>>
>> 1. Compressed backup of the whole database as of 15.7
>> 2. Compressed backup of the firma1 schema of this database as of 19.7
>>    where corrupted table klient resides.
>
> Do you mean file-level backups, or backups taken using pg_dump/pgadmin?

I have compressed backups created using pgAdmin. The whole-database backup (the database contains two schemas, andmed and firma1) is from 15.7; the fresh firma1 schema backup is from 19.7.

>> How to get the database back working by repairing firma1.klient table or
>> by restoring this from schema backup.
>
> If your backup is recent enough, that's probably the quickest route.

My problem is that I have a whole-database backup only as of the evening of 15.7. The current backup (as of the evening of 19.7) covers only the firma1 schema. The corrupted table klient resides in the firma1 schema. I would need to restore the firma1 backup of 19.7 into a new database created from the 15.7 backup. Unfortunately, the firma1 schema is cross-referenced with the other schema (andmed), so I have no idea how to restore. Is it possible to convert the compressed backup file to plain text, or to get data out of it some other way?

>> Why Postgres crashes? I use default postgres.conf file which has
>> probably fsync on
>
> *WHEN* did Postgresql crash, originally that is? This error was caused by
> something - when did something go horribly wrong?

Yesterday evening I pressed the reset button because the Windows task manager stopped responding (by experimenting with the SetForegroundWindow Windows API call I had started 20 copies of charmap.exe and tried to kill them all from the task manager). However, after re-booting the computer the database continued working yesterday. I also restored a new database with 500 tables into this cluster yesterday.

> And do your disks honour the fsync? Was a power failure the cause of this?

I have a usual office PC using a Quantum FireballP LM20.5 20 GB IDE HDD with XP drivers. How can I determine whether fsync is working or not? I use the default postgres.conf file (added only listen_addresses = '*').

>> Log when trying to backup table:
>>
>> .
> Well - unless you have a piece of data that's 544MB that certainly looks
> like corruption.

The tables are small. The whole data directory (including WAL segments and 2 other unimportant databases) is about 350 MB.

> It's entirely possible you can identify the row that's causing this
> problem and dump all the data either side of it. However, if your backup
> is good, then I'd just restore that.

I have an up-to-date backup of the firma1 schema only. The whole-database backup is a bit old. Is it possible to dump the corrupted table, truncate it and re-load it? I think that referential integrity is not checked by TRUNCATE, and referential integrity does not prevent loading this table. Will a TRUNCATE command fix the corrupted table?

Andrus.

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings
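On the question of getting data back out of the compressed backup: a compressed (custom-format) dump made by pgAdmin/pg_dump can be turned into a plain SQL script with pg_restore, and a single table can be pulled out of it. A sketch only; the file names are placeholders:

  pg_restore -f firma1_schema.sql firma1_19_7.backup
  pg_restore --data-only -t klient -f klient_data.sql firma1_19_7.backup

The resulting .sql files can be inspected and edited before being fed to psql, which helps when schemas are cross-referenced.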
Re: [GENERAL] Postgres crashes, help to recover
It seems that the corrupted table klient contains data from some other table.

select * from firma1.klient limit 3686;

seems to return all the data from the table. Starting at row 3687 the table contains data from another table. The VACUUM command returns:

INFO: vacuuming "firma1.klient"
ERROR: invalid page header in block 1639 of relation "klient"

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate subscribe-nomail command to [EMAIL PROTECTED] so that your message can get through to the mailing list cleanly
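If the first 3686 rows really are intact, one way to salvage them before trying any repair is to copy them into a new table. A minimal sketch, assuming the plain sequential scan keeps returning the good rows first and never touches the damaged page:

  CREATE TABLE firma1.klient_rescue AS
    SELECT * FROM firma1.klient LIMIT 3686;

Since LIMIT without ORDER BY gives no ordering guarantee, the copy should be checked against expectations before relying on it.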
[GENERAL] Creating a database with psql
Hi,

I want to automate the startup of my database. So I have to create first a user and a database, and second the schema, all with psql. But how can I create a database with psql if no database exists? When executing psql, I must define the options for a database and a user with a password.

Best Regards,
Josef Springer

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate subscribe-nomail command to [EMAIL PROTECTED] so that your message can get through to the mailing list cleanly
Re: [GENERAL] Postgres crashes, help to recover
The klient table has oid 66079. The size of file 66079 is 13 MB; in a correct copy the klient table file size is 5 MB. It seems that another table, nomenkla (oid=65783), has been appended to the end of the klient table. How can I repair file 66079 so that it contains only the 3686 rows from the beginning?

---(end of broadcast)---
TIP 6: explain analyze is your friend
Re: [GENERAL] Creating a database with psql
Hi,

1. Connect to the template1 database with your user.
2. CREATE DATABASE YOURDB
3. "\c YOURDB" will connect to the new database.
4. CREATE TABLE etc.

linimi

On 7/20/05, Josef Springer <[EMAIL PROTECTED]> wrote:
> Hi,
>
> i want to automate the startup of my database. So i have to create first
> an user and a database, second the schema, all with psql. But how can i
> create a database with psql if no database exist ? Executing psql, i
> must define the options for a database and an user with password.
>
> Best Regards,
> Josef Springer

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster
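A minimal session following those steps might look like this (user, database and table names are only examples):

  $ psql -U postgres template1
  template1=# CREATE USER appuser WITH PASSWORD 'secret';
  template1=# CREATE DATABASE appdb OWNER appuser;
  template1=# \c appdb appuser
  appdb=> CREATE TABLE items (id serial PRIMARY KEY, name text);

The same statements can be put into a script file and run with psql -f for the automated startup Josef asked about.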
Re: [GENERAL] [HACKERS] Problems compiling Postgresql 8.0.3 on 10.4
Jamie, That's strange, I have 8.03 building fine on 10.4 ? Did you simply do a make without a make clean and run configure again ? Dave On 20-Jul-05, at 3:37 AM, Jamie Deppeler wrote: Hi, i have just installed 10.4 on one of our machines and cannot get past this error during make gcc -no-cpp-precomp -O2 -Wall -Wmissing-prototypes -Wpointer-arith - Wendif-labels -fno-strict-aliasing -dynamiclib -install_name /usr/ local/pgsql/lib/libpq.4.dylib -compatibility_version 4 - current_version 4.0 -multiply_defined suppress fe-auth.o fe- connect.o fe-exec.o fe-misc.o fe-print.o fe-lobj.o fe-protocol2.o fe-protocol3.o pqexpbuffer.o pqsignal.o fe-secure.o md5.o ip.o wchar.o encnames.o noblock.o pgstrcasecmp.o thread.o -L../../../ src/port -lresolv -o libpq.4.0.dylib /usr/bin/libtool: for architecture: cputype (16777234) cpusubtype (0) file: -lSystem is not an object file (not allowed in a library) make[3]: *** [libpq.4.0.dylib] Error 1 make[2]: *** [all] Error 2 make[1]: *** [all] Error 2 make: *** [all] Error 2 ---(end of broadcast)--- TIP 9: In versions below 8.0, the planner will ignore your desire to choose an index scan if your joining column's datatypes do not match ---(end of broadcast)--- TIP 5: don't forget to increase your free space map settings
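If the source tree still contains objects built under 10.3, a full rebuild is worth trying before anything else. A generic sequence, nothing specific to this report:

  make distclean
  ./configure
  make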
Re: [GENERAL] Creating a database with psql
On Wed, Jul 20, 2005 at 11:28:23AM +0200, Josef Springer wrote: > i want to automate the startup of my database. So i have to create first > an user and a database, second the schema, all with psql. But how can i > create a database with psql if no database exist ? When you first install PostgreSQL, you run initdb to initialize a cluster. This creates a few default databases and a database superuser. Then you start the backend and make connections to it using psql or some other client. See the "Server Administration" part of the documentation for more information: http://www.postgresql.org/docs/8.0/static/admin.html -- Michael Fuhr http://www.fuhr.org/~mfuhr/ ---(end of broadcast)--- TIP 9: In versions below 8.0, the planner will ignore your desire to choose an index scan if your joining column's datatypes do not match
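In outline the bootstrap sequence looks like this (paths and names are the usual defaults and may differ on your installation; createuser/createdb are just wrappers around SQL commands you could also run from psql connected to template1):

  initdb -D /usr/local/pgsql/data
  pg_ctl -D /usr/local/pgsql/data -l logfile start
  createuser --pwprompt appuser
  createdb --owner appuser appdb
  psql -U appuser appdb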
Re: [GENERAL] RAMFS with Postgres
On Tue, 2005-07-19 at 16:45 +, vinita bansal wrote: > Hi, > > I am trying RAMFS solution with Postgres wherein I am pushing the most > heavily used tables in RAM. Why? I mean, what problem are you trying to solve? > I have 32GB RAM on a 64 bit opteron machine. My database size is 40GB. I > think Linux allows max. of 16GB (half of available RAM) to be used directly > to push tables to it. > > I am concerned about reliabilty here (what if there is a power failure). > What are the things that need to be considered and what all can be done to > ensure that there is no data loss in case something goes wrong. What steps > must be taken to ensure data recovery. I am planning to use Slony > replication to replicate my database to a diff node so that incase something > goes wrong, I can restore it from replication node and start my runs on that > data again. The only problem here is that I need to run engines from > beginning. Is there any other way of doing the same thing or such a thing is > good enough given the fact that a failure like this happens very rarely. The > most imp. thing for me is the **data** which should not be lost under any > circumstances. Then don't use RAMFS. Slony may be a good idea, but it's hard to tell if you don't provide more info. What is the database used for? - heavy long running, CPU-based, read only queries? - many simple queries but over the whole dataset (thus I/O based)? - many INSERTs/UPDATEs? Is the database accessed by many concurrent users? How many of them are mostly read-only and how many perform writes? Each problem in each scenario may have a different solution... > Has anyone used Slony replication before. How good is it. Is there anything > else available which is better then Slony Replication? "better" is meaningless w/o a context. There are tasks in which Slony may the best tool in the world, and others that require a totally different approach. First you have to define what your problem is, and why the obvious solution (a normal PostGreSQL server, with a standard filesystem) does not work/fit. Then you choose a solution. > > Regards, > Vinita Bansal .TM. -- / / / / / / Marco Colombo ___/ ___ / / Technical Manager / / / ESI s.r.l. _/ _/ _/ [EMAIL PROTECTED] ---(end of broadcast)--- TIP 1: if posting/reading through Usenet, please send an appropriate subscribe-nomail command to [EMAIL PROTECTED] so that your message can get through to the mailing list cleanly
Re: [GENERAL] Postgres crashes, help to recover
> > And do your disks honour the fsync? Was a power failure the cause of this?
>
> I have usual office PC using Quantum FireballP LM20.5 20 GB IDE HDD with
> XP drivers.
> How to determine is fsync working or not?
> I use default postgres.conf file ( added only listen_addresses = '*' )

What version exactly was this? There were some changes in 8.0.2 in this area on win32. The default sync method was also changed at that point.

What's the output of "show wal_sync_method"?

Finally, go into Device Manager, find your disk, get Properties, and look under Policies: is the box for "Enable write caching on the disk" checked?

//Magnus

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to choose an index scan if your joining column's datatypes do not match
Re: [GENERAL] Postgres crashes, help to recover
> What version exactly was this? There was some changes in 8.0.2 in this > area on win32. The default sync method was also changed at this point. "PostgreSQL 8.0.2 on i686-pc-mingw32, compiled by GCC gcc.exe (GCC) 3.4.2 (mingw-special)" > What's the output of "show wal_sync_method"? "open_datasync" > Finally, go into device manager, find your disk, get properties, look > under Policies, is the box for "Enable write caching on the disk" > checked? It is checked. Does Postgres require this to be unchecked ? It is difficult to force customers to change it. Andrus. ---(end of broadcast)--- TIP 4: Have you searched our list archives? http://archives.postgresql.org
[GENERAL] development snapshots old
Dear developers,

I am not subscribed to the hackers list, so I submit my request to this list: please update the contents of the dev subdirectory on the ftp server. The snapshots therein are from mid-June, whereas the docs snapshot is recent enough.

Thanks,
Zoltan

---(end of broadcast)---
TIP 4: Have you searched our list archives?
http://archives.postgresql.org
Re: [GENERAL] Postgres crashes, help to recover
> > What version exactly was this? There were some changes in 8.0.2 in this
> > area on win32. The default sync method was also changed at that point.
>
> "PostgreSQL 8.0.2 on i686-pc-mingw32, compiled by GCC gcc.exe
> (GCC) 3.4.2 (mingw-special)"

There we go. That explains it - you have the new code.

> > What's the output of "show wal_sync_method"?
>
> "open_datasync"

This setting is only safe if you disable the write cache.

> > Finally, go into device manager, find your disk, get properties, look
> > under Policies, is the box for "Enable write caching on the disk"
> > checked?
>
> It is checked.
>
> Does Postgres require this to be unchecked? It is difficult
> to force customers to change it.

No, it doesn't require it. There are a couple of different scenarios:

1) Box is checked. wal_sync_method=open_datasync. This may cause data loss!
2) Box is checked. wal_sync_method=fsync_writethrough. This is safe.
3) Box is unchecked. wal_sync_method=open_datasync. This is safe.
4) Box is unchecked. wal_sync_method=fsync_writethrough. This is safe.

In general I would say that 1 is of course the fastest, but it's not safe. 3 should normally be the fastest if the data is on a disk that's only used by postgresql. 2 is probably faster if you have other applications that also write data to the same disk. 4 is probably *never* fastest :-)

This all assumes you don't have a battery-backed cache. If you have a controller with a battery-backed cache, 1 should still be the fastest, but now it's suddenly safe.

(The basics of these changes are documented in the release notes at http://www.postgresql.org/docs/8.0/static/release-8-0-2.html)

//Magnus

---(end of broadcast)---
TIP 6: explain analyze is your friend
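To check the current setting and switch to scenario 2 above, something along these lines (a sketch; the parameter lives in postgresql.conf and needs a server reload or restart to take effect):

  -- from psql: confirm the current method
  SHOW wal_sync_method;

  # in postgresql.conf: safe even with the disk write cache enabled
  wal_sync_method = fsync_writethrough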
Re: [GENERAL] Quotation marks in queries
Tony Caduto wrote:
> The easiest solution is just not to use caps or spaces in your
> table/object names, there is no advantage to doing so.
> People just need to get over the fact that having caps in a name makes it
> easier to read.
>
> My Test Table should be my_test_table, the naming makes no difference
> to the application using the table.
>
> Same thing with ordering of fields in a table, it makes no difference
> other than for looks if the fields are in the order you want them
> to be in.
>
> It is much more of a pain to quote your sql than it is to have it look
> nice.
>
> Just my 2 cents on the subject.

The problem we have is that we want to migrate to postgresql from our current sql server db, but the problem with caps requiring quotes around them makes this a far from easy migration.

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate subscribe-nomail command to [EMAIL PROTECTED] so that your message can get through to the mailing list cleanly
[GENERAL] Vacuumdb question
Hi, pgsql-general.

Please explain the difference between VACUUM FULL and VACUUM FREEZE.

--
Have a nice day!
go mailto:[EMAIL PROTECTED]

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster
Re: [GENERAL] Vacuumdb question
go wrote:
> Hi, pgsql-general.
>
> Explain me please the difference between
> Vacuum full and Vacuum freeze

See: http://www.postgresql.org/docs/8.0/interactive/sql-vacuum.html

--
Mario Günterberg
mattheis. werbeagentur
IT Engineer / Project Manager
Zillestrasse 105a. D - 10585 Berlin
Tel #49-(0)30 . 34 80 633 - 0
Fax #49-(0)30 . 34 80 633 50
http://www.mattheis-berlin.de

"If you think this is a big problem - wait until we try to find the solution." (Walter Matthau)

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings
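In short (the manual page above has the details; the table name is a placeholder):

  VACUUM mytable;        -- reclaims dead-row space for reuse within the table
  VACUUM FULL mytable;   -- also compacts the table and returns space to the OS, but takes an exclusive lock
  VACUUM FREEZE mytable; -- aggressively freezes old transaction IDs to guard against XID wraparound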
Re: [GENERAL] [HACKERS] Problems compiling Postgresql 8.0.3 on 10.4
On Jul 20, 2005, at 5:58 AM, Dave Cramer wrote: Hi, i have just installed 10.4 on one of our machines and cannot get past this error during make /usr/bin/libtool: for architecture: cputype (16777234) cpusubtype (0) file: -lSystem is not an object file (not allowed in a library) Install the new Xcode. that should fix it. -- Jeff Trout <[EMAIL PROTECTED]> http://www.jefftrout.com/ http://www.stuarthamm.net/ ---(end of broadcast)--- TIP 2: Don't 'kill -9' the postmaster
[GENERAL] Custom DateStyle
Is it possible to add a custom datestyle?

I am migrating from sybase to postgres and the datetime/timestamp formats are different, and some of the existing code depends on that format. I thought the easiest way would be to use a custom datestyle, but I couldn't find out how to add one (if that's possible). Does anyone have any better ideas?

Thanks in advance,
Amir

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate subscribe-nomail command to [EMAIL PROTECTED] so that your message can get through to the mailing list cleanly
[GENERAL] Wishlist?
Hi everyone! I'd like to post a few features I'd like to see in coming releases. Does anyone know about a wishlist newsgroup or web page, or what is the proper way to propose such requests?

Thanks,
Ezequiel Tolnay

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster
Re: [GENERAL] Changes to not deferred FK in 8.0.3 to 7.4?
Am Dienstag, 19. Juli 2005 15:40 schrieb Tom Lane: > Janning Vygen <[EMAIL PROTECTED]> writes: > > On more related question: > > I updated pg_trigger and pg_constraint and changed all my FK: > > > > UPDATE pg_trigger > > SET > > tgdeferrable = true, > > tginitdeferred = true > > WHERE tgconstrname LIKE 'fk_%' > > ; > > > > UPDATE pg_constraint > > SET > > condeferrable = true, > > condeferred = true > > WHERE conname LIKE 'fk_%' > > ; > > No, only the triggers that are for checks should be marked > deferrable/deferred. These are the ones using functions > RI_FKey_check_ins > RI_FKey_check_upd > RI_FKey_noaction_del > RI_FKey_noaction_upd > You want the others nondeferrable because (a) that's the standard > behavior and (b) it'll ensure that the actions happen before the > checks are made. ok thanks. i do it now like this: UPDATE pg_trigger SET tgdeferrable = true, tginitdeferred = true WHERE tgconstrname LIKE 'fk_%' AND tgfoid IN ( SELECT oid FROM pg_proc WHERE proname IN ( 'RI_FKey_check_ins', 'RI_FKey_check_upd', 'RI_FKey_noaction_del', 'RI_FKey_noaction_upd') ) ; UPDATE pg_constraint SET condeferrable = true, condeferred = true WHERE conname LIKE 'fk_%' ; COMMIT; This should work i hope, but i feel a little bit unsure if hacking the pg_catalog is a good way to do it. Maybe I should have take the long, but secure way by modifying the schema with ddl statements. kind regards, janning ---(end of broadcast)--- TIP 1: if posting/reading through Usenet, please send an appropriate subscribe-nomail command to [EMAIL PROTECTED] so that your message can get through to the mailing list cleanly
Re: [GENERAL] Postgres crashes, help to recover
>> "open_datasync" > > This setting is only safe if you disable write cache. Thanks you for explanation. I expected that default installation does not cause data loss in any maschine configuration. I don't remember was write cache enabling Windows default setting or was it set by me. Andrus. ---(end of broadcast)--- TIP 5: don't forget to increase your free space map settings
Re: [GENERAL] Slow first query
On Tue, 2005-07-19 at 16:31, Rob Brenart wrote: > Richard Huxton wrote: > > > Rob Brenart wrote: > > > >> I'm running postgresql 8.0 under WindowsXP for a development box (the > >> production server isn't up yet, and will be linux, so I don't know > >> that the problem will travel)... > >> > >> Using a PHP connection to it for a simple web app, if I leave the app > >> alone for a bit (about 20 seconds I'd say) and do something that > >> requires a query the next page load is slow, not 20 seconds slow, but > >> 3-5 seconds but as long as I stay active the page reloads are > >> instantaneous. > >> > >> Is there anything I can do about this, or should I just wait to see > >> how it performs on my production server before I worry too much? > > > > > > Since you don't say what's causing it, difficult to say what the > > problem is. Might it just be caching, does your development box have a > > lot of memory free to cache disk blocks? > > > > Otherwise, it could be almost anything. You'll have to run some > > timing/traces on the various elements of your setup (php/webserver/pg). > > > Yeah, I didn't say what's causing it because as of yet I haven't had > time to do any serious analysis on it... I thought I'd throw it out > there and see if anyone else had experience with it first. > > My guess is it's exactly what you're saying, some component or another > is getting tossed from RAM to disk and slowing things down. Also note that there are settings in Windows workstation that favor higher performance for the interactive graphical environment over the performance of server type processes. So, if you've got a lot of memory being used by your GUI and desktop apps, this is completely normal behaviour in the workstation version of Windows, and the fix is to deploy to a server version. ---(end of broadcast)--- TIP 6: explain analyze is your friend
Re: [GENERAL] Quotation marks in queries
# [EMAIL PROTECTED] / 2005-07-20 12:43:48 +0100: > Tony Caduto wrote: > >The easiest solution is just not to use caps or spaces in your > >table/object names, there is no advantage to doing so. > >People just need to get over the fact that having caps in a name make it > >easier to read. > > > >My Test Table should be my_test_table, the naming makes no difference > >to the application using the table. > > > >Same thing with ordering of fields in a table, it makes no difference > >other than for looks if the fields are in the order you want them > >to be in. > > > >It is much more of a pain to qoute your sql than it is to have it look > >nice. > > > >Just my 2 cents on the subject. > > > > The problem we have is that we want to migrate to postgresql from our > current sql server db, but the problem with caps requiring quotes around > them makes this a far from easy migration. Just so that it doesn't look like your problems are caused by PostgreSQL: it is in accordance with SQL:1999, with the exception that SQL says "fold to uppercase", but PostgreSQL folds to lowercase. IOW, if you require that "table" <> "Table" <> "TABLE", then SQL is the wrong langaue. -- How many Vietnam vets does it take to screw in a light bulb? You don't know, man. You don't KNOW. Cause you weren't THERE. http://bash.org/?255991 ---(end of broadcast)--- TIP 6: explain analyze is your friend
[GENERAL] on delete rules on a view problem
Hi all,

I'm a bit puzzled. I have a view and some delete rules defined on it. Now my problem is that only one gets executed (the first one) and the other ones seem not to be executed at all. I use a subselect within those rules (which works fine for the first one, by the way) and I wonder if this could be a problem? Are there any caveats? Is it important which of these rules is the DO INSTEAD rule?

Are there any things to be considered when using delete rules? I don't find any exhaustive documentation on the internet (except the pg manual), so any pointers will be appreciated.

Many thanks
Jürgen

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate subscribe-nomail command to [EMAIL PROTECTED] so that your message can get through to the mailing list cleanly
[GENERAL] Slow Inserts on 1 table?
I have one particular insert query that is running orders of magnitude slower than other insert queries, and I cannot understand why. For example, Inserts into "conceptProperty" (detailed below) are at least 5 times faster than inserts into "conceptPropertyMultiAttributes". When I am running the inserts, postmaster shows as pegging one CPU on the Fedora Core 3 server it is running on at nearly 100%. Any advice is appreciated. Here is a lot of info that may shed light on the issue to someone with more experience than me: Example Insert Query with data: INSERT INTO conceptPropertyMultiAttributes (codingSchemeName, conceptCode, propertyId, attributeName, attributeValue) VALUES ('NCI MetaThesaurus', 'C005', 'T-2', 'Source', 'MSH2005_2004_10_12') EXPLAIN ANALYZE output: QUERY PLAN Result (cost=0.00..0.01 rows=1 width=0) (actual time=0.005..0.008 rows=1 loops=1) Total runtime: 4.032 ms Table Structure: CREATE TABLE conceptpropertymultiattributes ( codingschemename character varying(70) NOT NULL, conceptcode character varying(100) NOT NULL, propertyid character varying(50) NOT NULL, attributename character varying(50) NOT NULL, attributevalue character varying(250) NOT NULL ); Primary Key: ALTER TABLE ONLY conceptpropertymultiattributes ADD CONSTRAINT conceptpropertymultiattributes_pkey PRIMARY KEY (codingschemename, conceptcode, propertyid, attributename, attributevalue); Foreign Key: ALTER TABLE ONLY conceptpropertymultiattributes ADD CONSTRAINT f FOREIGN KEY (codingschemename, conceptcode, propertyid) REFERENCES conceptproperty(codingschemename, conceptcode, propertyid); Structure of Table Referenced by Foreign Key: CREATE TABLE conceptproperty ( codingschemename character varying(70) NOT NULL, conceptcode character varying(100) NOT NULL, propertyid character varying(50) NOT NULL, property character varying(250) NOT NULL, "language" character varying(32), presentationformat character varying(50), datatype character varying(50), ispreferred boolean, degreeoffidelity character varying(50), matchifnocontext boolean, representationalform character varying(50), propertyvalue text NOT NULL ); Primary Key: ALTER TABLE ONLY conceptproperty ADD CONSTRAINT conceptproperty_pkey PRIMARY KEY (codingschemename, conceptcode, propertyid); Thanks, Dan ---(end of broadcast)--- TIP 9: In versions below 8.0, the planner will ignore your desire to choose an index scan if your joining column's datatypes do not match
Re: [GENERAL] Quotation marks in queries
On Wed, Jul 20, 2005 at 12:43:48PM +0100, teknokrat wrote:
> The problem we have is that we want to migrate to postgresql from our
> current sql server db, but the problem with caps requiring quotes around
> them makes this a far from easy migration.

The rule is pretty much: either always quote or never quote. Once you start mixing and matching you're likely to get screwed. So, if you don't use quotes in the CREATE TABLE statement, you'll never have to quote anywhere else either...

Hope this helps,
--
Martijn van Oosterhout http://svana.org/kleptog/
> Patent. n. Genius is 5% inspiration and 95% perspiration. A patent is a
> tool for doing 5% of the work and then sitting around waiting for someone
> else to do the other 95% so you can sue them.

---(end of broadcast)---
TIP 6: explain analyze is your friend
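A small illustration of the folding rule, runnable in any scratch database:

  CREATE TABLE MyTable (Id int);        -- unquoted: stored as mytable / id
  SELECT id FROM mytable;               -- works
  SELECT "Id" FROM "MyTable";           -- fails: relation "MyTable" does not exist
  CREATE TABLE "My Table" ("Id" int);   -- quoted: case and space preserved
  SELECT "Id" FROM "My Table";          -- and must now be quoted everywhere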
Re: [GENERAL] Quotation marks in queries
well, you could always write a little function that would loop through every object and then rename to lower case and replace all spaces with _ underscores. Then in your application code just rename everything accordingly. Once again, even if you are using M$ SQL server it would be a good idea to avoid filenames with caps and spaces, I guess just keep that in mind for future projects. I have converted large access databases where the users just used whatever the heck they wanted just by renaming everything and it did not take that long. It would be worth it in the long run to do. Good luck with your conversion. Tony Caduto http://www.amsoftwaredesign.com Home of PG Lighting Admin for Postgresql 8.x teknokrat wrote: The problem we have is that we want to migrate to postgresql from our current sql server db, but the problem with caps requiring quotes around them makes this a far from easy migration. ---(end of broadcast)--- TIP 1: if posting/reading through Usenet, please send an appropriate subscribe-nomail command to [EMAIL PROTECTED] so that your message can get through to the mailing list cleanly ---(end of broadcast)--- TIP 1: if posting/reading through Usenet, please send an appropriate subscribe-nomail command to [EMAIL PROTECTED] so that your message can get through to the mailing list cleanly
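One way to do the bulk rename Tony describes is to let the catalogs generate the ALTER statements and then run the output by hand. A sketch that only covers tables in the public schema and assumes the new names do not collide; columns would need a similar pass using pg_attribute:

  SELECT 'ALTER TABLE "' || relname || '" RENAME TO '
         || lower(replace(relname, ' ', '_')) || ';'
  FROM pg_class
  WHERE relkind = 'r'
    AND relnamespace = (SELECT oid FROM pg_namespace WHERE nspname = 'public')
    AND relname <> lower(replace(relname, ' ', '_'));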
Re: [GENERAL] Custom DateStyle
On Tue, Jul 19, 2005 at 11:51:55 -0400, Amir Tahvildaran <[EMAIL PROTECTED]> wrote: > Is it possible to add a custom datestyle? > > I am migrating from sybase to postgres and the datetime/timestamp > formats are different, some of the existing code depends on that > format. I thought the easiest way would be to use a custom datestyle, > but I couldn't find out how to add one (if thats possible). Does anyone > have any better ideas? You can use to_char to convert dates to strings in custom formats. That may do what you want. ---(end of broadcast)--- TIP 1: if posting/reading through Usenet, please send an appropriate subscribe-nomail command to [EMAIL PROTECTED] so that your message can get through to the mailing list cleanly
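For example, Sybase's default datetime display is roughly 'Mon DD YYYY HH:MIAM', so something like the following gets close; the exact pattern may need tweaking to match what the existing code expects:

  SELECT to_char(now(), 'Mon DD YYYY HH12:MIAM');
  -- e.g.  Jul 20 2005 03:45PM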
Re: [GENERAL] Quotation marks in queries
That's a really good point about the CREATE TABLE: you can just rename everything in your CREATE TABLE statements before you actually do any data import. Then, as long as the fields are in the same physical order (it does not matter if the names are different), you can output the data from the source system as tab-delimited and use the PostgreSQL COPY command to import the tab-delimited file into PostgreSQL.

Tony Caduto
http://www.amsoftwaredesign.com
Home of PG Lightning Admin for Postgresql 8.x

> So, if you don't use quotes in the CREATE TABLE statement, you'll never
> have to quote anywhere else either...
>
> Hope this helps,

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate subscribe-nomail command to [EMAIL PROTECTED] so that your message can get through to the mailing list cleanly
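For the load itself, tab-delimited output from the source system can be pulled in from psql with \copy (tab is the default delimiter for the text format); table and file names here are placeholders:

  \copy my_test_table from 'my_test_table.tab'

  -- or, as a superuser with the file readable by the server:
  COPY my_test_table FROM '/tmp/my_test_table.tab';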
Re: [GENERAL] Wishlist?
On Wed, Jul 20, 2005 at 02:44:19PM +1000, Ezequiel Tolnay wrote: > Hi everyone! I'd like to post a few features I'd like to see in coming > releases. Does anyone know about a wishlist newsgroup or web page, or > whoat is the proper way to propose such requests? http://www.postgresql.org/developer/roadmap Note especially the part about developers scratching their own itch. You can request all you want, but unless a number of developers agree it's a good idea it probably won't make it to the TODO. And stuff can get pulled from the TODO without actually being done. You'll have better luck getting something added if you're willing to commit to developing it (or pay someone else to). Having said all that, if you want to just request stuff (that's not already on the TODO), you can just post it here and it might get on the TODO if there's enough interest. If you want to discuss specifics, -hackers is probably a better place. -- Jim C. Nasby, Database Consultant [EMAIL PROTECTED] Give your computer some brain candy! www.distributed.net Team #1828 Windows: "Where do you want to go today?" Linux: "Where do you want to go tomorrow?" FreeBSD: "Are you guys coming, or what?" ---(end of broadcast)--- TIP 6: explain analyze is your friend
Re: [GENERAL] Slow Inserts on 1 table?
What indexes are defined on both tables? Are there any triggers or rules? On Wed, Jul 20, 2005 at 09:50:54AM -0500, Dan Armbrust wrote: > I have one particular insert query that is running orders of magnitude > slower than other insert queries, and I cannot understand why. > For example, Inserts into "conceptProperty" (detailed below) are at > least 5 times faster than inserts into "conceptPropertyMultiAttributes". > > When I am running the inserts, postmaster shows as pegging one CPU on > the Fedora Core 3 server it is running on at nearly 100%. > > Any advice is appreciated. Here is a lot of info that may shed light on > the issue to someone with more experience than me: > > Example Insert Query with data: > INSERT INTO conceptPropertyMultiAttributes (codingSchemeName, > conceptCode, propertyId, attributeName, attributeValue) VALUES ('NCI > MetaThesaurus', 'C005', 'T-2', 'Source', 'MSH2005_2004_10_12') > > EXPLAIN ANALYZE output: > QUERY PLAN > Result (cost=0.00..0.01 rows=1 width=0) (actual time=0.005..0.008 > rows=1 loops=1) > Total runtime: 4.032 ms > > Table Structure: > CREATE TABLE conceptpropertymultiattributes ( >codingschemename character varying(70) NOT NULL, >conceptcode character varying(100) NOT NULL, >propertyid character varying(50) NOT NULL, >attributename character varying(50) NOT NULL, >attributevalue character varying(250) NOT NULL > ); > > Primary Key: > ALTER TABLE ONLY conceptpropertymultiattributes >ADD CONSTRAINT conceptpropertymultiattributes_pkey PRIMARY KEY > (codingschemename, conceptcode, propertyid, attributename, attributevalue); > > Foreign Key: > ALTER TABLE ONLY conceptpropertymultiattributes >ADD CONSTRAINT f FOREIGN KEY (codingschemename, conceptcode, > propertyid) REFERENCES conceptproperty(codingschemename, conceptcode, > propertyid); > > > Structure of Table Referenced by Foreign Key: > CREATE TABLE conceptproperty ( >codingschemename character varying(70) NOT NULL, >conceptcode character varying(100) NOT NULL, >propertyid character varying(50) NOT NULL, >property character varying(250) NOT NULL, >"language" character varying(32), >presentationformat character varying(50), >datatype character varying(50), >ispreferred boolean, >degreeoffidelity character varying(50), >matchifnocontext boolean, >representationalform character varying(50), >propertyvalue text NOT NULL > ); > > Primary Key: > ALTER TABLE ONLY conceptproperty >ADD CONSTRAINT conceptproperty_pkey PRIMARY KEY (codingschemename, > conceptcode, propertyid); > > Thanks, > > Dan > > ---(end of broadcast)--- > TIP 9: In versions below 8.0, the planner will ignore your desire to > choose an index scan if your joining column's datatypes do not > match > -- Jim C. Nasby, Database Consultant [EMAIL PROTECTED] Give your computer some brain candy! www.distributed.net Team #1828 Windows: "Where do you want to go today?" Linux: "Where do you want to go tomorrow?" FreeBSD: "Are you guys coming, or what?" ---(end of broadcast)--- TIP 5: don't forget to increase your free space map settings
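For reference, a quick way to answer those questions from psql (\d lists indexes, foreign keys and triggers; the catalog queries give the same information):

  \d conceptpropertymultiattributes

  SELECT indexname, indexdef FROM pg_indexes
  WHERE tablename IN ('conceptpropertymultiattributes', 'conceptproperty');

  SELECT tgname FROM pg_trigger
  WHERE tgrelid = 'conceptpropertymultiattributes'::regclass;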
Re: [GENERAL] on delete rules on a view problem
On Wed, Jul 20, 2005 at 16:49:26 +0200, "Rose, Juergen" <[EMAIL PROTECTED]> wrote:
> Hi all,
>
> I'am a bit puzzled. I have a view and some delete rules defined on it. Now my
> problem is, only one gets executed (the first one) and the other ones seem
> not to be executed at all. I use a subselect within those rules (which works
> fine for the first one by the way) and I wonder if this could be a problem?
> Are there any caveats? Is it important which of theses rules is the do
> instead rule?
>
> Are there any things to be considered with using delete rules? I don't find
> any exhaustive documentation on the internet (except pg manual), so any
> pointers will be appreciated.

I have seen similar reports in the past. In that case the problem was that after rows were removed from one table, the corresponding rows in the other table could not be found, because they were selected using a join and the rows they were supposed to match up with were already gone.

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster
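A stripped-down illustration of that failure mode (all names are made up, not from Jürgen's schema): rules on the same event are applied in alphabetical order by rule name, so if the rule that deletes the parent rows ends up running first, a later rule whose subselect looks those rows up again finds nothing left to match.

  CREATE TABLE master (id int PRIMARY KEY, name text);
  CREATE TABLE detail (id int PRIMARY KEY, master_id int);
  CREATE VIEW combined AS
    SELECT m.id, m.name, d.id AS detail_id
    FROM master m JOIN detail d ON d.master_id = m.id;

  -- applied first (alphabetically); removes the master row
  CREATE RULE del_1_master AS ON DELETE TO combined
    DO INSTEAD DELETE FROM master WHERE id = OLD.id;

  -- applied second; by now the subselect finds no master row, so no detail rows go away
  CREATE RULE del_2_detail AS ON DELETE TO combined
    DO DELETE FROM detail
       WHERE master_id IN (SELECT id FROM master WHERE id = OLD.id);

Rewriting the second rule to use OLD directly (WHERE master_id = OLD.id) avoids the dependency on the already-deleted rows.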
[GENERAL] Trigger problem
Hi, I'm trying to write my first trigger, which fires when a user inserts into a table; the trigger then inserts records into other tables. But when I try to insert a record into the trigger's table I get an error:

- Syntax error near "into" in char 9. -

This is the trigger:

CREATE OR REPLACE FUNCTION socios_insert() RETURNS "trigger" AS
$BODY$
BEGIN
-- Create the multiple-address record
INSERT INTO DomiciliosMultiples (CodigoProfesion, MatriculaProfesional, NumeroSecuencia, CodigoProfesionCliSanHosp, MatriculaProfesionalCliSanHosp, Calle, Numero, Piso, Dto, Otros, Telefono, CodigoLocalidad, HorariosAtencion, NombreInstitucion, HabilitacionAMR, HabilitacionColegio, ExtensionHabilitacion, Titular, CodigoSeccional, CodigoZona)
VALUES (NEW.CodigoProfesion, NEW.MatriculaProfesional, 1, 0, 0, NEW.CalleConsultorio, NEW.NumeroConsultorio, NEW.PisoConsultorio, NEW.DtoConsultorio, NEW.OtrosConsultorio, NEW.TelefonoConsultorio, NEW.CodigoLocalidadConsultorio, '', '', FALSE, FALSE, FALSE, FALSE, 0, 99);

-- Create the default networks
INSERT INTO RedDomicilioMultiple (CodigoDomicilioMultiple, CodigoRed, FechaInicioVigencia)
VALUES (CURRVAL('domiciliosmultiples_codigodomiciliomultiple_seq'), 2, NEW.FechaIngresoCTM);

INSERT INTO RedDomicilioMultiple (CodigoDomicilioMultiple, CodigoRed, FechaInicioVigencia)
VALUES (CURRVAL('domiciliosmultiples_codigodomiciliomultiple_seq'), 13, NEW.FechaIngresoCTM);

INSERT INTO RedDomicilioMultiple (CodigoDomicilioMultiple, CodigoRed, FechaInicioVigencia)
VALUES (CURRVAL('domiciliosmultiples_codigodomiciliomultiple_seq'), 99, NEW.FechaIngresoCTM);
END;
$BODY$
LANGUAGE 'plpgsql' VOLATILE;

CREATE TRIGGER tsocios_insert_after AFTER INSERT ON socios
FOR EACH ROW EXECUTE PROCEDURE socios_insert();

Thanks,
Alejandro

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to choose an index scan if your joining column's datatypes do not match
Re: [GENERAL] Trigger problem
On Wed, Jul 20, 2005 at 03:00:42PM -0300, Alejandro D. Burne wrote: > Hi, I'll be trying to write a my first trigger which fire when an user > makes an insert into a table; this insert records in others tables. > But when I try to insert a record in trigger's table generate an error: > - Syntax error near "into" in char 9. - The INSERT statement itself might be at fault instead of the trigger function. Please post the INSERT statement and the complete, exact error message (there are probably multiple lines). If that doesn't tell us what's wrong, then we'll need to see a self-contained example, i.e., all SQL statements that somebody could execute in an empty database to reproduce the problem. -- Michael Fuhr http://www.fuhr.org/~mfuhr/ ---(end of broadcast)--- TIP 5: don't forget to increase your free space map settings
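Separate from whatever is causing the reported syntax error: a PL/pgSQL trigger function must end with a RETURN statement, or it will fail at run time once the trigger fires. For an AFTER ROW trigger the return value is ignored, so the skeleton is simply:

  CREATE OR REPLACE FUNCTION socios_insert() RETURNS "trigger" AS
  $BODY$
  BEGIN
      -- ... the INSERT statements ...
      RETURN NULL;   -- the return value of an AFTER trigger is ignored
  END;
  $BODY$ LANGUAGE 'plpgsql' VOLATILE;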
Re: [GENERAL] index row size exceeds btree maximum, 2713 - Solutions?
Dan Armbrust wrote: I'm trying to load some data into PostgreSQL 8.0.3, and I got the error message "index row size 2904 exceeds btree maximum, 2713". After a bunch of searching, I believe that I am getting this error because a value that I am indexing is longer than ~ 1/3 of the block size - or the BLCKSZ variable in the src/include/pg_config_manual.h file. Am I correct so far? I need to fix this problem. I cannot change the indexed columns. I cannot shorten the data value. And I cannot MD5 it, or any of those hashing types of solutions that I saw a lot while searching. Is there a variable I can set somewhere, so that postgresql would just truncate the value to the max length that the index can handle when it goes to enter it into the index, instead of failing with an error? I would be fine with not having this particular row fully indexed, so long as I could still retrieve the full data value. The other solution that I saw was to modify the BLCKSZ variable. From what I saw, it appears that to change that variable, I would need to dump my databases out, recompile everything, and then reload them from scratch. Is this correct? Currently the BLCKSZ variable is set to 8192. What are the performance/disk usage/other? implications of doubling this value, to 16384? Any other suggestions in dealing with this problem? Thanks, Dan Thanks for all the information and ideas WRT this issue. I ended up just having to remove the index from this particular column that was having the issue - in my particular case, I didn't lose anything by doing this anyway, because the index wasn't being used for its intended purpose anyway, due to case sensitivity issues. Could I suggest adding this error, its causes, and possible solutions from this thread (http://archives.postgresql.org/pgsql-general/2005-07/msg00731.php) to the FAQ? It took me a long time to connect all the dots through a lot of different e-mail threads. Also, maybe the max index size should be documented in the manual as well? Dan -- Daniel Armbrust Biomedical Informatics Mayo Clinic Rochester daniel.armbrust(at)mayo.edu http://informatics.mayo.edu/ ---(end of broadcast)--- TIP 1: if posting/reading through Usenet, please send an appropriate subscribe-nomail command to [EMAIL PROTECTED] so that your message can get through to the mailing list cleanly
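For the record, another workaround sometimes suggested (besides hashing) is an expression index on a prefix of the long column, sized to stay under the btree limit; 8.0 supports expression indexes. Table and column names below are placeholders, and queries must use the same expression for the index to be considered:

  CREATE INDEX my_table_big_col_prefix_idx ON my_table (substr(big_col, 1, 500));

  SELECT * FROM my_table
  WHERE substr(big_col, 1, 500) = substr('search value', 1, 500)
    AND big_col = 'search value';   -- recheck the full value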
[GENERAL] help: production db stuck in startup mode
I have a 7.4.6 db running on itanium hpux 11.23. It appears a vacuum ran out of disk space, 2005-07-20 13:47:43 [6161] ERROR: DBI::do failed for SQL [VACUUM VERBOSE] to database clincomm: DBI error: PANIC: could not write to file "/users/postgresql-7.4.6/data/pg_xlog/xlogtemp.6178": No space left on device Pgsql was unresponsive, so we sent SIGINT, then SIGQUIT, both with no response, so finally, sent SIGKILL, then used ipcrm to clear the shm and semaphores. Then restarted, but upon restart, it's stuck in "FATAL: the database system is starting up" mode. Any clues??? Thanks, Ed ---(end of broadcast)--- TIP 3: Have you checked our extensive FAQ? http://www.postgresql.org/docs/faq
Re: [GENERAL] help: production db stuck in startup mode
On Wednesday July 20 2005 1:13 pm, Ed L. wrote: > I have a 7.4.6 db running on itanium hpux 11.23. It appears a > vacuum ran out of disk space, Oh, and of course, we created extra diskspace before restarting... Ed > > 2005-07-20 13:47:43 [6161] ERROR: DBI::do failed for SQL > [VACUUM VERBOSE] to database clincomm: DBI error: PANIC: > could not write to file > "/users/postgresql-7.4.6/data/pg_xlog/xlogtemp.6178": No space > left on device > > Pgsql was unresponsive, so we sent SIGINT, then SIGQUIT, both > with no response, so finally, sent SIGKILL, then used ipcrm to > clear the shm and semaphores. Then restarted, but upon > restart, it's stuck in "FATAL: the database system is > starting up" mode. > > Any clues??? > > Thanks, > Ed > > ---(end of > broadcast)--- TIP 3: Have you checked > our extensive FAQ? > >http://www.postgresql.org/docs/faq ---(end of broadcast)--- TIP 3: Have you checked our extensive FAQ? http://www.postgresql.org/docs/faq
Re: [GENERAL] Seg fault in postgres 7.4.7?
On Mon, Jul 18, 2005 at 09:21:31AM -0700, Akash Garg wrote: > > I looked in the data directory of postgres -- where else should I look for it? Did you look everywhere beneath the data directory, or just in the data directory itself? Released versions of PostgreSQL typically dump core in $PGDATA/base/XXX, where XXX is the database OID. But on some systems you can configure where core dumps go and what name they have, so you might have to check your local settings. If you're sure you didn't get a core dump then you might have a resource limit like "coredumpsize" that prevents them from happening. If so, then it might be useful to adjust that limit so you can get core dumps. -- Michael Fuhr http://www.fuhr.org/~mfuhr/ ---(end of broadcast)--- TIP 1: if posting/reading through Usenet, please send an appropriate subscribe-nomail command to [EMAIL PROTECTED] so that your message can get through to the mailing list cleanly
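On most Unix-like systems that limit can be checked and lifted in the shell that starts the postmaster (bash/sh syntax; csh uses "limit coredumpsize unlimited"):

  ulimit -c            # show the current core file size limit
  ulimit -c unlimited  # allow full core dumps for processes started from this shell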
Re: [GENERAL] help: production db stuck in startup mode
On Wednesday July 20 2005 1:19 pm, Ed L. wrote: > On Wednesday July 20 2005 1:13 pm, Ed L. wrote: > > I have a 7.4.6 db running on itanium hpux 11.23. It appears > > a vacuum ran out of disk space: > > 2005-07-20 13:47:43 [6161] ERROR: DBI::do failed for SQL > > [VACUUM VERBOSE] to database clincomm: DBI error: PANIC: > > could not write to file > > "/users/postgresql-7.4.6/data/pg_xlog/xlogtemp.6178": No > > space left on device > > > > Pgsql was unresponsive, so we sent SIGINT, then SIGQUIT, > > both with no response, so finally, sent SIGKILL, then used > > ipcrm to clear the shm and semaphores. Then we (cleared > > diskspace and) restarted, but > > upon restart, it's stuck in "FATAL: the database system is > > starting up" mode. The system finally restarted, but it went through auto-recovery and took some number of minutes to completely come up. Autovac was in the process of doing a db-wide vacuum to prevent xid wraparound. We had 1-2gb of diskspace free. What kind of diskspace usage surge might that create for a 20gb+ db? Ed ---(end of broadcast)--- TIP 9: In versions below 8.0, the planner will ignore your desire to choose an index scan if your joining column's datatypes do not match
[GENERAL] IN subquery not using a hash
For the following query, postgres is running the IN subquery over and over again (once for each row scanned in the parent table.) I would have expected it to run the whole query once and create a hash which would then be probed once for every row scanned in the parent table. I assumed that it was not doing so because it thought that the resulting hash table would exceed sort_mem, but setting sort_mem to half a gigabyte did not make any difference. Is there some other reason that the optimizer is not using a hash table? 563 pages * 8 KB per page * 296 tuples fetched / 52085 tuples in the whole table = 25 KB. Shouldn't the optimizer think that the subquery will only fetch 25 KB worth of rows? (Later, I realized that the official name for "sort_mem" is now work_mem. Now, does this mean that my set sort_mem = 50 did not do anything?) Regards, Paul Tillotson omnis=> select version(); version - PostgreSQL 8.0.3 on i686-pc-linux-gnu, compiled by GCC gcc (GCC) 3.3.3 20040412 (Red Hat Linux 3.3.3-7) (1 row) omnis=> show sort_mem; 2 omnis=> set sort_mem = 50; SET omnis=> explain analyze select 1 from parents where lname = 'SMITH' OR parentid IN (SELECT parentid FROM child where childlast = 'SMITH'); QUERY PLAN - Seq Scan on parents (cost=357.78..226649.83 rows=59785 width=0) (actual time=127.644..104568.639 rows=855 loops=1) Filter: (((lname)::text = 'SMITH'::text) OR (subplan)) SubPlan -> Materialize (cost=357.78..360.74 rows=296 width=4) (actual time=0.001..0.257 rows=313 loops=117943) -> Index Scan using child_childlast_index on child (cost=0.00..357.48 rows=296 width=4) (actual time=0.073..1.325 rows=313 loops=1) Index Cond: ((childlast)::text = 'SMITH'::text) Total runtime: 104569.800 ms (7 rows) omnis=> select reltuples, relpages from pg_class where relname = 'child'; reltuples | relpages ---+-- 52085 | 563 (1 row) ---(end of broadcast)--- TIP 1: if posting/reading through Usenet, please send an appropriate subscribe-nomail command to [EMAIL PROTECTED] so that your message can get through to the mailing list cleanly
Re: [GENERAL] IN subquery not using a hash
Paul Tillotson <[EMAIL PROTECTED]> writes: > For the following query, postgres is running the IN subquery over and > over again (once for each row scanned in the parent table.) > I would have expected it to run the whole query once and create a hash > which would then be probed once for every row scanned in the parent > table. I assumed that it was not doing so because it thought that the > resulting hash table would exceed sort_mem, Hardly likely, considering it's estimating only 296 rows in the subquery output. My bet is that you've chosen a datatype whose comparisons are not hashable (like char(n)). What is the datatype of parentid in these tables, anyway? regards, tom lane ---(end of broadcast)--- TIP 5: don't forget to increase your free space map settings
Re: [GENERAL] IN subquery not using a hash
Tom Lane wrote:
> Paul Tillotson <[EMAIL PROTECTED]> writes:
>> For the following query, postgres is running the IN subquery over and
>> over again (once for each row scanned in the parent table.) I would have
>> expected it to run the whole query once and create a hash which would
>> then be probed once for every row scanned in the parent table. I assumed
>> that it was not doing so because it thought that the resulting hash table
>> would exceed sort_mem,
>
> Hardly likely, considering it's estimating only 296 rows in the subquery
> output. My bet is that you've chosen a datatype whose comparisons are
> not hashable (like char(n)). What is the datatype of parentid in these
> tables, anyway?
>
> regards, tom lane

I don't have access to the machine now, but my memory is that parent.parentid is numeric(10,2) and child.parentid is int. If child.parentid is int and parent.parentid is numeric, would that cause this? (Not good database design, I know.)

I am 100% certain that neither of these are char(n), and 99% certain that they are either numeric or int.

Paul Tillotson

---(end of broadcast)---
TIP 6: explain analyze is your friend
Re: [GENERAL] IN subquery not using a hash
On Wed, Jul 20, 2005 at 08:11:46PM -0400, Paul Tillotson wrote: > Tom Lane wrote: > >Hardly likely, considering it's estimating only 296 rows in the subquery > >output. My bet is that you've chosen a datatype whose comparisons are > >not hashable (like char(n)). What is the datatype of parentid in these > >tables, anyway? > > > I don't have access to the machine now, but my memory is that > parent.parentid is numeric(10,2) and child.parentid is int. Numeric isn't hashable, but I don't know enough about the internals to say why. Tom? Why different types, and why numeric for one of them? Why not integer for both? -- Michael Fuhr http://www.fuhr.org/~mfuhr/ ---(end of broadcast)--- TIP 5: don't forget to increase your free space map settings
Re: [GENERAL] IN subquery not using a hash
Paul Tillotson <[EMAIL PROTECTED]> writes: > Tom Lane wrote: >> Hardly likely, considering it's estimating only 296 rows in the subquery >> output. My bet is that you've chosen a datatype whose comparisons are >> not hashable (like char(n)). What is the datatype of parentid in these >> tables, anyway? >> > I don't have access to the machine now, but my memory is that > parent.parentid is numeric(10,2) and child.parentid is int. Offhand I don't believe there are any hashable crosstype comparisons. In this case the int is probably getting promoted to numeric, but I think numeric comparison isn't hashable either (because for example '0.0' = '0.000' but the internal representations are different). regards, tom lane ---(end of broadcast)--- TIP 6: explain analyze is your friend
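Given that, the usual fix is to make both columns the same, hashable type: either cast in the query, or better, change the column type. A sketch that assumes parents.parentid only ever holds whole numbers that fit in an integer:

  -- one-off: cast so both sides of the IN are integer
  SELECT 1 FROM parents
  WHERE lname = 'SMITH'
     OR parentid::integer IN (SELECT parentid FROM child WHERE childlast = 'SMITH');

  -- longer term (8.0+): change the column type; this rewrites the table and takes an exclusive lock
  ALTER TABLE parents ALTER COLUMN parentid TYPE integer;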
[GENERAL] No user being created in os X
Hi,

I'm having an issue with 10.4.2 at the moment: when I initialize the database, no user is being created. I have set up PostgreSQL 8.0.3 on 10.3 without any issue. Has anyone come across this problem before?

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate subscribe-nomail command to [EMAIL PROTECTED] so that your message can get through to the mailing list cleanly
[GENERAL] Cannot start DB system
All,

I have PostgreSQL 8.0 on WinXP. This morning at 7:59am the db system could not start. I looked at pg_log. Starting from yesterday at 1:30pm, all log files have this line:

database system was not properly shut down; automatic recovery in progress

Then I restarted the PC several times, but I did not see any new log in the pg_log folder. Thus, I think the db system might have crashed or cannot function at all. I tried two things:

1. Went to the command prompt and typed net start -pgsql8.0. I saw:

The PostgreSQL Database Server 8.0 service is starting.
The PostgreSQL Database Server 8.0 service could not be started.
The service did not report an error.
More help is available by typing NET HELPMSG 3534.

2. Went to the command prompt and typed psql -h 127.0.0.1 Postgres template1. I saw:

psql: could not connect to server: Connection refused (0x274D/10061)
Is the server running on host "127.0.0.1" and accepting TCP/IP connections on port 5432?

If the db system has crashed, how do I fix it? Any help is really appreciated. Thanks!

Tan Chen Yee

---(end of broadcast)---
TIP 6: explain analyze is your friend
Re: [GENERAL] [SQL] dynamically loaded functions
On Wed, 2005-07-13 at 10:24 -0700, TJ O'Donnell wrote: > > > > It sounds like you need to link gnova.so against the other shared > > objects so the runtime linker can find them. For examples, see the > > Makefiles used by contributed modules like dblink, xml2, and a few > > others that link against external libraries. > > > That approach is working, but only after much troubles. > I have several 3rd party libraries, some of which call each other. > It appears I need to carefully order the libraries in the > link step, or some needed functions do not get included in the > final .so. > > This problem used to crop up all the time way back 20 years ago > with linkers. I thought all this name resolution stuff was all worked > out with modern linkers. I'm "linking" with (linux redhat) > gcc -shared -o my.so my.o my2.o their.a their2.a their3.a > When function x() in their2.a calls something in their.a > (or is it the other way around?) > I get an error from postmaster that my.so cannot be loaded because > function x cannot be found. > If I reverse their.a their2.a in the link command, all is well. > Note: I never use, nor even knew about the exitence of function x() - "they" > do. > > Any help on how to make this more pain-free? > > TJ > I don't know much about pgsql but I do know that when linking if module1 needs something in module2 and module2 needs something in module1 then you can put the the same module in the library link (i.e. '-l') list more than once. > > Michael Fuhr wrote: > > On Mon, Jul 11, 2005 at 08:16:17PM -0700, TJ O'Donnell wrote: > > > >>CREATE or REPLACE FUNCTION cansmiles(varchar) RETURNS varchar > >> AS 'gnova', 'oe_cansmiles' LANGUAGE 'c' IMMUTABLE STRICT; > >>requires preloading of oe_chem.so to work. > >> > >>Is there any way I can associate oe_cansmiles with 2 .so's without > >>preloading? > > > > > > It sounds like you need to link gnova.so against the other shared > > objects so the runtime linker can find them. For examples, see the > > Makefiles used by contributed modules like dblink, xml2, and a few > > others that link against external libraries. > > > ---(end of broadcast)--- TIP 5: don't forget to increase your free space map settings
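Two common ways to deal with circular references between static archives on a single-pass linker such as GNU ld (library names follow TJ's example and are placeholders):

  # list the mutually dependent archives more than once
  gcc -shared -o my.so my.o my2.o their.a their2.a their.a their2.a their3.a

  # or let the linker iterate over a group until no new symbols resolve
  gcc -shared -o my.so my.o my2.o -Wl,--start-group their.a their2.a their3.a -Wl,--end-group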
Re: [GENERAL] [HACKERS] No user being created during initdb for OS X
I am doing it right /usr/local/pgsql/bin/initdb -D /usr/local/pgsql/data Thomas F. O'Connell wrote: Do you already have a postgres user on the system? And do you mean that initdb is not creating a postgres user in the database? Presumably, if run as the user that will own the server process, it should create that user in the database as well. http://www.postgresql.org/docs/8.0/static/app-initdb.html -- Thomas F. O'Connell Co-Founder, Information Architect Sitening, LLC Strategic Open Source: Open Your i™ http://www.sitening.com/ 110 30th Avenue North, Suite 6 Nashville, TN 37203-6320 615-260-0005 On Jul 20, 2005, at 8:35 PM, Jamie Deppeler wrote: Hi, Having an issue with 10.4.2 at the moment when i initialize the database no user is being created. I have setup Postgresql 8.0.3 on 10.3 without any issue, has anyone come accross this problem before? ---(end of broadcast)--- TIP 4: Have you searched our list archives? http://archives.postgresql.org ---(end of broadcast)--- TIP 4: Have you searched our list archives? http://archives.postgresql.org