Re: [GENERAL] pg 8.1.3 on AIX
Tom + Seneca, Thank you for your sage advice - hopefully I will have enough Linux + compilation experience to bridge the AIX gap - no doubt if I run into big problems, you'll hear about it soon! :) Cheers, Gavin, ---(end of broadcast)--- TIP 6: explain analyze is your friend
Re: [GENERAL] Updating database structure
Quoting Janning Vygen [EMAIL PROTECTED]:

On Wednesday, 22 March 2006 at 20:40, Luuk Jansen wrote: I have a problem with finding a way to update a database structure. This might be a very simple problem, I just cannot find the info. I am looking at updating the structure of my database. I put an application on my production server some time ago and further developed the application to a point where I have to merge the changes. How can I update the structure on the production server to reflect the database on my test machine in an easy way, with preservation of the data on the production server? There are no major changes in the field types etc., mainly additions/deletions of fields in the tables.

Usually you write some update SQL scripts which can be deployed to the production database. You have a base schema in sql/schema.sql, and further changes are placed inside sql/update_0001.sql with content like BEGIN; ALTER TABLE ... COMMIT; Now I use a script called update.pl and a version table inside my database to register which update scripts are already deployed. The script checks which changes have to be deployed and installs them on the production database. If you don't have those update SQL scripts it can become quite difficult. You have to compare the original schema and the current schema. To get a schema, look at pg_dump's '-s' option. Compare those schemas and write your update SQL scripts. Kind regards, Janning

Thanks Janning, I have something like that in mind, but is there an easy way to create that script? Is there an easy way to let Postgres keep track of the changes? I currently use phpPgAdmin to make changes to the database, so it would be very handy if Postgres could log a change made to a table somewhere, after which I gather all the rows with changes and put them in an SQL query. Can a rule be made for that, or do you track it manually when you say: and further changes are placed inside sql/update_0001.sql with content like BEGIN; ALTER TABLE ...
COMMIT; Thanks, Luuk
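Janning's update-script pattern can be sketched in plain SQL; the table and column names below are illustrative, not taken from his actual update.pl setup:

```sql
-- One-time setup: a version table recording which scripts are deployed.
CREATE TABLE schema_version (
    script     text PRIMARY KEY,
    applied_at timestamp NOT NULL DEFAULT now()
);

-- sql/update_0001.sql: each update runs in one transaction and
-- registers itself, so a driver script can skip already-applied files.
BEGIN;
ALTER TABLE customer ADD COLUMN phone text;
INSERT INTO schema_version (script) VALUES ('update_0001');
COMMIT;
```

An update.pl-style driver then only has to compare the files in sql/ against the rows in schema_version and apply the missing ones in order.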
Re: [GENERAL] Updating database structure
On 23.03.2006, at 9:50, [EMAIL PROTECTED] wrote: I currently use phpPgAdmin to make changes to the database, so it would be very handy if Postgres could log a change made to a table somewhere, after which I gather all the rows with changes and put them in an SQL query.

I mostly use the command line tools or a graphical tool to make my own SQL calls for every schema change I make on the development database. Then all these changes go into a script in my application (it's not actually a script, but similar). The database has a version tag in one table. The application has a bunch of these scripts to convert from one version to another. It knows the sequence in which they have to be applied by looking at the version numbers.

So, let's say the production database is on version 1.1 and you have made several test versions connecting to a test server; the test/dev db is on version 1.4 and now I want to put my new application online. I simply shut down all running instances of my app, replace the application binaries, and start the application again (one single instance for now); the app checks the DB version, sees that changes have to be made, and applies all scripts necessary for converting the db from version 1.1 to 1.4 automatically. When this is done, I start the other instances and everything is fine.

For the kids: Don't do this at home without a fresh backup and without a lot of testing of this process!

cug -- PharmaLine, Essen, GERMANY Software and Database Development
[GENERAL] Some pgbench results
I was doing some load testing on a server, and decided to test it with different file systems to see how it reacts to load/speed. I tested xfs, jfs and ext3. The machine runs FC4 with the latest 2.6.15 kernel from Fedora.

Hardware: Dual Opteron 246, 4GB RAM, Adaptec 2230 with battery backup, 2 10K SCSI disks in RAID1 for OS and WAL (with its own partition on ext3), 6 10K SCSI disks in RAID10 (RAID1 in hw, RAID0 on top of that in sw). Postgres config tweaked as per the performance guide.

Initialized the data with: pgbench -i -s 100
Test runs: pgbench -s 100 -t 1 -c 20

I did 20 runs, and removed the first 3 runs from each sample to account for stabilization. Here are the results in tps, without connection establishing:

FS:    JFS  XFS  EXT3
Avg:   462  425  319
Stdev: 104  74   106

Interestingly, the first 3 samples I removed had a MUCH higher tps count. Up to 900+.

Bye, Guy. -- Family management on rails: http://www.famundo.com - coming soon! My development related blog: http://devblog.famundo.com
Re: [GENERAL] partial resultset in java
I have not installed pgsql's JDBC driver, but I think the following link may help. Regards, William ZHANG
Re: [GENERAL] partial resultset in java
Sorry. Forgot the link: http://www.onjava.com/pub/a/onjava/2004/06/16/dbcancel.html?page=2 William ZHANG [EMAIL PROTECTED] wrote: I have not installed pgsql's JDBC driver, but I think the following link may help. Regards, William ZHANG
Re: [GENERAL] Some pgbench results
Just Someone wrote: 2 10K SCSI disks in RAID1 for OS and WAL (with its own partition on ext3), You'll want the WAL on its own spindle. IIRC a separate partition on a shared disc won't give you much benefit. The idea is to keep the disc's head from moving away for other tasks. Or so they say. regards, bkw
Re: [GENERAL] Updating database structure
Guido Neitzer wrote: I mostly use the command line tools or a graphical tool to make my own sql calls for every schema change I make on the development database. Then all these changes come to a script in my application (it's not actually a script but similar). The database has a version tag in one table. The application has a bunch of these scripts to convert from one version to another. It knows the sequence in how they have to be applied by looking at the version numbers. Looks like it could be useful to make the database 'log' the DDL statements and having the ability to export those to a script starting from a certain version. It would probably still need user intervention, as sometimes experiments and mistakes require fixing things - which would result in bogus or non-optimal DDL statements in the log. There's also sometimes the need to update data between statements, for example when adding a new NOT NULL column to a table. Not sure what to do in that case. Automating that would require some server side 'intelligence'... Regards, -- Alban Hertroys [EMAIL PROTECTED] magproductions b.v. T: ++31(0)534346874 F: ++31(0)534346876 I: www.magproductions.nl A: Postbus 416 7500 AK Enschede // Integrate Your World //
[GENERAL] Logging of sql statements?
Is it possible to log the actual statement that the server runs? At the moment, the statement that is logged, is logged using ? for parameters; I would like to log the statement after the parameters have been substituted, isn't this possible in 8.1.x? (used to work in 7.x.x) Regards, BTJ -- --- Bjørn T Johansen [EMAIL PROTECTED] --- Someone wrote: I understand that if you play a Windows CD backwards you hear strange Satanic messages To which someone replied: It's even worse than that; play it forwards and it installs Windows ---
Re: [GENERAL] Old pg_xlog files
Thanks. I'll see if I can get a scheduled maintenance check on this. The disks seem to be good as they are a 0+1 RAID and all internal tests show them to be in good health along with the controller. Would memory be a good suspect? On 3/22/06, Tom Lane [EMAIL PROTECTED] wrote: Tass Chapman [EMAIL PROTECTED] writes: Any suggestions on what I can look at to see why the checkpoint was so far outstanding and why it hung on in a Zombie state when we tried a soft boot? If it was in a funny process state, I'd guess that there was some hardware problem that caused a disk write request to hang up. You can get stuck in uninterruptible disk wait for quite a long time if the disk doesn't respond. regards, tom lane
Re: [GENERAL] Logging of sql statements?
On 23.03.2006 at 14:07:11 +0100, Bjørn T Johansen wrote the following: Is it possible to log the actual statement that the server runs? At the moment, the statement that is logged is logged using ? for parameters; I would like to log the statement after the parameters have been substituted. Isn't this possible in 8.1.x? (used to work in 7.x.x) Yes, this is possible. Set in your postgresql.conf: log_statement = all Since 8.0 or 8.1, you can see the parameters of prepared statements there. HTH, Andreas -- Andreas Kretschmer (Kontakt: siehe Header) Heynitz: 035242/47215, D1: 0160/7141639 GnuPG-ID 0x3FFF606C http://wwwkeys.de.pgp.net ===Schollglas Unternehmensgruppe===
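For reference, the postgresql.conf fragment in question (log_statement syntax as of the 8.x releases; adding log_min_duration_statement is optional but often useful alongside it):

```
# postgresql.conf
log_statement = 'all'             # log every statement the server runs
log_min_duration_statement = 0    # also log each statement's duration
```

A reload (pg_ctl reload) is enough to pick up the change; a full restart is not needed for these settings.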
Re: [GENERAL] delete my email from the list
[GENERAL] ERROR: could not convert UTF8 character to ISO8859-1
I have seen several people with this same error. I get mine when trying to back up using pg_dump, causing my backups to fail. I am using PostgreSQL 8.1 on Windows XP. The databases are in UTF8. I get the same error from the command line and using pgadmin3. Has anyone found a solution for this? Looking around online, I see that this supposedly codes for a lowercase a with a circumflex (which I don't have in the table that triggers the error), but that this may be the first part of a three-byte sequence that codes for something else. Does anyone know of a way to pinpoint the character in the database? I am not sure how to search for a raw Unicode character through psql or pgadmin... Thanks for any help! JP
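One way to narrow down the offending row, assuming the problem character is non-ASCII (the table and column names below are hypothetical; run the query against each text column of the suspect table):

```sql
-- Rows containing any non-ASCII character; one of these holds the
-- UTF8 sequence that has no ISO8859-1 equivalent.
SELECT id, some_text_column
FROM   some_table
WHERE  some_text_column ~ '[^[:ascii:]]';
```

Dumping one table at a time with pg_dump -t can also bisect which table triggers the conversion error.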
Re: [GENERAL] ERROR: could not convert UTF8 character to ISO8859-1
To answer my own mail: I shut down the 8.1 database and started up the 8.0 PostgreSQL install that is still on my computer, and the backups work fine. I do get the message about how 8.1 handles sorting of Unicode characters more correctly, but looking at my backups, I can see that the full backup failed around the time that I switched to the newer version. Is there a fix for this? Is this a bug? Is there a workaround? Thanks for any help. JP JP Glutting wrote: I have seen several people with this same error. I get mine when trying to back up using pg_dump, causing my backups to fail. I am using PostgreSQL 8.1 on Windows XP. The databases are in UTF8. I get the same error from the command line and using pgadmin3. Has anyone found a solution for this? Looking around online, I see that this supposedly codes for a lowercase a with a circumflex (which I don't have in the table that triggers the error), but that this may be the first part of a three-byte sequence that codes for something else. Does anyone know of a way to pinpoint the character in the database? I am not sure how to search for a raw Unicode character through psql or pgadmin... Thanks for any help! JP -- Oficina de Qualitat Tel: 973 703 103 Universitat de Lleida Fax: 973 702 036 Pl. Víctor Siurana, 1 25003 Lleida
[GENERAL] COPY command documentation
I have driven myself to distraction for the last 30 minutes trying to get COPY to work on Windows XP. The Unix style c:/afolder/afile instead of c:\afolder\afile was a desperation attempt. I had tried all sorts of double slashes \\, putting the whole path in quotes, basically all sorts of foolishness. I would suggest that there should be Windows example(s) in the documents as well as *NIX style one(s) where necessary. Did I miss this somewhere, or should I put a comment on the doc, or what can I do to help the next Windows user? Oisin P.S. I just discovered that the comments from 8.0 had the answer I was looking for, but these comments are not in the 8.1 docs. Should the comments be rolled forward as new versions are created? Or, if valid, comments added to the docs themselves? http://www.postgresql.org/docs/8.1/interactive/sql-copy.html http://www.postgresql.org/docs/8.0/interactive/sql-copy.html Now happily using COPY, Oisin
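For the archives, both of these path styles work in a server-side COPY on Windows; the backslashes must be doubled because \ acts as an escape character in string literals (the file and table names are made up):

```sql
COPY mytable FROM 'C:/afolder/afile.csv' WITH CSV;
COPY mytable FROM 'C:\\afolder\\afile.csv' WITH CSV;
```

Note also that the path is read by the server process, so it must be visible from the machine the server runs on, not the client.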
[GENERAL] Interval regression test failing with -march=pentium4
I'm compiling postgresql 8.1.3 with these gcc flags: -march=pentium4 -O3. The compilation ends correctly, but the interval regression test fails. This is the diff file:

*** ./expected/interval.out  Thu Mar 23 15:38:42 2006
--- ./results/interval.out   Thu Mar 23 16:16:06 2006
***************
*** 218,224 ****
  select avg(f1) from interval_tbl;
            avg
  -------------------------------------------------
! @ 4 years 1 mon 9 days 28 hours 18 mins 23 secs
  (1 row)
  -- test long interval input
--- 218,224 ----
  select avg(f1) from interval_tbl;
            avg
  -------------------------------------------------
! @ 4 years 1 mon 10 days 4 hours 18 mins 23 secs
  (1 row)
  -- test long interval input

If I compile with -march=i586, the regression test reports no problems. Any tips? Thank you, Doct. Eng. Denis Gasparin --- Edistar Srl
Re: [GENERAL] ERROR: could not convert UTF8 character to ISO8859-1
More information. I backed up a database (one that failed to back up in 8.1) from the old 8.0 system, and re-imported it into the 8.0 system without any problem. Then I imported it into the 8.1 system without errors, and tried to do a backup again. It failed. So there seems to be either a flaw in the 8.1 system, or perhaps a problem with the way 8.0 backs up its data which only shows up when imported into the newer system. Right now I need the data I have in the 8.1 system, and I can't back it up except by shutting down Postgres and copying the whole /data directory. I think I am going to try to go back to 8.0. Is it a very bad idea to shut down Postgres and try to replace the data directory for 8.0 with the data directory for 8.1? Any help or suggestions appreciated. JP
Re: [GENERAL] Logging of sql statements?
On Thu, 23 Mar 2006 14:25:52 +0100, A. Kretschmer [EMAIL PROTECTED] wrote: On 23.03.2006 at 14:07:11 +0100, Bjørn T Johansen wrote the following: Is it possible to log the actual statement that the server runs? At the moment, the statement that is logged is logged using ? for parameters; I would like to log the statement after the parameters have been substituted. Isn't this possible in 8.1.x? (used to work in 7.x.x) Yes, this is possible. Set in your postgresql.conf: log_statement = all Since 8.0 or 8.1, you can see the parameters of prepared statements there. HTH, Andreas This is what I am already using and it doesn't work... BTJ
Re: [GENERAL] Logging of sql statements?
On Thu, 2006-03-23 at 10:09, Bjørn T Johansen wrote: On Thu, 23 Mar 2006 14:25:52 +0100 A. Kretschmer [EMAIL PROTECTED] wrote: am 23.03.2006, um 14:07:11 +0100 mailte Bjørn T Johansen folgendes: Is it possible to log the actual statement that the server runs? At the moment, the statement that is logged, is logged using ? for parameters; I would like to log the statement after the parameters have been substituted, isn't this possible in 8.1.x? (used to work in 7.x.x) Yes, this is possible. Set in your postgresql.conf: log_statement = all Since 8.0 oder 8.1, there you can see parameters in prepared statements. HTH, Andreas This is what I am already using and it doesn't work... I was under the impression that back in the 7.x.y days, prepared statements got handled differently and could be logged with their parameters, but in 8.x.y, the way prepared statements are handled made this impossible. It's a memory from at least 6 months ago, so I could be messing it up a bit. Anyone with a better memory, please chime in.
Re: [GENERAL] Logging of sql statements?
Bjørn T Johansen [EMAIL PROTECTED] wrote: Set in your postgresql.conf: log_statement = all Since 8.0 or 8.1, you can see the parameters of prepared statements there. HTH, Andreas This is what I am already using and it doesn't work... It works for me ;-)

test=# prepare my_foo(int) as select * from foo where id = $1;
PREPARE
test=# execute my_foo(1);
 id | x
----+----
  1 | 10
(1 row)

In my log:

LOG:  statement: prepare my_foo(int) as select * from foo where id = $1;
LOG:  statement: execute my_foo(1);  [client PREPARE: prepare my_foo(int) as select * from foo where id = $1;]

test=# select version();
 PostgreSQL 8.1.2 on i486-pc-linux-gnu, compiled by GCC cc (GCC) 4.0.3 20060104 (prerelease) (Debian 4.0.2-6)
(1 row)

HTH, Andreas -- Really, I'm not out to destroy Microsoft. That will just be a completely unintentional side effect. (Linus Torvalds) If I was god, I would recompile penguin with --enable-fly. (unknown) Kaufbach, Saxony, Germany, Europe. N 51.05082°, E 13.56889°
Re: [GENERAL] Updating database structure
Hello, I accidentally came across this post. I didn't follow it, so I don't know whether my posting is on topic or not. I've just uploaded a project at SourceForge.net on the topic of PostgreSQL database schema upgrades, because I needed to find out the differences between current and new schemas. The project is called apgdiff (Another PostgreSQL Diff Tool) and can be found at apgdiff.sourceforge.net. It is still in beta but might be useful. Sorry if this post is not on topic. Miroslav Šulc -- StartNet s.r.o., Schodová 309/10, Praha 5, 150 00, Česká republika, http://www.startnet.cz
[GENERAL] SELECT beer FROM pub WHERE location = 'Europe'
I'm in Brussels until Wednesday; should anyone be interested in grabbing a beer or 3 somewhere, drop me an email. -- Jim C. Nasby, Sr. Engineering Consultant [EMAIL PROTECTED] Pervasive Software http://pervasive.com work: 512-231-6117 vcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461
Re: [GENERAL] SELECT beer FROM pub WHERE location = 'Europe'
On 23.03.2006 at 10:37:26 -0600, Jim C. Nasby wrote the following: I'm in Brussels until Wednesday; should anyone be interested in grabbing a beer or 3 somewhere drop me an email. Nice idea ;-), but too far for me. Regards, Andreas
Re: [GENERAL] Advantages of PostgreSQL over MySQL 5.0
Jimbo1 [EMAIL PROTECTED] writes: Not yet asked them, but will pop the question over the next week. ;o) Careful about popping the question... While good marriages have come from that, so also have some bad ones :-). -- (reverse (concatenate 'string gro.gultn @ enworbbc)) http://www.ntlug.org/~cbbrowne/lsf.html One good turn gets most of the blankets.
Re: [GENERAL] partial resultset in java
Include a LIMIT clause in your query. Michael Schmidt
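A minimal sketch of paging with LIMIT/OFFSET (the table name is illustrative; the ORDER BY is needed so successive pages don't shuffle rows between requests):

```sql
SELECT * FROM items ORDER BY id LIMIT 50 OFFSET 0;   -- rows 1-50
SELECT * FROM items ORDER BY id LIMIT 50 OFFSET 50;  -- rows 51-100
```

On the JDBC side, setting a fetch size on the statement (with autocommit off) can also make the PostgreSQL driver fetch rows incrementally instead of loading the whole result set.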
Re: [GENERAL] How to release locks
On Mar 22, 2006, at 7:14 PM, Andrus wrote: Do you mean that the statement hadn't been fully transmitted yet? Yes. Sending 2 MB takes most of the time, so the client is terminated in this stage. If so, the backend would have just been waiting for the rest of the statement to arrive. Perhaps you're looking at some sort of network bug that caused the backend not to be told that the connection was lost. If so, why does pgAdmin's Cancel command not terminate the process? If the process is waiting for data, a Cancel signal sent through pgAdmin should terminate the process immediately. If pgAdmin's cancel is just dropping the connection, the server might take some time to notice it, especially if it's in the process of running a query and doesn't have reason to talk to pgAdmin. -- Jim C. Nasby, Database Architect [EMAIL PROTECTED] Give your computer some brain candy! www.distributed.net Team #1828
Re: [GENERAL] how to update structural data changes between PostgreSQL
On Mar 22, 2006, at 9:21 PM, Jeff Amiel wrote: We have a fairly 'good' process at our shop that we follow that works for us. First we do a schema comparison between our prod and devl/test environments using the EMS PostgreSQL database comparer tool... Another way to handle this is to keep your database code in a version control system and tag each release. That way you can generate a diff between two releases to see what's changed. -- Jim C. Nasby, Sr. Engineering Consultant, Pervasive Software
Re: [GENERAL] Enforcing serial uniqueness?
On Mar 23, 2006, at 3:33 AM, Steven Brown wrote: -- On INSERT, fill id from the sequence - creator has UPDATE permission. -- Block attempts to force the id. CREATE OR REPLACE FUNCTION foo_id_insert_procedure() RETURNS trigger SECURITY DEFINER AS ' BEGIN IF NEW.id != 0 THEN RAISE EXCEPTION ''Setting id to a non-default is not allowed''; ELSE NEW.id := nextval(''foo_id_seq''); BTW, with some clever use of TG_RELNAME you could probably make that function generic, so that you could use it with any table; ie: NEW.id := nextval( TG_RELNAME || ''_id_seq''); Same holds true for the update function. You might also want to create a function that does all the legwork of defining the sequence and triggers for you. -- Jim C. Nasby, Sr. Engineering Consultant, Pervasive Software
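Jim's TG_RELNAME suggestion might look like this as a complete function; this is a sketch, assuming the serial column is always named id and the sequence follows the <table>_id_seq convention:

```sql
CREATE OR REPLACE FUNCTION enforce_serial_id() RETURNS trigger
SECURITY DEFINER AS $$
BEGIN
    IF NEW.id IS NOT NULL AND NEW.id != 0 THEN
        RAISE EXCEPTION 'Setting id to a non-default is not allowed';
    END IF;
    -- Derive the sequence name from the table the trigger fired on.
    NEW.id := nextval(TG_RELNAME || '_id_seq');
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

-- The same function can then be reused unchanged on any conforming table:
CREATE TRIGGER foo_id_insert BEFORE INSERT ON foo
    FOR EACH ROW EXECUTE PROCEDURE enforce_serial_id();
```

Because the sequence name is computed at runtime from TG_RELNAME, one function serves every table, instead of one copy per table.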
Re: [GENERAL] Some pgbench results
On Mar 23, 2006, at 11:32 AM, Bernhard Weisshuhn wrote: Just Someone wrote: 2 10K SCSI disks in RAID1 for OS and WAL (with its own partition on ext3), You'll want the WAL on its own spindle. IIRC a separate partition on a shared disc won't give you much benefit. The idea is to keep the disc's head from moving away for other tasks. Or so they say. Actually, the OS partitions are normally quiet enough that it won't make a huge difference, unless you're really hammering the database all the time. -- Jim C. Nasby, Sr. Engineering Consultant, Pervasive Software
Re: [GENERAL] Some pgbench results
On Mar 23, 2006, at 11:01 AM, Just Someone wrote: I was doing some load testing on a server, and decided to test it with different file systems to see how it reacts to load/speed. I tested xfs, jfs and ext3. The machine runs FC4 with the latest 2.6.15 kernel from Fedora. You should also try testing ext3 with data=writeback, on both partitions. People have found it makes a big difference in performance. -- Jim C. Nasby, Sr. Engineering Consultant, Pervasive Software
Re: [GENERAL] COPY command documentation
On Mar 23, 2006, at 4:08 PM, Oisin Glynn wrote: I just discovered that the comments from 8.0 had the answer I was looking for but these comments are not in the 8.1 docs. Should the comments be rolled forward as new versions are created? Or if valid comments added to the docs themselves? http://www.postgresql.org/docs/8.1/interactive/sql-copy.html http://www.postgresql.org/docs/8.0/interactive/sql-copy.html No, comments don't roll forward. Generally someone will try and pull out comments that should be included in the docs and add them, but this comment barely touches on the backslash issue, so that's probably why it wasn't brought in. Care to submit a documentation patch? -- Jim C. Nasby, Sr. Engineering Consultant, Pervasive Software
Re: [GENERAL] Updating database structure
On Mar 23, 2006, at 9:50 AM, [EMAIL PROTECTED] wrote: I currently use phpPgAdmin to make changes to the database, so it would be very handy if Postgres could log a change made to a table somewhere, after which I gather all the rows with changes and put them in an SQL query. My suggestion: don't do that. What I do is keep the files used to create a database from scratch under version control (such as Subversion). Then, depending on how active your development is, you can either commit scripts to make schema changes every time they happen, or you can do a diff between two releases of your application (you'll want to tag each release) and see what's changed. By changing things on-the-fly, you end up with no history of what's changed, when it changed, and why it was changed (commit logs). You may not think having such information is important if you're the only one working on something, but trust me, having that info available has saved my bacon many times. -- Jim C. Nasby, Sr. Engineering Consultant, Pervasive Software
RES: [GENERAL] Advantages of PostgreSQL over MySQL 5.0
Hi James: I have been a MySQL user for 5 years, and at this time I am trying to move away from MySQL. Why am I leaving MySQL? Because my PHP-based systems require some features that MySQL is only implementing in version 5.0, like stored procedures and functions, triggers, transactions, views, and some other features. Do you know how to lock a counter table so other users have no access to it in MySQL? The table has two fields, identerprise and counter. In MySQL, to lock for writing, you have to lock the entire table:

LOCK TABLES co_nro_notas WRITE;
SELECT counter FROM co_nro_notas WHERE identerprise = 1;
UPDATE co_nro_notas SET counter = counter + 1 WHERE identerprise = 1;
UNLOCK TABLES;

This is a problem when you have two or more enterprises running in the same database, each with more than 10 users. In PostgreSQL each enterprise has its own record, and each one can lock just its record, using a SELECT with FOR UPDATE. Try PostgreSQL, it is better than MySQL. I have decided to migrate all my systems to PostgreSQL. PS: I tried PostgreSQL in 2002, but at that time PostgreSQL did not have a Windows version; this is the reason I used MySQL, there was no other reason. Alejandro M.S., Porto Alegre, Brasil

-----Original Message----- From: [EMAIL PROTECTED] on behalf of Jimbo1. Sent: Wednesday, 22 March 2006 08:06. To: pgsql-general@postgresql.org. Subject: [GENERAL] Advantages of PostgreSQL over MySQL 5.0

Hello there, I'm a freelance Oracle Developer by trade (can almost hear the boos now ;o)), and am looking into developing my own Snowboarding-related website over the next few years. Anyway, I'm making some decisions now about the site architecture, and the database I'm going to need is obviously included. If my site works out, I'm expecting reasonably heavy traffic, so want a database that I'm confident can cope with it. It is out of the question for me to use Oracle, although I am a (biased) 'fan' of that RDBMS.
I definitely need to go for a --cheaper route, and to that end I'm looking at either MySQL --or PostgreSQL. -- --Regarding MySQL, I've been put off by Oracle's recent --purchase of InnoDB and realise this could badly impact the --latest version of the MySQL database. I can almost hear --Larry Ellison's laughter from here (allegedly)! I've also --been put off by the heavy marketing propaganda on the MySQL website. -- --Recently, I've been taking a look at PostgreSQL, and am very --impressed by what I've read, although I've not yet --investigated the database first-hand. To cut to the chase, I --would be interested in anybody's feedback on the advantages --that PostgreSQL has over MySQL. -- --Also, I've recently read the Inside MySQL 5.0 (marketing --propaganda) document, and it makes the following claim: -- --With MySQL, customers across all industries are finding --they can easily handle nearly every type of database --workload, with performance and scalability outpacing every --other open source rival. As Los Alamos lab (who uses MySQL --to manage their terabyte data warehouse) said, We chose --MySQL over PostgreSQL primarily because it scales better and --has embedded replication... -- --If any PostgreSQL devotees on this group can comment on the --above and its accuracy/inaccuracy, I'd really appreciate it. -- --Thanks in advance. -- --James -- -- -(end of --broadcast)--- --TIP 6: explain analyze is your friend -- -- --No virus found in this incoming message. --Checked by AVG Free Edition. --Version: 7.1.385 / Virus Database: 268.2.6/287 - Release --Date: 21/3/2006
---(end of broadcast)--- TIP 6: explain analyze is your friend
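The per-enterprise row locking Alejandro describes maps onto PostgreSQL roughly like this (table and column names taken from his MySQL example; a minimal sketch, not a full counter implementation):

```sql
BEGIN;
-- Lock only this enterprise's counter row; rows for other
-- enterprises remain available for concurrent updates.
SELECT counter FROM co_nro_notas WHERE identerprise = 1 FOR UPDATE;
UPDATE co_nro_notas SET counter = counter + 1 WHERE identerprise = 1;
COMMIT;
```

The row lock is released at COMMIT, so the critical section lasts only as long as the transaction.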
Re: [GENERAL] Advantages of PostgreSQL over MySQL 5.0
As I'm sure you've asked the same question of the MySQL folks, can you tell us what they've said about us? I guess it's not just idle curiosity (90% though), but it might give us some pointers about how to improve either our marketing, implementation or both. Not yet asked them, but will pop the question over the next week. ;o) ---(end of broadcast)--- TIP 9: In versions below 8.0, the planner will ignore your desire to choose an index scan if your joining column's datatypes do not match
Re: [GENERAL] Advantages of PostgreSQL over MySQL 5.0
Hello there, I'm a freelance Oracle Developer by trade (can almost hear the boos now ;o)), and am looking into developing my own Snowboarding-related website over the next few years. Anyway, I'm making some decisions now about the site architecture, and the database I'm going to need is obviously included. If my site works out, I'm expecting reasonably heavy traffic, so want a database that I'm confident can cope with it. It is out of the question for me to use Oracle, although I am a (biased) 'fan' of that RDBMS. I definitely need to go for a cheaper route, and to that end I'm looking at either MySQL or PostgreSQL. One aspect you might consider is that Postgres has an OGC SFS compliant extension, PostGIS, which is comparable to (or better than :-) Oracle Spatial. MySQL is in the process of developing such a capability, but the current implementation is incomplete and will frequently return wrong answers. Not really buggy (the reasons are clearly documented), but the stage of development is such that I don't think it should have been released as other than alpha software. The ease of integrating web map server applications using data from Postgres/PostGIS tables may be useful for such a site, where maps, road information, photos accessed by clicking on a map, etc might be useful. Applications such as mapserver can treat PostGIS enabled Postgres tables as map layers, and allow spatial queries etc to be carried out as well as normal non-spatial queries. Cheers, Brent Wood ---(end of broadcast)--- TIP 3: Have you checked our extensive FAQ? http://www.postgresql.org/docs/faq
Re: [GENERAL] Advantages of PostgreSQL over MySQL 5.0
On Mar 22, 2006, at 10:08 PM, Scott Marlowe wrote: Now, I shouldn't be able to insert anything in b that's not referencing an entry in a. and I used innodb tables. and I used ansi SQL, and I got no errors. So how come my data's incoherent three seconds after creating the tables the way the spec says should work? Simple. MySQL only implements foreign keys if you do them this way: Good lord, is that still true in 5.0?? -- Jim C. Nasby, Sr. Engineering Consultant [EMAIL PROTECTED] Pervasive Software http://pervasive.com work: 512-231-6117 vcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461 ---(end of broadcast)--- TIP 6: explain analyze is your friend
Re: [GENERAL] Advantages of PostgreSQL over MySQL 5.0
On Thu, 2006-03-23 at 12:17, Jim Nasby wrote: On Mar 22, 2006, at 10:08 PM, Scott Marlowe wrote: Now, I shouldn't be able to insert anything in b that's not referencing an entry in a. and I used innodb tables. and I used ansi SQL, and I got no errors. So how come my data's incoherent three seconds after creating the tables the way the spec says should work? Simple. MySQL only implements foreign keys if you do them this way: Yep. I filed the bug report on it. http://bugs.mysql.com/bug.php?id=13301 ---(end of broadcast)--- TIP 1: if posting/reading through Usenet, please send an appropriate subscribe-nomail command to [EMAIL PROTECTED] so that your message can get through to the mailing list cleanly
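Since the example Scott refers to got clipped in quoting: the gotcha, as documented in the MySQL manual, is that MySQL parses but silently ignores column-level REFERENCES clauses; only a table-level FOREIGN KEY constraint is enforced. A sketch with illustrative table names:

```sql
-- Accepted by MySQL without error, but NO foreign key is created:
CREATE TABLE b (a_id INT REFERENCES a (id)) ENGINE = InnoDB;

-- The spelling MySQL actually enforces: a table-level constraint.
CREATE TABLE b2 (
    a_id INT,
    FOREIGN KEY (a_id) REFERENCES a (id)
) ENGINE = InnoDB;
```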
Re: [GENERAL] How to release locks
I find that the server process for my killed connection disappears from the Status pane only after a long time. If pgAdmin's cancel is just dropping the connection, the server might take some time to notice it, especially if it's in the process of running a query and doesn't have reason to talk to pgAdmin. I think pgAdmin sends the cancel command through the postgres server's main process, so the server receives it immediately and passes it immediately to the selected child process. I don't have an idea what pgAdmin's cancel command exactly does. After executing it, the processes always remain visible in the Status pane. I set log_statement = 'all' in the postgresql.conf file, but the cancel command did not write any entry to the log file. There is also a Terminate button in the Status panel, but it is always disabled; I haven't found a way to enable it. ---(end of broadcast)--- TIP 3: Have you checked our extensive FAQ? http://www.postgresql.org/docs/faq
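For the archives: the server-side way to cancel a backend's running query from SQL (available in the 8.x releases; 8.1 has no SQL-level terminate function, which may be why pgAdmin's Terminate button is disabled) looks roughly like this, with a hypothetical pid:

```sql
-- In 8.1, pg_stat_activity calls the backend pid "procpid".
SELECT procpid, usename, current_query FROM pg_stat_activity;

-- Ask backend 12345 to cancel whatever query it is running.
SELECT pg_cancel_backend(12345);
```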
Re: [GENERAL] Hi,
Hi Tom, I decided to test your theory that I had an old version of Postgres on my system when I installed version 8.1.3. By the way, the Linux install was a fresh one to start with. So this morning I first did a search on my system for all pg_dump files, and wrote the locations down. I then removed the entire file structure of postgresql-8.1.3 from my system. I then did a system search for pg_dump again to confirm that all files by the name of pg_dump were removed, which they were. I then re-installed PostgreSQL version 8.1.3. After completing, I did a system search for pg_dump again and found the files in the locations I expected. I then recreated my database and tested pg_dump. I got the same error: version mismatch, with the same version numbers as before. I think that an old version of pg_dump is bundled up with the install of version 8.1.3. How can I get the correct version of pg_dump? Or any of the other files that are not the correct version? Brian Tom Lane [EMAIL PROTECTED] 3/22/2006 2:38 PM Brian Kitzberger [EMAIL PROTECTED] writes: When I did I get pg_dump: server version: 8.1.3; pg_dump version: 7.4.8 Apparently you already had a 7.4.8 postgres installed on your machine. Most versions of Linux do have PG in them. You probably want to remove the 7.4.8 files to avoid confusion like this. The dump also failed with this error: Error message from server: ERROR: column datpath does not exist That's because that version of pg_dump is too old to understand the 8.1 server's catalog layout. There's a good reason why it refused to dump from a newer server by default; it knows it probably ain't gonna work. regards, tom lane ---(end of broadcast)--- TIP 6: explain analyze is your friend ---(end of broadcast)--- TIP 3: Have you checked our extensive FAQ? http://www.postgresql.org/docs/faq
Re: [GENERAL] Hi,
Brian Kitzberger wrote: Hi Tom, I decided to test your theory that I had an old version of Postgres on my system when I installed version 8.1.3. By the way, the Linux install was a fresh one to start with. So this morning I first did a search on my system for all pg_dump files, and wrote the locations down. I then removed the entire file structure of postgresql-8.1.3 from my system. I then did a system search for pg_dump again to confirm that all files by the name of pg_dump were removed, which they were. I then re-installed PostgreSQL version 8.1.3. After completing, I did a system search for pg_dump again and found the files in the locations I expected. I then recreated my database and tested pg_dump. I got the same error: version mismatch with the same version numbers as before. I think that an old version of pg_dump is bundled up with the install of version 8.1.3. How can I get the correct version of pg_dump? Or any of the other files that are not the correct version? What is the result of the following: pg_dump --version pg_dumpall --version psql --version which pg_dump which pg_dumpall which psql What method did you use to search for files? Cheers, Steve ---(end of broadcast)--- TIP 3: Have you checked our extensive FAQ? http://www.postgresql.org/docs/faq
Re: [GENERAL] version problem with pg_dump
Hi Steve, pg_dump --version returned 7.4.8 pg_dumpall --version returned 7.4.8 psql --version returned 7.4.8 which pg_dump returned /usr/bin/pg_dump which pg_dumpall returned /usr/bin/pg_dump which psql returned /usr/bin/psql To find the files I used, from the root: find . -name pg_dump I have never installed any other version than 8.1.3. Brian Steve Crawford [EMAIL PROTECTED] 3/23/2006 11:29 AM Brian Kitzberger wrote: Hi Tom, I decided to test your theory that I had an old version of Postgres on my system when I installed version 8.1.3. By the way, the Linux install was a fresh one to start with. So this morning I first did a search on my system for all pg_dump files, and wrote the locations down. I then removed the entire file structure of postgresql-8.1.3 from my system. I then did a system search for pg_dump again to confirm that all files by the name of pg_dump were removed, which they were. I then re-installed PostgreSQL version 8.1.3. After completing, I did a system search for pg_dump again and found the files in the locations I expected. I then recreated my database and tested pg_dump. I got the same error: version mismatch with the same version numbers as before. I think that an old version of pg_dump is bundled up with the install of version 8.1.3. How can I get the correct version of pg_dump? Or any of the other files that are not the correct version? What is the result of the following: pg_dump --version pg_dumpall --version psql --version which pg_dump which pg_dumpall which psql What method did you use to search for files? Cheers, Steve ---(end of broadcast)--- TIP 3: Have you checked our extensive FAQ? http://www.postgresql.org/docs/faq ---(end of broadcast)--- TIP 4: Have you searched our list archives? http://archives.postgresql.org
Re: [GENERAL] Some pgbench results
Jim, I did another test with ext3 using data=writeback, and indeed it's much better: Avg: 429.87 Stdev: 77 A tiny bit faster than xfs and a bit slower than jfs. Still, very much improved. Bye, Guy. On 3/23/06, Jim Nasby [EMAIL PROTECTED] wrote: On Mar 23, 2006, at 11:32 AM, Bernhard Weisshuhn wrote: Just Someone wrote: 2 10K SCSI disks in RAID1 for OS and WAL (with its own partition on ext3), You'll want the WAL on its own spindle. IIRC a separate partition on a shared disc won't give you much benefit. The idea is to keep the disc's head from moving away for other tasks. Or so they say. Actually, the OS partitions are normally quiet enough that it won't make a huge difference, unless you're really hammering the database all the time. -- Jim C. Nasby, Sr. Engineering Consultant [EMAIL PROTECTED] Pervasive Software http://pervasive.com work: 512-231-6117 vcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461 ---(end of broadcast)--- TIP 4: Have you searched our list archives? http://archives.postgresql.org -- Family management on rails: http://www.famundo.com - coming soon! My development-related blog: http://devblog.famundo.com ---(end of broadcast)--- TIP 4: Have you searched our list archives? http://archives.postgresql.org
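For anyone wanting to reproduce the data=writeback result: it's a mount option, so the change goes in the mount command or /etc/fstab. A sketch (the device and mount point are placeholders for wherever your data directory lives):

```
# /etc/fstab entry: metadata-only journaling for the PG data partition
/dev/sdb1  /var/lib/pgsql  ext3  defaults,data=writeback  0  2
```

data=writeback journals only metadata, which is why it's faster; the usual argument on this list is that PostgreSQL's own WAL provides the crash safety for the data files.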
Re: [GENERAL] version problem with pg_dump
Brian Kitzberger [EMAIL PROTECTED] writes: which pg_dump returned /usr/bin/pg_dump which pg_dumpall returned /usr/bin/pg_dump which psql returned /usr/bin/psql Didn't you say that you had installed PG 8.1.3 in a separate directory tree (something about postgresql-8.1.3)? Maybe your problem is not having changed your PATH to find those files before the default ones in /usr/bin. I have never installed any other version than 8.1.3. No, but 7.4.8 very possibly could have come with your OS. regards, tom lane ---(end of broadcast)--- TIP 2: Don't 'kill -9' the postmaster
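A sketch of the PATH fix Tom is suggesting, assuming the source build went into the default /usr/local/pgsql prefix:

```shell
# Put the freshly built 8.1.3 binaries ahead of the distro's 7.4.8 copies:
export PATH=/usr/local/pgsql/bin:$PATH
hash -r   # make bash forget any cached /usr/bin/pg_dump lookup
# This should now report the /usr/local/pgsql/bin copy:
echo "pg_dump resolves to: $(command -v pg_dump)"
```

Adding the export line to ~/.bash_profile makes it permanent for the postgres user.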
Re: [GENERAL] Hi,
On Thu, 2006-03-23 at 13:03, Brian Kitzberger wrote: Hi Tom, I decided to test your theory that I had an old version of Postgres on my system when I installed version 8.1.3. By the way, the Linux install was a fresh one to start with. So this morning I first did a search on OK, assuming you're on a package-based system (I seem to remember seeing Red Hat mentioned earlier), the way to check whether something is installed is with RPM. Try this: rpm -qa | grep -i postgres and see what you get. ---(end of broadcast)--- TIP 3: Have you checked our extensive FAQ? http://www.postgresql.org/docs/faq
Re: [GENERAL] COPY command documentation
On Thu, Mar 23, 2006 at 07:00:13PM +0100, Jim Nasby wrote: On Mar 23, 2006, at 4:08 PM, Oisin Glynn wrote: I just discovered that the comments from 8.0 had the answer I was looking for but these comments are not in the 8.1 docs. Should the comments be rolled forward as new versions are created? Or if valid comments added to the docs themselves? http://www.postgresql.org/docs/8.1/interactive/sql-copy.html http://www.postgresql.org/docs/8.0/interactive/sql-copy.html No, comments don't roll forward. ...and it's unlikely that they will, now or later, without somebody whose whole job is to monitor those comments and make patches. I'd like to make a Modest Proposal™: Let's take down the interactive documents and, in their place, put up a request that doc patches be sent to -docs. What say? Cheers, D -- David Fetter [EMAIL PROTECTED] http://fetter.org/ phone: +1 415 235 3778 AIM: dfetter666 Skype: davidfetter Remember to vote! ---(end of broadcast)--- TIP 5: don't forget to increase your free space map settings
Re: [DOCS] [GENERAL] COPY command documentation
On Thu, 2006-03-23 at 14:12, David Fetter wrote: On Thu, Mar 23, 2006 at 07:00:13PM +0100, Jim Nasby wrote: On Mar 23, 2006, at 4:08 PM, Oisin Glynn wrote: I just discovered that the comments from 8.0 had the answer I was looking for but these comments are not in the 8.1 docs. Should the comments be rolled forward as new versions are created? Or if valid comments added to the docs themselves? http://www.postgresql.org/docs/8.1/interactive/sql-copy.html http://www.postgresql.org/docs/8.0/interactive/sql-copy.html No, comments don't roll forward. ...and it's unlikely that they will, now or later, without somebody whose whole job is to monitor those comments and make patches. I'd like to make a Modest Proposal™: Let's take down the interactive documents and, in their place, put up a request that doc patches be sent to -docs. Heck, why not a form that does it for somebody, takes their email address, and possibly even enrolls them in the -docs newsgroup. It can't be that hard to code up. ---(end of broadcast)--- TIP 1: if posting/reading through Usenet, please send an appropriate subscribe-nomail command to [EMAIL PROTECTED] so that your message can get through to the mailing list cleanly
Re: [DOCS] [GENERAL] COPY command documentation
Scott Marlowe wrote: On Thu, 2006-03-23 at 14:12, David Fetter wrote: On Thu, Mar 23, 2006 at 07:00:13PM +0100, Jim Nasby wrote: On Mar 23, 2006, at 4:08 PM, Oisin Glynn wrote: I just discovered that the comments from 8.0 had the answer I was looking for but these comments are not in the 8.1 docs. Should the comments be rolled forward as new versions are created? Or if valid comments added to the docs themselves? http://www.postgresql.org/docs/8.1/interactive/sql-copy.html http://www.postgresql.org/docs/8.0/interactive/sql-copy.html No, comments don't roll forward. ...and it's unlikely that they will, now or later, without somebody whose whole job is to monitor those comments and make patches. I'd like to make a Modest Proposal™: Let's take down the interactive documents and, in their place, put up a request that doc patches be sent to -docs. Heck, why not a form that does it for somebody, takes their email address, and possibly even enrolls them in the -docs newsgroup. It can't be that hard to code up. ---(end of broadcast)--- TIP 1: if posting/reading through Usenet, please send an appropriate subscribe-nomail command to [EMAIL PROTECTED] so that your message can get through to the mailing list cleanly I am a Windows-only developer (for my sins) and, to be honest, we are using Postgres a lot and are impressed by it, but the *NIX-centric examples in the docs can sometimes be a challenge. If there is a clear difference between Windows and *NIX operation and it is not obvious what the Windows equivalent is, then there should be a Windows example, or a note within the example explaining the difference. The docs-with-comments view also seems far less useful if we lose all the accumulated wisdom every time there is a new release; personally, I have found the notes useful more than once. Maybe a "did you find this note useful" button, so that useful notes get priority for inclusion in the docs? What is the process for submitting a doc patch? I am sure I should be asking that on the docs list.
Oisin ---(end of broadcast)--- TIP 1: if posting/reading through Usenet, please send an appropriate subscribe-nomail command to [EMAIL PROTECTED] so that your message can get through to the mailing list cleanly
Re: [DOCS] [GENERAL] COPY command documentation
On Thu, Mar 23, 2006 at 03:31:42PM -0500, Oisin Glynn wrote: I am a Windows-only developer (for my sins) and to be honest we are using postgres a lot and are impressed by it but the *NIX centric examples in the docs can sometimes be a challenge, if there is a clear difference between Windows/*NIX operation and it is not obvious what the Windows equivalent is then there should be a Windows example or a note within the example explaining the difference. You're right, but the number of UNIX developers here far exceeds the number of Windows developers. Most UNIX developers (like me) would have absolutely no idea which examples would not translate obviously to Windows. For that you need someone experienced in Windows programming to indicate that. We could probably use a section in the docs indicating the major differences. Maybe it's there already, I can't see it straight off. Maybe a did you find this note useful button and useful notes get priority for doc inclusion? That would be nice, no idea how to implement it though. Maybe even sharing the comments across versions would be good, except that the page names have changed over time. What is the process for submitting a doc patch? I am sure I should be asking that on the doc list. Go to the source tree, edit the SGML files and send the diff to pgsql-patches. Have a nice day, -- Martijn van Oosterhout kleptog@svana.org http://svana.org/kleptog/ Patent. n. Genius is 5% inspiration and 95% perspiration. A patent is a tool for doing 5% of the work and then sitting around waiting for someone else to do the other 95% so you can sue them. signature.asc Description: Digital signature
Re: [GENERAL] Logging of sql statements?
On Thu, 23 Mar 2006 17:21:38 +0100 Andreas Kretschmer [EMAIL PROTECTED] wrote: Bjørn T Johansen [EMAIL PROTECTED] schrieb: Set in your postgresql.conf: log_statement = all Since 8.0 or 8.1, you can see the parameters in prepared statements there. HTH, Andreas This is what I am already using and it doesn't work... It works for me ;-) test=# prepare my_foo(int) as select * from foo where id = $1; PREPARE test=# execute my_foo(1); id | x ----+---- 1 | 10 (1 row) In my log: LOG: statement: prepare my_foo(int) as select * from foo where id = $1; LOG: statement: execute my_foo(1); [client PREPARE: prepare my_foo(int) as select * from foo where id = $1;] test=# select version(); version -- PostgreSQL 8.1.2 on i486-pc-linux-gnu, compiled by GCC cc (GCC) 4.0.3 20060104 (prerelease) (Debian 4.0.2-6) (1 row) HTH, Andreas If I try the same from a client, I get the same result... But when running from my webapp (using Hibernate), only question marks appear? BTJ ---(end of broadcast)--- TIP 1: if posting/reading through Usenet, please send an appropriate subscribe-nomail command to [EMAIL PROTECTED] so that your message can get through to the mailing list cleanly
Re: [GENERAL] version problem with pg_dump
Brian Kitzberger wrote: Hi Steve, pg_dump --version returned 7.4.8 pg_dumpall --version returned 7.4.8 psql --version returned 7.4.8 which pg_dump returned /usr/bin/pg_dump which pg_dumpall returned /usr/bin/pg_dump which psql returned /usr/bin/psql To find the files I used, from the root: find . -name pg_dump It would be unusual for the files to be in those locations if you did the usual ./configure ; make ; make install. How, exactly, did you install PostgreSQL? As Scott suggested, try running rpm -qa | grep -i postgres (assuming rpm is at the core of your package management) and when you find that it was already installed, use rpm to erase it. Here, things can get interesting. While you may not have intentionally installed PostgreSQL, your package manager may have installed it for you to solve a dependency (PHP with PG support, or some app that requires PG, for example), so you may find your package manager complaining about dependencies when you try to remove it. But worry about burning that bridge when you get to it. Cheers, Steve ---(end of broadcast)--- TIP 6: explain analyze is your friend
Re: [GENERAL] Logging of sql statements?
On Thu, 2006-03-23 at 14:50, Bjørn T Johansen wrote: On Thu, 23 Mar 2006 17:21:38 +0100 Andreas Kretschmer [EMAIL PROTECTED] wrote: Bjørn T Johansen [EMAIL PROTECTED] schrieb: Set in your postgresql.conf: log_statement = all Since 8.0 or 8.1, you can see the parameters in prepared statements there. HTH, Andreas This is what I am already using and it doesn't work... It works for me ;-) test=# prepare my_foo(int) as select * from foo where id = $1; PREPARE test=# execute my_foo(1); id | x ----+---- 1 | 10 (1 row) In my log: LOG: statement: prepare my_foo(int) as select * from foo where id = $1; LOG: statement: execute my_foo(1); [client PREPARE: prepare my_foo(int) as select * from foo where id = $1;] test=# select version(); version -- PostgreSQL 8.1.2 on i486-pc-linux-gnu, compiled by GCC cc (GCC) 4.0.3 20060104 (prerelease) (Debian 4.0.2-6) (1 row) HTH, Andreas If I try the same from a client, I get the same result... But when running from my webapp (using Hibernate), only question marks appear? Again, I'm pretty sure this is a known issue with the JDBC driver version. If you can force your JDBC driver to run under the previous protocol, you should get the old behaviour. ---(end of broadcast)--- TIP 5: don't forget to increase your free space map settings
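For anyone hitting this via the archives: the pgjdbc driver exposes a connection parameter for this (the spelling here is from the 8.x driver documentation; verify it against your driver build). Under the older V2 protocol, parameters are substituted client-side, so the full SQL text, values included, reaches the server log:

```
jdbc:postgresql://localhost:5432/mydb?protocolVersion=2
```

(host, port, and database name are placeholders.)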
Re: [GENERAL] Advantages of PostgreSQL over MySQL 5.0
[EMAIL PROTECTED] (Scott Marlowe) writes: http://bugs.mysql.com/bug.php?id=13301 And as Heikki Tuuri is no longer with them, I'll bet that doesn't get changed any time soon... -- let name=cbbrowne and tld=cbbrowne.com in String.concat @ [name;tld];; http://cbbrowne.com/info/finances.html Rules of the Evil Overlord #68. I will spare someone who saved my life sometime in the past. This is only reasonable as it encourages others to do so. However, the offer is good one time only. If they want me to spare them again, they'd better save my life again. http://www.eviloverlord.com/ ---(end of broadcast)--- TIP 6: explain analyze is your friend
Re: [GENERAL] index scan backward plan question
Neil Conway wrote: There is no such thing as PG 8.3. I meant 8.1.3. please provide the queries that trigger the problem and the relevant schema definitions. Sorry about not posting more details initially. I was running out the door and was hasty. Try re-running ANALYZE and retrying EXPLAIN ANALYZE. It was analyzed (and unmodified since analysis). I also rebuilt the table from scratch - same results. I'm beginning to think my performance problem is mostly related to tight memory. The same query that was insanely slow on one machine (1GB RAM) was reasonably fast on another with more memory (8GB RAM). Also, when I freed a bit more memory on the original machine, the performance wall went away. PostgreSQL seems to be making OK decisions about when to use the 'index scan backward' plan, but under tight memory conditions, the index scan would take forever while the alternative would be fine. My curiosity has sunk below my need to get real work done, so I don't require any further response. If anybody has too much time on his or her hands, you can read what I wrote (below) before I decided it wasn't worth pursuing this any further. Thanks, Kevin Murphy
\d merged
           Table "public.merged"
    Column    |  Type   | Modifiers
--------------+---------+-----------
 pmid         | integer |
 first_author | text    |
 authors      | text    |
 title        | text    |
 journal      | text    |
 date         | date    |
 display_date | text    |
 volume       | text    |
 issue        | text    |
 pages        | text    |
 symbol       | text    |
 weight       | real    |
Indexes:
    "merged_data_idx" btree (date, first_author)
    "merged_first_author_idx" btree (first_author)
    "merged_pmid_idx" btree (pmid)
    "merged_symbol_idx" btree (symbol)
    "merged_weight_date_idx" btree (weight, date)
The query in question searches on 'symbol', orders by the combination of 'weight' and 'date', and constrains by a limit and offset. The table has 7,839,274 rows; 16,247 distinct symbol values; 17,279 distinct weight values; 831,007 distinct weight-date combinations. Depending on the query search term and offset value, one of two different plans is chosen.
For search terms matching large numbers of rows, and with a low enough offset, an 'index scan backward' plan is chosen, which is sometimes extremely fast (faster than the other plan). However, for the search terms matching very large numbers of rows, the 'index scan backward' plan can be molasses-slow (as much as 2 to 3 orders of magnitude slower). During such slow queries, the CPU is not doing much: postgres is consuming <= 3% of CPU, and nothing else is happening on the box ('top' is using the most CPU). Disk I/O is consistent and low at around 1MB/sec while this is going on. Top doesn't show any swapping, although free physical memory is very low; on the other hand, there is lots of inactive memory (720MB). During these very long queries, I see a lock on merged_weight_date_idx (which is 168 MB in size, BTW) the whole time. There are no messages in the system or postgresql logs. Can it really be taking PG 490 seconds (see below) to read some fraction of 168MB into memory? * * * * * * * * postgresql.conf settings: shared_buffers = 15000 work_mem = 32768 maintenance_work_mem = 65536 default_statistics_target = 1000 I just noticed that default_statistics_target is set at one extreme. Since many of the columns have large numbers of values, highly skewed distributions, and the oddballs are often important, it was my understanding that this was not unreasonable (and early tests seemed to bear this out). I stand ready to be corrected. * * * * * * * * OS: OS X 10.4, 1 GB RAM, dual 1.8 GHz PPC * * * * * * * * The following plan is produced for almost all values of 'symbol' except the values with the most records. It is also produced for the high-frequency symbols if the OFFSET is large enough.
The following is a search for a less common symbol than the problematic cases ('CFTR' matches 8920 rows): EXPLAIN ANALYZE SELECT pmid, authors, title, journal, date, display_date, volume, issue, pages FROM merged WHERE symbol = 'CFTR' ORDER BY weight DESC, date DESC OFFSET 0 LIMIT 25;
                                  QUERY PLAN
---------------------------------------------------------------------------
 Limit (cost=29364.77..29364.83 rows=25 width=229) (actual time=1351.026..1351.040 rows=25 loops=1)
   ->  Sort (cost=29364.77..29385.54 rows=8310 width=229) (actual time=1351.023..1351.028 rows=25 loops=1)
         Sort Key: weight, date
         ->  Bitmap Heap Scan on merged (cost=55.08..28823.76 rows=8310 width=229) (actual time=60.076..1198.231 rows=8920 loops=1)
               Recheck Cond: (symbol = 'CFTR'::text)
               ->  Bitmap Index Scan on merged_symbol_idx
Re: [GENERAL] Logging of sql statements?
On Thu, 23 Mar 2006 15:01:17 -0600 Scott Marlowe [EMAIL PROTECTED] wrote: If I try the same from a client, I get the same result.. But when running from my webapp (using Hibernate), only question marks appear? Again, I'm pretty sure this is a known issue with the JDBC driver version. If you can force your JDBC driver to run under the previous protocol, you should get the old behaviour. So basically, I need a JDBC driver like version 7.4 Build 216 instead of my version 8.1 Build 405 driver? What about 8.2dev Build 501, does it fix it? And if so, how stable is that version? BTJ ---(end of broadcast)--- TIP 6: explain analyze is your friend
[GENERAL] prepared SELECT and placeholders with NULL values
Hi, I'm having an issue using NULL values to fill placeholders in a prepared SELECT statement. My table looks something like this: CREATE TABLE person (id serial primary key, lname text not null, fname text); Given queries like this (using the Perl DBI + DBD::Pg interface): $i = $db->prepare('INSERT INTO person (lname, fname) VALUES (?, ?)'); $s = $db->prepare('SELECT id FROM person WHERE lname = ? AND fname = ?'); These insert operations work fine: $i->execute('Bono', 'Sonny'); $i->execute('Cher', undef); This select works properly as well, returning the appropriate id value: $s->execute('Bono', 'Sonny'); But this does not, returning an empty list: $s->execute('Cher', undef); My environment: - PostgreSQL 8.1.3 - Perl 5.8.8 - DBI 1.50 - DBD::Pg 1.43 Any ideas? Thanks. - Mike ---(end of broadcast)--- TIP 2: Don't 'kill -9' the postmaster
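Not an answer from the thread itself, but the standard explanation for this symptom: in SQL, fname = NULL evaluates to NULL rather than true, so an equality placeholder can never match a row where the column is NULL. A sketch of the comparisons involved:

```sql
-- Never returns the 'Cher' row: '=' against NULL is not true.
SELECT id FROM person WHERE lname = 'Cher' AND fname = NULL;

-- Matches: test for NULL explicitly...
SELECT id FROM person WHERE lname = 'Cher' AND fname IS NULL;

-- ...or use PostgreSQL's null-safe comparison:
SELECT id FROM person WHERE lname = 'Cher' AND fname IS NOT DISTINCT FROM NULL;
```

In DBI terms that usually means building the WHERE clause differently when the Perl value is undef.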
Re: [GENERAL] version problem with pg_dump
Steve, Okay, not only am I new to PostgreSQL but I am new to Linux, with a little experience years ago with Unix. So I didn't know about rpm, nor does anyone else here. But anyway, the result of running rpm is: postgresql-libs-7.4.8-0.6 postgresql-server-7.4.8-0.6 postgresql-jdbc-7.3-189.1 postgresql-7.4.8-0.6 qt3-postgresql-3.3.1-35.11 I did an ls in the /usr/bin directory and sure enough there are the other files I could not find before. So I guess I will have to cross the bridge. As a test, I just moved (mv) the file /usr/bin/pg_dump. If rpm had a dependency on that file, would it cause some kind of error message when trying to move it? I was able to successfully move the file to the root. Brian Steve Crawford [EMAIL PROTECTED] 3/23/2006 12:50 PM Brian Kitzberger wrote: Hi Steve, pg_dump --version returned 7.4.8 pg_dumpall --version returned 7.4.8 psql --version returned 7.4.8 which pg_dump returned /usr/bin/pg_dump which pg_dumpall returned /usr/bin/pg_dump which psql returned /usr/bin/psql To find the files I used, from the root: find . -name pg_dump It would be unusual for the files to be in those locations if you did the usual ./configure ; make ; make install. How, exactly, did you install PostgreSQL? As Scott suggested, try running rpm -qa | grep -i postgres (assuming rpm is at the core of your package management) and when you find that it was already installed, use rpm to erase it. Here, things can get interesting. While you may not have intentionally installed PostgreSQL, your package manager may have installed it for you to solve a dependency (PHP with PG support, or some app that requires PG, for example), so you may find your package manager complaining about dependencies when you try to remove it. But worry about burning that bridge when you get to it. Cheers, Steve ---(end of broadcast)--- TIP 4: Have you searched our list archives? http://archives.postgresql.org
Re: [GENERAL] version problem with pg_dump
On March 23, 2006 01:32 pm, Brian Kitzberger [EMAIL PROTECTED] wrote: Steve, Okay, not only am I new to PostgreSQL but I am new to Linux with a little experience years ago with Unix. So I didn't know about rpm nor does any one else here. But anyway, the result of running rpm is: postgresql-libs-7.4.8-0.6 postgresql-server-7.4.8-0.6 postgresql-jdbc-7.3-189.1 postgresql-7.4.8-0.6 qt3-postgresql-3.3.1-35.11 I did an ls in the /usr/bin directory and sure enough there are the other files I could not find before. So I guess I will have to cross the bridge. As a test, I just mv the file /usr/bin/pg_dump. If rpm had a dependence on that file would it cause some kind of error message in trying to move it? I was able to successfully move the file to the root. RPM won't say anything unless you run rpm commands (ie. rpm -e package to remove it). The only one of those you're likely to have a dependency problem with is postgresql-libs. perl-DBD-Pg, and possibly a few other packages (like PHP, as a previous poster mentioned), will be linked to that. -- Alan ---(end of broadcast)--- TIP 6: explain analyze is your friend
Re: [GENERAL] version problem with pg_dump
Steve, You asked how I built my install of 8.1.3. With the tar files at the root, I used the gunzip and tar commands from the web site on the base, docs, opt, and test tar files as suggested by the PostgreSQL.org web site, which made the postgresql-8.1.3 directory. I then did the steps suggested to do the install with slight variation. ./configure (I had to use the option --without-readline because it gave an error without it) gmake su gmake install useradd postgres mkdir /usr/local/pgsql/data chown postgres /usr/local/pgsql/data su - postgres /usr/local/pgsql/bin/initdb -i -D /usr/local/pgsql/data (the -i option was suggested) /usr/local/pgsql/bin/postmaster -i -D /usr/local/pgsql/data > logfile 2>&1 & /usr/local/pgsql/bin/psql test It worked fine. I was able to create a database from a DDL I wrote and do inserts into the tables and selects with correct results. So I was testing pg_dump when I ran into problems. Brian Steve Crawford [EMAIL PROTECTED] 3/23/2006 12:50 PM Brian Kitzberger wrote: Hi Steve, pg_dump --version returned 7.4.8 pg_dumpall --version returned 7.4.8 psql --version returned 7.4.8 which pg_dump returned /usr/bin/pg_dump which pg_dumpall returned /usr/bin/pg_dump which psql returned /usr/bin/psql To find the file I used from the root find . -name pg_dump It would be unusual for the files to be in those locations if you did the usual ./configure ; make ; make install. How, exactly, did you install PostgreSQL? As Scott suggested, try running rpm -qa | grep -i postgres (assuming rpm is at the core of your package management) and when you find that it was already installed, use rpm to erase it. Here, things can get interesting. While you may not have intentionally installed PostgreSQL, your package manager may have installed it for you to solve a dependency (PHP with PG support or some app that requires PG, for example) and you may find your package manager complaining about dependencies when you try to remove it. 
But worry about burning that bridge when you get to it. Cheers, Steve ---(end of broadcast)--- TIP 6: explain analyze is your friend
Re: [GENERAL] Logging of sql statements?
On Thu, 2006-03-23 at 15:19, Bjørn T Johansen wrote: On Thu, 23 Mar 2006 15:01:17 -0600 Scott Marlowe [EMAIL PROTECTED] wrote: If I try the same from a client, I get the same result.. But when running from my webapp (using Hibernate), only question marks appear? Again, I'm pretty sure this is a known issue with the JDBC driver version. If you can force your JDBC driver to run under the previous protocol, you should get the old behaviour. So basically, I need a JDBC driver like version 7.4 Build 216 instead of my version 8.1 Build 405 driver? What about 8.2dev Build 501, does it fix it? And if so, how stable is that version? The problem is not in the driver itself; it's associated with the change in the front end / back end protocol. I.e. with 8.x the protocol was upped to V3. V2 fe/be protocol communications still get logged with the parameters, but V3 doesn't. There might be a setting in the jdbc driver to change this, but I'm not really familiar with the JDBC driver; I just use it. ---(end of broadcast)--- TIP 4: Have you searched our list archives? http://archives.postgresql.org
Re: [GENERAL] version problem with pg_dump
On Thu, 2006-03-23 at 15:32, Brian Kitzberger wrote: Steve, Okay, not only am I new to PostgreSQL but I am new to Linux, with a little experience years ago with Unix. So I didn't know about rpm, nor does anyone else here. But anyway, the result of running rpm is: Hey, we all started somewhere. Welcome to the club, eh? postgresql-libs-7.4.8-0.6 postgresql-server-7.4.8-0.6 postgresql-jdbc-7.3-189.1 postgresql-7.4.8-0.6 qt3-postgresql-3.3.1-35.11 I did an ls in the /usr/bin directory and sure enough there are the other files I could not find before. So I guess I will have to cross the bridge. As a test, I just mv'd the file /usr/bin/pg_dump. If rpm had a dependency on that file, would it cause some kind of error message in trying to move it? I was able to successfully move the file to the root. Nah, RPM won't stop you doing things like that. It will, however, let you know files are missing if you know the commands to throw at it. Take a look here: http://www.rpm.org/ Also, if you're gonna be using linux and postgresql, I'd recommend downloading and installing some fairly recent versions of each. For linux distros, there are hundreds of choices. Fedora Core 5 just came out, but 4 is much more stabilized now. Debian, Suse, Ubuntu are all good distros. You can get RedHat Enterprise clones called White Box Linux or CentOS, which are basically exactly the same with different names inside them. Then you can just install postgresql with the yum package manager with a command like: yum install postgres* and that's it. ---(end of broadcast)--- TIP 2: Don't 'kill -9' the postmaster
Re: [GENERAL] version problem with pg_dump
Brian Kitzberger wrote: Steve, You asked how I built my install of 8.1.3. With the tar files at the root, I used the gunzip and tar commands from the web site on the base, docs, opt, and test tar files as suggested by the PostgreSQL.org web site, which made the postgresql-8.1.3 directory. I then did the steps suggested to do the install with slight variation. ./configure (I had to use the option --without-readline because it gave an error without it) If you install the readline development files (ie. rpm -i readline-devel-version.rpm or use YAST or whatever is appropriate for your distro) then you won't get this error. It basically only affects command editing and history in psql. gmake su gmake install useradd postgres mkdir /usr/local/pgsql/data chown postgres /usr/local/pgsql/data su - postgres /usr/local/pgsql/bin/initdb -i -D /usr/local/pgsql/data (the -i option was suggested) /usr/local/pgsql/bin/postmaster -i -D /usr/local/pgsql/data > logfile 2>&1 & /usr/local/pgsql/bin/psql test It worked fine. I was able to create a database from a DDL I wrote and do inserts into the tables and selects with correct results. So I was testing pg_dump when I ran into problems. And had you run /usr/local/pgsql/bin/pg_dump it would have worked fine as well. But /usr/local/pgsql/bin is probably not in your $PATH at all, let alone existing ahead of /usr/bin, so just running pg_dump loaded the incorrect version. My quick-n-dirty fix is to make symbolic links in /usr/bin for all pg programs: cd /usr/local/pgsql/bin for x in * ; do ln -s /usr/local/pgsql/bin/$x /usr/bin/$x ; done But be sure to remove the out-of-date version first. Cheers, Steve ---(end of broadcast)--- TIP 4: Have you searched our list archives? http://archives.postgresql.org
Re: [GENERAL] version problem with pg_dump
Steve Crawford [EMAIL PROTECTED] writes: Brian Kitzberger wrote: It worked fine. I was able to create a database from a DDL I wrote and do insert into the tables and selects with correct results. So I was testing the pg_dump with I ran into problems. And had you run /usr/local/pgsql/bin/pg_dump it would have worked fine as well. But /usr/local/pgsql/bin is probably not in your $PATH at all let alone existing ahead of /usr/bin so just running pg_dump loaded the incorrect version. For that matter, he was presumably running the 7.4 version of psql and so on, which means that those things also only worked for rather small values of work --- a lot of psql 7.4's backslash commands would likely fail against an 8.1 server for instance. My quick-n-dirty fix is to make symbolic links in /usr/bin for all pg programs: The *right* solution if you're using an RPM-based Linux distro is to grab an RPM distribution of Postgres; trying to make end runs around RPM is a great way to turn your system into a hopeless mess. regards, tom lane ---(end of broadcast)--- TIP 5: don't forget to increase your free space map settings
Re: [DOCS] [GENERAL] COPY command documentation
On Thursday 23 March 2006 15:12, David Fetter wrote: On Thu, Mar 23, 2006 at 07:00:13PM +0100, Jim Nasby wrote: On Mar 23, 2006, at 4:08 PM, Oisin Glynn wrote: I just discovered that the comments from 8.0 had the answer I was looking for, but these comments are not in the 8.1 docs. Should the comments be rolled forward as new versions are created? Or should valid comments be added to the docs themselves? http://www.postgresql.org/docs/8.1/interactive/sql-copy.html http://www.postgresql.org/docs/8.0/interactive/sql-copy.html No, comments don't roll forward. The general consensus is that comments should not automatically roll forward, since many comments are version specific. People are of course free to repost comments if they find them appropriate. ...and it's unlikely that they will, now or later, without somebody whose whole job is to monitor those comments and make patches. Well, we do make some attempt at rolling comments into the docs where appropriate, but we could certainly use more dedicated contributors in that area. I'd like to make a Modest Proposal™: Let's take down the interactive documents and, in their place, put up a request that doc patches be sent to -docs. What say? I'd say you're anti-interactive comments :-) More importantly, people just aren't going to write patches for doc additions... the overhead is several orders of magnitude greater than filling out a web form... so getting rid of the comments is sure to lose any gains that we receive. What I have tried to garner support for in the past was to either direct those submissions to this group for approval/rejection, which would make the folks generally interested in docs directly involved in the process. The other option would be to mail approved doc comments to this group so that someone could work them up into doc patches if applicable. That really is a factor; most of the comments would need to be reworded to be added into the docs proper. 
In the past these ideas were rejected as either off-topic or likely to turn this list into a high-traffic list... if people's opinions have changed, it could be arranged. -- Robert Treat Build A Brighter Lamp :: Linux Apache {middleware} PostgreSQL ---(end of broadcast)--- TIP 5: don't forget to increase your free space map settings
Re: [GENERAL] version problem with pg_dump
My quick-n-dirty fix is to make symbolic links in /usr/bin for all pg programs: But, as I noted, only after you are sure you have removed all vestiges of the old version. The symbolic links are just a convenience. The *right* solution if you're using an RPM-based Linux distro is to grab an RPM distribution of Postgres; trying to make end runs around RPM is a great way to turn your system into a hopeless mess. You sure can turn a system into a hopeless mess but I don't agree that I would only use RPM to install PG - that depends on the situation. In my case the distros may use RPM as the package manager and RPM is fine for the base configuration but I am starting with the bare minimum default installation, hardening/stripping that down some more and then compiling PG from source. The server has one purpose - running PostgreSQL as a stand-alone server for clients on the network. Because of this there are no PG dependent packages installed to start with. PG is critical to our business and I find that compiling from source gives me the ability to deploy updates more quickly if necessary and to customize the options I use to build PG where necessary. Cheers, Steve ---(end of broadcast)--- TIP 5: don't forget to increase your free space map settings
[GENERAL] Build only clients
We have a dedicated PostgreSQL server but a variety of client machines ranging from soon to be retired SCO 5.0.x to SuSE 10. What is the recommended method to build/install/deploy only the client-side libraries and utilities (psql, pg_dump, etc.) and not the server? We have a development box for each supported client OS which would also need headers but none of them need the server. If I overlooked something in the docs just point me the right direction. Thanks, Steve ---(end of broadcast)--- TIP 1: if posting/reading through Usenet, please send an appropriate subscribe-nomail command to [EMAIL PROTECTED] so that your message can get through to the mailing list cleanly
[GENERAL] Partitioning - when is it too many tables?
I'm working on a database that will (at current load) add 100 million records per day to a database, and needs to keep around 6 months of data online. Of course, we don't want the loads to be running all day while queries are going on. Using COPY with indexes active runs great with an empty database, then drops precipitously as the index size grows. It looks like I should be able to hit the numbers if I insert the data using COPY with no indexes, then add the indexes. I'm looking at partitioning with one table per day. So, we'd be looking at about 180 tables with 100 million rows each. Searches would typically be within a single day, although they could span multiple days. This keeps the indexes a more or less reasonable size, and allows quick deleting of the old data. Is there any problem with 180 child tables? How many would be too many (e.g. if I did one table per 6 hours?) Thanks Wes ---(end of broadcast)--- TIP 5: don't forget to increase your free space map settings
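[Editor's note: for reference, 8.1-era partitioning is built from inheritance plus CHECK constraints, and 8.1 added the constraint_exclusion setting the poster would rely on. A minimal one-child-per-day sketch; the table and column names are made up for illustration:]

```sql
-- Parent table; queries go against this, data lives in the children.
CREATE TABLE msg_log (ts timestamptz NOT NULL, payload text);

-- One child per day, with a CHECK constraint covering its range.
CREATE TABLE msg_log_20060323 (
    CHECK (ts >= '2006-03-23' AND ts < '2006-03-24')
) INHERITS (msg_log);

-- Bulk-load the child directly (COPY bypasses the parent entirely),
-- then build its index afterwards:
-- COPY msg_log_20060323 FROM '/path/to/batch.dat';
CREATE INDEX msg_log_20060323_ts ON msg_log_20060323 (ts);

-- With constraint_exclusion on (new in 8.1), a single-day query
-- skips the other ~179 children at plan time:
SET constraint_exclusion = on;
SELECT count(*) FROM msg_log
WHERE ts >= '2006-03-23' AND ts < '2006-03-24';

-- Retiring a day is a near-instant DROP rather than a bulk DELETE:
DROP TABLE msg_log_20060323;
```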
Re: [DOCS] [GENERAL] COPY command documentation
On Thu, Mar 23, 2006 at 04:46:02PM -0500, Robert Treat wrote: ...and it's unlikely that they will, now or later, without somebody whose whole job is to monitor those comments and make patches. Well, we do make some attempt at rolling comments into the docs where appropriate, but we could certainly use more dedicated contributors in that area. If we're going to get dedicated contributors, we could direct their efforts to things a *lot* more productive than this. Improving the formal docs, for example. I'd like to make a Modest Proposal™: Let's take down the interactive documents and, in their place, put up a request that doc patches be sent to -docs. What say? I'd say you're anti-interactive comments :-) I'm not against them. I'm just *for* improving the existing docs, and those comments don't (and won't, very likely) have any pipeline into those. Are you personally volunteering for this task, Robert? More importantly, people just aren't going to write patches for doc additions... the overhead is several orders of magnitude greater than filling out a web form... so getting rid of the comments is sure to lose any gains that we receive. What gains? As I said, I'm not against it, but right now, those things just go down the memory hole to the benefit of nobody. The detriment, I'd say, because somebody has wasted their time. What I have tried to garner support for in the past was to either direct those submissions to this group for approval/rejection, which would make the folks generally interested in docs directly involved in the process. Somebody has to vet this. Please feel free to step up :) The other option would be to mail approved doc comments to this group so that someone could work them up into doc patches if applicable. That really is a factor; most of the comments would need to be reworded to be added into the docs proper. In the past these ideas were rejected as either off-topic or likely to turn this list into a high-traffic list... 
if people's opinions have changed, it could be arranged. I'm voicing a rejection for 'em again on the same grounds. Until we have a person whose paid, full-time job is web-comment rassling, this is a non-starter. Cheers, D -- David Fetter [EMAIL PROTECTED] http://fetter.org/ phone: +1 415 235 3778 AIM: dfetter666 Skype: davidfetter Remember to vote! ---(end of broadcast)--- TIP 9: In versions below 8.0, the planner will ignore your desire to choose an index scan if your joining column's datatypes do not match
Re: [GENERAL] Some pgbench results
Hi, Did you re-initialize the test pgbench database between runs? I get weird results otherwise since some integers get overflowed in the test (it doesn't complete the full 1 transactions after the first run). No, I didn't. The reason is that I noticed that the first run is always MUCH faster. If I re-init pgbench and run again, the runs always hover around 900-970 tps for xfs. And I didn't need this to be a real performance test; it was a side effect of a load test I was doing on the server. Also, pgbench isn't close to the load I'll see on my server (web application which will be mostly read). Could you please tell me what stripe size you have on the raid system? Could you also share the mkfs and mount options on each filesystem you tried? RAID stripe size of 256K. File system creation: xfs: mkfs -t xfs -l size=64m /dev/md0 jfs: mkfs -t jfs /dev/md0 Mount for xfs with -o noatime,nodiratime,logbufs=8 jfs: -o noatime,nodiratime A hint on using a raided ext3 system is to use the whole block device instead of partitions to align the data better, and use data=journal with a big journal. This might seem counter-productive at first (it did to me) but I increased my throughput a lot when using this. Thanks for the advice! Actually, the RAID 10 I have is mounted as /var/lib/pgsql, so it's ONLY for postgres data, and the pg_xlog directory is mounted on another disk. My filesystem parameters are calculated like this: stripe=256 # - 256k raid stripe size bsize=4 # 4k blocksize bsizeb=$(( $bsize * 1024 )) # in bytes stride=$(( $stripe / $bsize )) mke2fs -b $bsizeb -j -J size=400 -m 1 -O sparse_super \ -T largefile4 -E stride=$stride /dev/sdb Mounted with: mount -t ext3 -o data=journal,noatime /dev/sdb /mnt/test8 That's an interesting thing to try, though because of other things I want, I prefer xfs or jfs anyway. I will have an extreme number of schemas and files, which makes high demands on the directory structure. 
My tests showed me that ext3 doesn't cope with many files in directories very well. With xfs and jfs I can create 500K files in one directory in no time (about 250 seconds); with ext3 it starts to crawl after about 30K files. I'm a little surprised that I can get more pgbench performance out of my system, since you're using 10K scsi disks. Please try the above settings and see if they help you... I've not run so many tests yet, I'll do some more after the weekend... Please share the results. It's very interesting... Bye, Guy. BTW, one thing I also tested is a software RAID0 over two RAID5 SATA arrays. Total disk count in this is 15. The read performance was really good. The write performance (as expected) not so great. But that was just a test to get a feeling of the speed. This RAID5 system is only used for file storage, not database. -- Family management on rails: http://www.famundo.com - coming soon! My development related blog: http://devblog.famundo.com ---(end of broadcast)--- TIP 9: In versions below 8.0, the planner will ignore your desire to choose an index scan if your joining column's datatypes do not match
Re: [GENERAL] Partitioning - when is it too many tables?
On 3/23/06, Wes [EMAIL PROTECTED] wrote: This keeps the indexes a more or less reasonable size, and allows quick deleting of the old data. Is there any problem with 180 child tables? How many would be too many (e.g. if I did one table per 6 hours?) I am not a guru. Many, many people on the list have more experience with Table Partitioning (CE), however I will share my experience. I did some work with CE for our syslog data collection with some nice results. * I found that making functions to do the setup work for CE is a must. * I also found that using 1 rule per table to do query rewriting can become a bottleneck. This was fine for a small case (20-30 rules), but it didn't scale nicely when we had 300 rules. I made 1 table per day and got a nice boost in query speed. So in short, I think this will work nicely for you. COPY does not invoke rules, so you should be fine. ---(end of broadcast)--- TIP 5: don't forget to increase your free space map settings
[GENERAL] PostgreSQL 8.1 v. Oracle 10g xe
Recently I have been attempting to install Compiere. After spending a lot of time attempting to get it to work, I have given up installing it on PostgreSQL. I did, however, get a little experience installing JDBC and PL/Java in the mean time. I discovered in the process of attempting to install Oracle 10g express edition that it is an extraordinary memory hog. One might have thought that an express edition would have lower system requirements than the full version. However, this is not the case. The installation itself is about 2GB (meaning you need at least 2GB of free space and 352 MB of RAM). The installation failed because it said I needed at least 700MB swap space to install on Linux based on these resources. Am I the only one who finds this disturbing? Why should an RDBMS require so much swap space? After this experience, I have a much more profound appreciation for the quality RDBMS that is PostgreSQL. PostgreSQL will install fine on any of my computers if I need it to. In short, thanks guys for such an awesome RDBMS. PostgreSQL will always remain my favorite. Best Wishes, Chris Travers Metatron Technology Consulting ---(end of broadcast)--- TIP 4: Have you searched our list archives? http://archives.postgresql.org
Re: [DOCS] [GENERAL] COPY command documentation
On Thursday 23 March 2006 17:46, David Fetter wrote: On Thu, Mar 23, 2006 at 04:46:02PM -0500, Robert Treat wrote: ...and it's unlikely that they will, now or later, without somebody whose whole job is to monitor those comments and make patches. Well, we do make some attempt at rolling comments into the docs where appropriate, but we could certainly use more dedicated contributors in that area. If we're going to get dedicated contributors, we could direct their efforts to things a *lot* more productive than this. Improving the formal docs, for example. Uh, that's what we're talking about, David: having someone who would be willing to take doc comments and roll them into the formal docs. I'd like to make a Modest Proposal™: Let's take down the interactive documents and, in their place, put up a request that doc patches be sent to -docs. What say? I'd say you're anti-interactive comments :-) I'm not against them. I'm just *for* improving the existing docs, and those comments don't (and won't, very likely) have any pipeline into those. Are you personally volunteering for this task, Robert? Well David, I have actually already submitted multiple patches to the docs directly based on documentation comments; check the archives. Also I know Tom has gone through a number of times in the past in an attempt to cull improvements. The thing is, we're busy guys, so we can't exactly do it alone. If we could get some more volunteers, the process would be better. Even if we can't, I still think it is worthwhile, but I'll cover that more in a bit. More importantly, people just aren't going to write patches for doc additions... the overhead is several orders of magnitude greater than filling out a web form... so getting rid of the comments is sure to lose any gains that we receive. What gains? As I said, I'm not against it, but right now, those things just go down the memory hole to the benefit of nobody. The detriment, I'd say, because somebody has wasted their time. Not true. 
First, there have been doc improvements based on those comments. Furthermore, people do find the doc comments helpful; they find information that clears things up when reading online or searching Google. In fact more people could be helped if things like the docbot pointed to the interactive docs, though for some reason the guys running that thing refuse to do so. What I have tried to garner support for in the past was to either direct those submissions to this group for approval/rejection, which would make the folks generally interested in docs directly involved in the process. Somebody has to vet this. Please feel free to step up :) Um, maybe I wasn't clear when I said I have tried to garner support, but I have tried to garner support, and it got shot down. Can it be your turn now? The other option would be to mail approved doc comments to this group so that someone could work them up into doc patches if applicable. That really is a factor; most of the comments would need to be reworded to be added into the docs proper. In the past these ideas were rejected as either off-topic or likely to turn this list into a high-traffic list... if people's opinions have changed, it could be arranged. I'm voicing a rejection for 'em again on the same grounds. Until we have a person whose paid, full-time job is web-comment rassling, this is a non-starter. Well there you go. You complain that the interactive docs aren't merged upstream enough, but protest any effort to get subscribers from the _docs mailing list_ involved. -- Robert Treat Build A Brighter Lamp :: Linux Apache {middleware} PostgreSQL ---(end of broadcast)--- TIP 4: Have you searched our list archives? http://archives.postgresql.org
Re: [GENERAL] Build only clients
Steve Crawford wrote: We have a dedicated PostgreSQL server but a variety of client machines ranging from soon to be retired SCO 5.0.x to SuSE 10. What is the recommended method to build/install/deploy only the client-side libraries and utilities (psql, pg_dump, etc.) and not the server? We have a development box for each supported client OS which would also need headers but none of them need the server. If I overlooked something in the docs just point me the right direction. Thanks, Steve The INSTALL file in the tar file suggests: Client-only installation: If you want to install only the client applications and interface libraries, then you can use these commands: gmake -C src/bin install gmake -C src/include install gmake -C src/interfaces install gmake -C doc install -- Postgresql php tutorials http://www.designmagick.com/ ---(end of broadcast)--- TIP 1: if posting/reading through Usenet, please send an appropriate subscribe-nomail command to [EMAIL PROTECTED] so that your message can get through to the mailing list cleanly
Re: [GENERAL] Some pgbench results
I played a bit with kernel versions as I was getting a kernel panic on my Adaptec card. I downgraded to 2.6.11 (the original that came with Fedora Core 4) and the panic went away, but more than that, the performance on XFS went considerably higher. With the exact same settings as before, I now got an average of 813.65 tps with a standard deviation of 130.33. I hope this kernel doesn't panic on me, but I'll know by tomorrow as I'm pounding on the machine now. Bye, Guy. On 3/23/06, Magnus Naeslund(f) [EMAIL PROTECTED] wrote: Just Someone wrote: Initialized the data with: pgbench -i -s 100 Test runs: pgbench -s 100 -t 1 -c 20 I did 20 runs, removed the first 3 runs from each sample to account for stabilization. Did you re-initialize the test pgbench database between runs? I get weird results otherwise since some integers get overflowed in the test (it doesn't complete the full 1 transactions after the first run). Here are the results in tps without connection establishing: FS: JFS XFS EXT3 Avg: 462 425 319 Stdev: 104 74 106 Could you please tell me what stripe size you have on the raid system? Could you also share the mkfs and mount options on each filesystem you tried? I ran some tests on a somewhat similar system: a Supermicro H8SSL-i-B motherboard with one dual core Opteron 165 with 4gb of memory, Debian sarge amd64 (current stable) but with a pristine kernel.org 2.6.16 kernel (there's no Debian patches or packages yet). It has a 3ware 9550 + BBU sata raid card with 6 disks in a raid 10 configuration with 256kb stripe size. I think this results in about 200mb/s raw read performance and about 155mb/s raw write performance (as in, tested by dd:ing a 10gb file back and forth). I had no separate WAL device/partition, only tweaked postgresql.conf. I get about 520-530 tps with your pgbench parameters on ext3 but very poor (order of magnitude) performance on xfs (that's why I ask of your mkfs parameters). 
A hint on using a raided ext3 system is to use whole block device instead of partitions to align the data better and use data=journal with a big journal. This might seem counter-productive at first (it did to me) but I increased my throughput a lot when using this. My filesystem parameters are calculated like this: stripe=256 # - 256k raid stripe size bsize=4 # 4k blocksize bsizeb=$(( $bsize * 1024 )) # in bytes stride=$(( $stripe / $bsize )) mke2fs -b $bsizeb -j -J size=400 -m 1 -O sparse_super \ -T largefile4 -E stride=$stride /dev/sdb Mounted with: mount -t ext3 -o data=journal,noatime /dev/sdb /mnt/test8 I'm a little surprised that I can get more pgbench performance out of my system since you're using 10K scsi disks. Please try the above settings and see if it helps you... I've not run so many tests yet, I'll do some more after the weekend... Regards, Magnus -- Family management on rails: http://www.famundo.com - coming soon! My development related blog: http://devblog.famundo.com ---(end of broadcast)--- TIP 4: Have you searched our list archives? http://archives.postgresql.org
[GENERAL] Parallel sequential scans
I'm doing some reporting-type work with PG, with the vast majority of queries hitting upwards of 25% of the table, so being executed as seq scans. It's a fairly large set of data, so each pass is taking quite a while, IO limited. And I'm looking at doing dozens of passes. It would be really nice to be able to do all the work with a single pass over the table, executing all the queries in parallel in that pass. They're pretty simple queries, mostly, just some aggregates and a simple where clause. There are some fairly obvious ways to merge multiple queries to do that at a SQL level - converting each query into a function and passing each row from a select * to each of the functions would be one of the less ugly. Or I could fire off all the queries simultaneously and hope they stay in close-enough lockstep through a single pass through the table to be able to share most of the IO. Is there a commonly used trick to doing this that I should know about? Cheers, Steve ---(end of broadcast)--- TIP 3: Have you checked our extensive FAQ? http://www.postgresql.org/docs/faq
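[Editor's note: the merge-at-the-SQL-level idea can be sketched with conditional aggregates, which fold several of the per-query answers into a single sequential scan. The table and column names below are hypothetical; each original query's WHERE clause moves inside a CASE expression:]

```sql
-- Three separate aggregate queries collapsed into one pass.
-- 8.1 has no FILTER clause, so CASE expressions do the filtering;
-- count() and sum() simply ignore the NULLs CASE produces:
SELECT
    count(CASE WHEN status = 'error' THEN 1 END) AS error_rows,
    sum(CASE WHEN status = 'ok' THEN bytes END)  AS ok_bytes,
    avg(latency_ms)                              AS avg_latency
FROM events
WHERE ts >= '2006-03-01';
```

Only predicates common to all the original queries can stay in the shared WHERE clause; anything query-specific has to move into its CASE.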
Re: [GENERAL] PostgreSQL 8.1 v. Oracle 10g xe
Chris Travers wrote: Recently I have been attempting to install Compiere. After spending a lot of time attempting to get it to work, I have given up installing it on PostgreSQL. I did, however, get a little experience installing JDBC and PL/Java in the mean time. I discovered in the process of attempting to install Oracle 10g express edition that it is an extraordinary memory hog. One might have thought that an express edition would have lower system requirements than the full version. However, this is not the case. The installation itself is about 2GB (meaning you need at least 2GB of free space and 352 MB of RAM). The installation failed because it said I needed at least 700MB swap space to install on Linux based on these resources. Am I the only one who finds this disturbing? Why should an RDBMS require so much swap space? After this experience, I have a much more profound appreciation for the quality RDBMS that is PostgreSQL. PostgreSQL will install fine on any of my computers if I need it to. In short, thanks guys for such an awesome RDBMS. PostgreSQL will always remain my favorite. Best Wishes, Chris Travers Metatron Technology Consulting ---(end of broadcast)--- TIP 4: Have you searched our list archives? http://archives.postgresql.org I come from an Oracle 8.x/9.x on Windows background and, to be honest, we are moving towards Postgres for as many applications/projects as we can. We have to keep Oracle, but now the XE version, for the tougher-to-migrate parts, and I have been shocked at how Oracle XE slows down a medium-spec pc/server. Oracle 8.x/9.x (Standard Edition) sure was slimmer!! And both were a factor fatter than 7.x. Seems like maybe all the bloat was included and the features were just disabled. Roll on the happy day we dispense with Oracle completely. Many times a paid-up Oracle support contract has been less help than searching the Postgres mailing list archives. 
And the mailing lists are probably just as responsive to an inquiry, but with a more competent person answering the Postgres question. Oisin
Re: [GENERAL] Advantages of PostgreSQL over MySQL 5.0
Merlin Moncure [EMAIL PROTECTED] wrote: In postgresql, queries executed over the parameterized/prepared C api are particularly fast...as much as a 70% reduction in execution time compared with vanilla PQexec. Does that mean 70% of the time is spent on planning? I am a little bit interested in this number. Can you specify what kind of queries, or give some test numbers? By the way, if so, we can do the same with the PREPARE statement in SQL. Regards, Qingqing
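For reference, the SQL-level equivalent Qingqing mentions looks like this: the statement is planned once at PREPARE time and only executed afterwards. A sketch with hypothetical table and parameter names:

```sql
-- Plan once, execute many times; 'orders' and its columns are made up.
PREPARE big_orders (int, numeric) AS
    SELECT * FROM orders WHERE customer_id = $1 AND total > $2;

EXECUTE big_orders(42, 100.00);
EXECUTE big_orders(7, 250.00);

DEALLOCATE big_orders;   -- prepared statements also vanish at session end
```

Note that a prepared statement is per-session, so connection-pooled applications have to prepare on each new connection.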
Re: [GENERAL] Parallel sequential scans
Steve Atkins [EMAIL PROTECTED] writes: I'm doing some reporting-type work with PG, with the vast majority of queries hitting upwards of 25% of the table, so being executed as seq scans. ... It would be really nice to be able to do all the work with a single pass over the table, executing all the queries in parallel in that pass. They're pretty simple queries, mostly, just some aggregates and a simple where clause. There are some fairly obvious ways to merge multiple queries to do that at a SQL level - converting each query into a function and passing each row from a select * to each of the functions would be one of the less ugly. Or I could fire off all the queries simultaneously and hope they stay in close-enough lockstep through a single pass through the table to be able to share most of the IO. I have not tried this sort of thing, but right offhand I like the second alternative. The hope is more well-founded than you seem to think: whichever process is currently ahead will be slowed by requesting I/O, while processes that are behind will find the pages they need already in shared buffers. You should definitely see just one read of each table page as the parallel scans advance, assuming you don't have an unreasonably small number of buffers. Another reason, if you have more than one CPU in your machine, is that multiple processes can make use of multiple CPUs, whereas the one-fancy-query approach doesn't parallelize (at least not without Bizgres or some such). And lastly, you can just try it without sweating hard to convert the queries ;-). So try it and let us know how it goes. regards, tom lane
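The "fire them all off simultaneously" approach Tom recommends can be as simple as backgrounding a psql per query and waiting. A hypothetical sketch (database name and query files are made up, and results will depend on shared_buffers being reasonably sized):

```sh
# Launch all report queries concurrently so their seq scans stay in
# rough lockstep and share buffered pages. 'reports' and the q*.sql
# files are hypothetical.
for q in q1.sql q2.sql q3.sql q4.sql; do
    psql -d reports -f "$q" > "${q%.sql}.out" &
done
wait   # each table page should be read from disk roughly once
```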
Re: [GENERAL] Some pgbench results
Just Someone wrote: [snip] mke2fs -b $bsizeb -j -J size=400 -m 1 -O sparse_super \ -T largefile4 -E stride=$stride /dev/sdb Mounted with: mount -t ext3 -o data=journal,noatime /dev/sdb /mnt/test8 That's an interesting thing to try, though because of other things I want, I prefer xfs or jfs anyway. I will have an extreme number of schemas and files, which makes high demands on the directory structure. My tests showed me that ext3 doesn't cope with many files in a directory very well. With xfs and jfs I can create 500K files in one directory in no time (about 250 seconds); with ext3 it starts to crawl after about 30K files. It might seem that I'm selling ext3 or something :) but it's the linux filesystem I know best. If you want ext3 to perform with large directories, there is an mkfs option that enables directory hashing that you can try: -O dir_index. Regards, Magnus
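Concretely, the dir_index feature can be set at creation time or retrofitted onto an existing filesystem. A hedged sketch (the device name is hypothetical, and these are destructive/offline admin commands, not something to paste blindly):

```sh
# Hashed (HTree) directories on ext3; /dev/sdb1 is a placeholder device.
mkfs.ext3 -O dir_index /dev/sdb1     # enable at filesystem creation

# Or on an existing, unmounted filesystem:
tune2fs -O dir_index /dev/sdb1       # only affects newly created directories
e2fsck -fD /dev/sdb1                 # -D rebuilds/optimizes existing directories
```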
[GENERAL] where to find pg_config....
running fedora 2... just installed postgresql 7.4.12 from postgresql website because the version that came with fedora core 2 did not have the pg_config in postgres-devel. now this one does not either. what gives? where do i find pg_config. i have applications and perl modules that will not install because they can not find it. i have searched the hard drive for 2 days and can't find it. HELP! PLEASE! Drives me nutty when things are not where it is said they are supposed to be. searched the web (google) and everyone says the same thing, install postgres-devel. it is not working. -- Clyde Swann [EMAIL PROTECTED] BlackSwann Information Management Systems
Re: [GENERAL] where to find pg_config....
Clyde Swann wrote: supposed to be. searched the web (google) and everyone says the same thing, install postgres-devel. it is not working. I'm sure they said install postgresql-devel. -- Peter Eisentraut http://developer.postgresql.org/~petere/
Re: [GENERAL] where to find pg_config....
Hi, On Fri, 2006-03-24 at 01:16 -0600, Clyde Swann wrote: running fedora 2... just installed postgresql 7.4.12 from postgresql website because the version that came with fedora core 2 did not have the pg_config in postgres-devel. now this one does not either. what gives? Fedora Core 2 RPMs (of Fedora) provide pg_config: [EMAIL PROTECTED] ~]$ rpm -qlp postgresql-devel-7.4.2-1.i386.rpm |grep pg_config /usr/bin/pg_config /usr/include/pg_config.h /usr/include/pg_config_manual.h /usr/include/pg_config_os.h /usr/include/pgsql/server/pg_config.h /usr/include/pgsql/server/pg_config_manual.h /usr/include/pgsql/server/pg_config_os.h /usr/share/man/man1/pg_config.1.gz However, I suggest you upgrade to the PGDG RPMs, since even their legacy updates have 7.4.8, and the current version is 7.4.12: http://www.postgresql.org/ftp/binary/v7.4.12/rpms/fedora/fedora-core-2/ (the postgresql-devel package includes pg_config) Also, you may want to upgrade to the latest version: http://www.postgresql.org/ftp/binary/v8.1.3/linux/rpms/fedora/fedora-core-2/ Regards, -- The PostgreSQL Company - Command Prompt, Inc. 1.503.667.4564 PostgreSQL Replication, Consulting, Custom Development, 24x7 support Managed Services, Shared and Dedicated Hosting Co-Authors: PL/php, plPerlNG - http://www.commandprompt.com/
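If pg_config is installed but build scripts still can't find it, the usual fix is to locate it and put its directory on PATH. A sketch (the source-install prefix below is a typical default, not a certainty):

```sh
# Find pg_config from the RPM, or in a source install's bin directory:
rpm -ql postgresql-devel | grep pg_config      # RPM-based install
export PATH=/usr/local/pgsql/bin:$PATH         # typical source-install prefix
pg_config --includedir                         # sanity check: prints include path
```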