Re: [GENERAL] weird problems with DBI/DBD::Pg?
On Mon, 30 Apr 2001 14:50:15 -0400, Michelle Murrain wrote:

> I recently upgraded from 6.5 to 7.1, and it mostly went smoothly (fixed the PHP problem, thanks to a list member). But now some of my Perl stuff is being a bit strange, and I'm wondering whether other folks have noticed strangeness when they upgraded their Postgres as well as upgraded to the new DBD::Pg. First off, my error strings seem to be cut off - so when an error happens, I only get the first few (not at all predictable, either) characters from the error string.

Which version of DBD::Pg is it? Versions 0.96 and onwards remove the newlines from the ends of errors, so that the file and line number which caused them can be appended. My patch (in 0.96 and 0.97) did this by working from the end of the string, but the patch in 0.98 works from the beginning, and truncates the error at the first newline. This prevents the core dumps which my patch caused, but it will truncate multiline errors. If that's what's happening, I'll submit another patch which starts from the end but doesn't core-dump.

> Second, I have this strange situation where a script that does a query on one table of a database gives a reasonable result, but returns an internal server error when a different table is queried. (This worked prior to the upgrade.) Plus, the weird thing is that this script has a graceful exit subroutine, and shouldn't just barf like that.

This could be the core dump problem that my patch caused, but you shouldn't be seeing both core dumps and truncated errors.

-- Peter Haworth [EMAIL PROTECTED]

The most important thing in the programming language is the name. A language will not succeed without a good name. I have recently invented a very good name and now I am looking for a suitable language. -- D. E. Knuth, 1967

---(end of broadcast)--- TIP 3: if posting/reading through Usenet, please send an appropriate subscribe-nomail command to [EMAIL PROTECTED] so that your message can get through to the mailing list cleanly
Re: [GENERAL] Improve a query...
Eric G. Miller [EMAIL PROTECTED] writes:

> Looking for the best way to formulate a query to select the most recent entry for an organization in a table like:

Take a look at the SELECT reference page's example for SELECT DISTINCT ON. For example,

    SELECT DISTINCT ON (location) location, time, report
    FROM weatherReports
    ORDER BY location, time DESC;

retrieves the most recent weather report for each location. A tad nonstandard, but bloody effective, particularly if you have an appropriate index in place...

regards, tom lane
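The "appropriate index" mentioned in the reply would be one that matches the ORDER BY clause. A hedged sketch, reusing the table from the example above (the DESC option on an index column assumes a reasonably recent PostgreSQL release, well after the 7.1 era of this thread):

```sql
-- Index matching the DISTINCT ON / ORDER BY, so each location's
-- newest report can be found without a full sort:
CREATE INDEX weather_loc_time_idx ON weatherReports (location, time DESC);

SELECT DISTINCT ON (location) location, time, report
FROM weatherReports
ORDER BY location, time DESC;
```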
Re: [GENERAL] Re: do functions cache views?
Thomas F. O'Connell [EMAIL PROTECTED] writes:

> > If it's not those issues then we'll need more details --- preferably a self-contained example.
> ok. here are some more details:

You still have not given an example that would allow someone else to reproduce the behavior you think is broken. Please see http://www.postgresql.org/devel-corner/docs/postgres/bug-reporting.html concerning what constitutes a useful bug report.

regards, tom lane
RE: [GENERAL] Ideal hardware configuration for pgsql
> Tomorrow I'd like to place an order for something more robust... looking into dual PIII, gig of ram and SCSI Raid. Planning on running Red Hat 7.1 on this machine.

Maybe an AMD Athlon 1.33GHz would be better. It's a very fast CPU, but I don't know if PostgreSQL runs faster on a dual since I don't know if it can handle the load balancing. Well, a gig of RAM is never bad... :) SCSI RAID should secure your data as well. You should use IBM HDs. They never break in a million years ;) And if they do, you have a 5-year guarantee...

> Your input is tremendously appreciated!

Don't know if Sun machines help in your case since I don't know Suns ;) But I guess others could help you better if they knew some more details about the use of the server. greets
Re: [GENERAL] Serial Type
Hi Chris, If you do an INSERT straight into the table, AND also make sure you give it the value you want in the serial column, it will accept it and not increment the serial column's counter. There are also a few functions which are useful with the serial type: currval(), setval(), nextval(). See http://odbc.postgresql.org/docs/index.php?sql-createsequence.html for more info. Regards and best wishes, Justin Clift

Christian Marschalek wrote:
> The serial data type automatically takes the next higher value, but what if there's a drop and I want to assign 3 even if there are 20 records and the next higher number would be 21, not 3? Tia Chris

-- My grandfather once told me that there are two kinds of people: those who work and those who take the credit. He told me to try to be in the first group; there was less competition there. - Indira Gandhi
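As a sketch of what the reply describes (the sequence name follows the usual table_column_seq convention, which is an assumption about how the serial column was created; adjust to your actual names):

```sql
-- Insert an explicit value into a serial column; the sequence
-- counter is NOT advanced by this:
INSERT INTO mytable (id, name) VALUES (3, 'filled hole');

-- The three sequence helpers mentioned above:
SELECT nextval('mytable_id_seq');     -- advance the counter and return it
SELECT currval('mytable_id_seq');     -- last value returned in this session
SELECT setval('mytable_id_seq', 21);  -- reset the counter, e.g. after a bulk load
```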
Re: [GENERAL] disk usage advice needed
On Wed, May 02, 2001 at 10:45:09AM -0400, Bruce Momjian wrote:

> > I wish these two tables to live on two separately.
> I just wrote two articles, one on performance and the other on writing PostgreSQL applications. You can get them at: http://candle.pha.pa.us/main/writings/pgsql/

Thank you very much. Your article was precisely what I was looking for. It pointed out to me that I need this query:

    select relfilenode, relname from pg_class where relname !~ '^pg';

Regards
[GENERAL] Update Triggers NULLs
I have a plpgsql update trigger on one of my tables. The intention of the trigger is to check to see if a particular field is updated, and if so INSERT a record into another table. The IF statement looks something like this:

    IF new.MyField <> old.MyField THEN

This works great when MyField has an actual value and is being changed to a new value. The problem is the expression is false if new.MyField or old.MyField are NULL. Is there a way to do this without having to write three different boolean expressions? Greg
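One way to fold the three cases into a single test is to spell out the NULL checks explicitly; newer PostgreSQL releases also offer `IS DISTINCT FROM`, which does exactly this in one operator (it did not exist in the 7.x releases this thread discusses, so that part is version-dependent, and the audit table here is hypothetical):

```sql
-- Explicit three-part test, works on old releases:
IF (new.MyField <> old.MyField)
   OR (new.MyField IS NULL AND old.MyField IS NOT NULL)
   OR (new.MyField IS NOT NULL AND old.MyField IS NULL) THEN
    INSERT INTO audit_log (changed_at) VALUES (now());  -- hypothetical target table
END IF;

-- Later releases allow simply:
IF new.MyField IS DISTINCT FROM old.MyField THEN ...
```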
[GENERAL] update ... from where id in (..) question
Hi, I have a table with approx. 2 million records. There were a few for which I had to update the status field, so I did:

    update table set statusflag = 'U' where id in ('id10', 'id20', 'id30');

This took so long that I cancelled it, and used separate

    update table set statusflag = 'U' where id = 'id10';

statements, which were executed in a fraction of a second. Does someone have an explanation for this? -- Feite Brekeveld [EMAIL PROTECTED]
RE: [GENERAL] Ideal hardware configuration for pgsql
At 7:28 +0200 5/2/2001, Christian Marschalek wrote:

> Maybe an AMD Athlon 1.33GHz would be better. It's a very fast CPU and I don't know if PostgreSQL runs faster on dual since I don't know if it can handle the load balancing? Well, a gig of RAM is never bad... :)

I would think that dual CPUs would help immensely due to the multiple postgres processes running simultaneously. -- Cafard, [EMAIL PROTECTED] qu'est-ce que tu penses? AIM:pr0j2501 Matt Kane's Brain http://mkb.n3.net ===jive turkey http://jive-turkey.n3.net===
RE: [GENERAL] Ideal hardware configuration for pgsql
I would make sure that an Intel box won't suit before looking at Sun, simply for cost; and if you're planning to run Linux on it, Sun support will be shit because they don't have skills in that area. Databases thrive on more spindles: separate the system spindles from the db spindles and swap spindles, and look at separating index tables from data tables and the WAL. RAID 3 or striping may be more suitable for the WAL (what happens if you lose the WAL?), whereas RAID 5, or a 1/5 combination, suits data and indexes. The chunk size on a RAID set may also be worth pursuing as a means of squeezing better performance from a dedicated db machine.

-- Ian Willis Systems Administrator Division of Entomology CSIRO GPO Box 1700 Canberra ACT 2601 ph 02 6246 4391 fax 02 6246 4000

-Original Message-
From: Ryan Mahoney [mailto:[EMAIL PROTECTED]]
Sent: Wednesday, 2 May 2001 8:35 AM
To: [EMAIL PROTECTED]
Subject: [GENERAL] Ideal hardware configuration for pgsql

Our db server running 7.1 got *torched* today; the system ran between 30% and 80% CPU all day! Right now the server is running on a Penguin Computing 800MHz PIII w/ 128MB RAM and IDE hardware. Tomorrow I'd like to place an order for something more robust... looking into dual PIII, a gig of RAM and SCSI RAID. Planning on running Red Hat 7.1 on this machine. Before I order, I was wondering if anyone had any suggestions or recommendations. I have been considering getting a Sun machine... but I don't know if there is a benefit. Also, are there any special considerations when running RAID and dual CPU? Your input is tremendously appreciated! -r

Ryan Mahoney CTO, Payment Alliance, Inc. [EMAIL PROTECTED] t. 718-721-0338 m. 718-490-5464 www.paymentalliance.net
[GENERAL] Re: Stranger than fiction - EXPLAIN results
I'm sorry that I did not send the EXPLAIN results along with my original email; I had no idea this command existed (I'm a first-week rookie with postgres!) Also, I did not know about vacuum, and apparently, that was the culprit! After vacuuming, times went from .433 sec/query to .001. Holy moly! I read up on vacuum in the Postgres book we have, and I guess since I've been doing so much testing, I've been really updating a LOT of rows in the table, because here's what the vacuum results looked like:

    010502.11:13:46.469 [3029] StartTransactionCommand
    010502.11:13:46.469 [3029] query: vacuum status;
    010502.11:13:46.469 [3029] ProcessUtility: vacuum status;
    010502.11:13:46.470 [3029] DEBUG: --Relation status--
    010502.11:13:46.978 [3029] DEBUG: Pages 5700: Changed 0, reaped 5700, Empty 0, New 0; Tup 224: Vac 364719, Keep/VTL 0/0, Crash 0, UnUsed 0, MinLen 100, MaxLen 208; Re-using: Free/Avail. Space 45161576/45149008; EndEmpty/Avail. Pages 1/5698. CPU 0.25s/0.25u sec.
    010502.11:13:46.980 [3029] DEBUG: Index status_5_column_index: Pages 4; Tuples 224: Deleted 0. CPU 0.01s/0.00u sec.
    010502.11:13:50.641 [3029] DEBUG: Index status_site_id_key: Pages 920; Tuples 224: Deleted 364719. CPU 0.24s/3.36u sec.
    010502.11:13:54.335 [3029] DEBUG: Index status_host_id_key: Pages 930; Tuples 224: Deleted 364719. CPU 0.17s/3.46u sec.
    010502.11:13:55.252 [3029] DEBUG: Rel status: Pages: 5700 -> 4; Tuple(s) moved: 224. CPU 0.78s/0.14u sec.
    010502.11:13:55.256 [3029] DEBUG: Index status_5_column_index: Pages 6; Tuples 224: Deleted 224. CPU 0.00s/0.00u sec.
    010502.11:13:55.307 [3029] DEBUG: Index status_site_id_key: Pages 920; Tuples 224: Deleted 224. CPU 0.05s/0.00u sec.
    010502.11:13:55.358 [3029] DEBUG: Index status_host_id_key: Pages 930; Tuples 224: Deleted 224. CPU 0.05s/0.00u sec.
    010502.11:13:55.385 [3029] CommitTransactionCommand

Can someone explain these results to me? What does it mean when it says Deleted 364719 for Index status_site_id_key, for example?
And then later on it goes back to that index and deletes another 224. What is going on internally? Also, after vacuuming the table, I reran my select query:

    010502.11:22:57.579 [3029] StartTransactionCommand
    010502.11:22:57.579 [3029] query: select * from status s where s.site_id = 18 and s.host_id = 49 and s.product = 'BETA' and s.class = 'APPS' and s.subclass = 'MONITOR' ;
    010502.11:22:57.581 [3029] ProcessQuery
    010502.11:22:57.582 [3029] CommitTransactionCommand

As you can see, .001s execution time! However, EXPLAIN still shows:

    010502.11:24:05.001 [3029] StartTransactionCommand
    010502.11:24:05.002 [3029] query: explain select * from status s where s.site_id = 18 and s.host_id = 49 and s.product = 'BETA' and s.class = 'APPS' and s.subclass = 'MONITOR' ;
    010502.11:24:05.002 [3029] ProcessUtility: explain select * from status s where s.site_id = 18 and s.host_id = 49 and s.product = 'BETA' and s.class = 'APPS' and s.subclass = 'MONITOR' ;
    010502.11:24:05.003 [3029] NOTICE: QUERY PLAN:
    Seq Scan on status s (cost=0.00..9.04 rows=1 width=84)
    010502.11:24:05.003 [3029] CommitTransactionCommand

Which is incorrect because it should be using my index, shouldn't it? Here is how I created my index:

    monitoring=# create index status_5_column_index on status (site_id, host_id, product, class, subclass);

And a description of the table shows the index is there:

    monitoring=# \d status
            Table "status"
     Attribute | Type      | Modifier
    -----------+-----------+----------
     site_id   | bigint    | not null
     host_id   | bigint    | not null
     product   | varchar() | not null
     class     | varchar() | not null
     subclass  | varchar() | not null
     status    | varchar() | not null
     msg       | varchar() |
     tstamp    | timestamp |
    Indices: status_5_column_index, status_host_id_key, status_site_id_key

So it's puzzling me why it's still doing a Seq Scan vs. an Index Scan. Any ideas? Thanks, Fran
Re: [GENERAL] Serial Type
Christian Marschalek writes:

> The serial data type automatically takes the next higher value, but what if there's a drop and I want to assign 3 even if there are 20 records and the next higher number would be 21, not 3?

Then you can't use the serial type. Coding hole-filling serial numbers can be quite tricky and slow. -- Peter Eisentraut [EMAIL PROTECTED] http://funkturm.homeip.net/~peter
Re: [GENERAL] Re: Stranger than fiction - EXPLAIN results
Fran Fabrizio [EMAIL PROTECTED] writes:

> Seq Scan on status s (cost=0.00..9.04 rows=1 width=84)
> 010502.11:24:05.003 [3029] CommitTransactionCommand
> Which is incorrect because it should be using my index, shouldn't it? Here is how I created my index:

If a table is small (you have ~200 rows in it, right?) a sequential scan will usually win even when there is an index. Enjoy your improved query time and be happy. ;) -Doug

-- The rain man gave me two cures; he said jump right in, The first was Texas medicine--the second was just railroad gin, And like a fool I mixed them, and it strangled up my mind, Now people just get uglier, and I got no sense of time... --Dylan
[GENERAL] RE: Joining more than 2 tables
On Wed, 2 May 2001, Jeff Meeks wrote:

> What I am looking for is a query that will return a list of id's with a sum from table b and a sum from table c, like this:
>
>     id  name   sum(b)  sum(c)
>     1   shell  34      50
>     2   jeff   40      20
>
> Thanks, Jeff Meeks [EMAIL PROTECTED]

    SELECT id, name,
           (SELECT sum(b) FROM b WHERE b.id = a.id) AS sum_b,
           (SELECT sum(c) FROM c WHERE c.id = a.id) AS sum_c
    FROM a;

-- Joel Burton [EMAIL PROTECTED] Director of Information Systems, Support Center of Washington
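The correlated-subquery form above avoids a classic pitfall: joining a to both b and c directly and then calling sum() multiplies the rows and inflates both sums. An equivalent join-based sketch (table and column names as in the reply) aggregates each side first so the counts stay correct:

```sql
SELECT a.id, a.name, sb.sum_b, sc.sum_c
FROM a
LEFT JOIN (SELECT id, sum(b) AS sum_b FROM b GROUP BY id) sb ON sb.id = a.id
LEFT JOIN (SELECT id, sum(c) AS sum_c FROM c GROUP BY id) sc ON sc.id = a.id;
```

The LEFT JOINs keep rows of a that have no matches in b or c, where the sums come back NULL rather than dropping the row.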
[GENERAL] Re: Stranger than fiction...
> This table currently has 224 rows of data in it. The following queries *ALL* take approx. .433 seconds to run.
>
>     select * from status s where s.site_id = 18 and s.host_id = 49 and s.product = 'BETA' and s.class = 'APPS' and s.subclass = 'MONITOR';
>     select * from status s where s.site_id = 18 and s.host_id = 49 and s.product = 'BETA';
>     select * from status s where s.site_id = 18 and s.host_id = 49;
>     select * from status s where s.site_id = 18;
>     select * from status;
>
> Nothing gets triggered on a select off of this table. This table did have a view attached, but I dropped the view with the same results. So, I'm totally baffled as to how a simple select * on a 224-row table could take almost half a second to execute! Please help the neophyte. =)

Have you VACUUMed the table (with or without ANALYZE)? Beyond that, we really need the results of the EXPLAIN to see what the database is doing and how to improve on that. Greg
RE: [GENERAL] update ... from where id in (..) question
This is FAQ item 4.23:

4.23) Why are my subqueries using IN so slow? Currently, we join subqueries to outer queries by sequentially scanning the result of the subquery for each row of the outer query. A workaround is to replace IN with EXISTS. Change

    SELECT * FROM tab WHERE col1 IN (SELECT col2 FROM TAB2)

to:

    SELECT * FROM tab WHERE EXISTS (SELECT col2 FROM TAB2 WHERE col1 = col2)

We hope to fix this limitation in a future release.

Mike Mascari [EMAIL PROTECTED]
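The same FAQ workaround applies to UPDATEs whose IN list comes from a subquery. A hedged sketch using the statusflag example from this thread (the pending_ids table is hypothetical, standing in for wherever the id list would come from):

```sql
-- Instead of the slow subquery-IN form:
UPDATE mytable SET statusflag = 'U'
WHERE id IN (SELECT id FROM pending_ids);

-- Use the EXISTS form from the FAQ:
UPDATE mytable SET statusflag = 'U'
WHERE EXISTS (SELECT 1 FROM pending_ids p WHERE p.id = mytable.id);
```

Note that the original poster's IN list was a list of literal values rather than a subquery; that form is expanded into simple OR conditions, so the FAQ item may not be the whole story for that particular statement.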
[GENERAL] Re: update ... from where id in (..) question
> I have a table with approx. 2mln records. There were a few for which I had to update statusfield, so I did: update table set statusflag = 'U' where id in ('id10', 'id20', 'id30'); this took so long that I cancelled it, and used separate update table set statusflag = 'U' where id = 'id10'; statements, which were executed in a fraction of a second. Has someone an explanation for this?

http://www.postgresql.org/docs/faq-english.html#4.23
[GENERAL] big pg 6.5 and 7.1 problem in simple application
We have a simple ad tracking application, which has a (mostly) fixed table size where each row represents a particular ad. We have about 70 rows in the database and use php scripts in apache which connect over odbc, read a single row, increment a counter, and update that row. We're performing about 30 updates a second and after a few minutes the postmaster either hangs or dumps core. We've tried this scenario on both pg 6.5 and 7.1 on redhat linux, from redhat's rpms and built from source with the same results. We launch 256 backends with a reasonable shared buffer size. We're using the unixodbc's odbc driver version 2.0.5. I don't think we're doing row locks for the query, but that shouldn't crash it - it should just give us bad data. The tables, selects, and update calls are all pretty simple, so I'm baffled by this behavior. Has anyone else seen this problem, or have a solution?
Re: [GENERAL] Update Triggers NULLs
Gregory Wood [EMAIL PROTECTED] writes:

> I have a plpgsql update trigger on one of my tables. The intention of the trigger is to check to see if a particular field is updated, and if so INSERT a record into another table. The IF statement looks something like this: IF new.MyField <> old.MyField THEN. This works great when MyField has an actual value and is being changed to a new value. The problem is the expression is false if new.MyField or old.MyField are NULL. Is there a way to do this without having to write three different boolean expressions?

No, you need to check for NULL explicitly using ISNULL. Is there some problem with using three expressions? -Doug
[GENERAL] PL/Perl documentation ...
Are there any more examples available of writing functions in PL/Perl? In the standard documentation there are only 2 examples. Thanks in advance for any help. Adam
[GENERAL] DROP TABLE wildcard
Is it possible to drop multiple tables with SQL on the system tables? I tried this:

    DROP TABLE from pg_tables where tablename LIKE 'table_num_%';

which for whatever reason deleted 0 items, despite the fact that it should have matched several. Of course, I'm not even sure pg_tables would be the smart place to make the delete from? Thanks.
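For what it's worth, DROP TABLE does not accept a subquery or wildcard, and deleting rows from pg_tables (a view over the system catalogs) would not drop tables even if it worked. A common workaround is to generate the statements with a query and feed the output back to psql; a sketch, using the pattern from the question (note that `_` is itself a LIKE wildcard, hence the backslash escapes):

```sql
SELECT 'DROP TABLE ' || tablename || ';'
FROM pg_tables
WHERE tablename LIKE 'table\_num\_%';
```

Save the result to a file (or pipe it) and run it as a script.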
Re: [GENERAL] big pg 6.5 and 7.1 problem in simple application
Aaron Brashears [EMAIL PROTECTED] writes:

> We have a simple ad tracking application, which has a (mostly) fixed table size where each row represents a particular ad. We have about 70 rows in the database and use php scripts in apache which connect over odbc, read a single row, increment a counter, and update that row. We're performing about 30 updates a second and after a few minutes the postmaster either hangs or dumps core.

I'd try compiling 7.1 with debugging enabled, and do a GDB backtrace on your core dumps. Otherwise it's hard to help you. -Doug
[GENERAL] Cannot build PL/Perl ...
"Cannot build PL/Perl because libperl is not a shared library." I've got this error when I tried to compile PL/Perl. I have SuSE Linux 7.0. Do I have to get a libperl.so library and put it in /usr/lib? Thanks for any help. Adam
Re: [GENERAL] disk usage advice needed
> Hello. My data directory (pg 7.1, linux 2.4.4) looks like:
>
>     $ ls
>     PG_VERSION base/ global/ pg_hba.conf pg_ident.conf pg_xlog/ postgresql.conf postmaster.opts
>
> With 7.0.3 I remember(?) that there were files named according to table names. With 7.1 I don't know which disk file is associated with which table. What I would like to do is spread the files between two IDE disks in order to gain some performance. I heard somewhere that I can symlink some of the files to a different disk to gain some speed, and I was wondering which of those files are my tables. What advice would you give if I can split pg data between two disks? I think all of the data is in the base/ directory, and it looks like this: 1/ 18719/ 18836/ 573216/ I currently have two big tables that are constantly (1-5 times per second the whole day) being accessed, always simultaneously, and I wish these two tables to live on two separate disks.

I just wrote two articles, one on performance and the other on writing PostgreSQL applications. You can get them at: http://candle.pha.pa.us/main/writings/pgsql/

-- Bruce Momjian | http://candle.pha.pa.us [EMAIL PROTECTED] | (610) 853-3000 + If your life is a hard drive, | 830 Blythe Avenue + Christ can be your backup. | Drexel Hill, Pennsylvania 19026
RE: [GENERAL] Ideal hardware configuration for pgsql
We just bought a brand new Sun Netra X1. List price from Sun was $995.00; yes, under one grand. It is a 1U tall box. For once Sun beats Intel prices. It comes with Solaris 8 preinstalled; basically just plug in and boot. We got a discount to $907. We upgraded the RAM to 1GB (it uses PC133 RAM). We also added a second drive and do a two-way mirror. I tested it by pulling the power cable from one drive while Postgres was running. It worked, no crash. The box is not super fast but useful for many purposes. My test database has 1M rows by 40 columns. With the 1GB RAM, performance is just OK. I used a dual Xeon box (2MB L2 cache, 1GB RAM, SCSI 160) that was faster than the Sun Netra X1 but cost 6x more. My ideal box would have multiple CPUs, at least SCSI 160 drives or, better, a hardware RAID box, and 4GB RAM.
[GENERAL] Security and performance
I'm writing the database backend to a web application. Being paranoid I want to limit the damage/exposure that the application can do. One way would be to create a database user for each application user (i.e. login name) and to create views for each user, not giving them any permissions on sensitive tables but only letting them see their own data through the views. How would that affect the database as the number of users climbs through the hundreds to the thousands? Would the thousands of views slow the database down? Is there an upper limit to the number of views? -- Bruce Bitterly it mathinketh me, that I spent mine wholle lyf in the lists against the ignorant. -- Roger Bacon, Doctor Mirabilis
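An alternative to one view per user is a single view that filters on the session user, which keeps the catalog the same size no matter how many users there are. A hedged sketch (the orders table, owner column, and view name are hypothetical stand-ins for the application's own schema):

```sql
-- One shared view; each user sees only rows they own:
CREATE VIEW my_orders AS
    SELECT * FROM orders WHERE owner = current_user;

REVOKE ALL ON orders FROM PUBLIC;     -- no direct access to the base table
GRANT SELECT ON my_orders TO PUBLIC;  -- everyone reads through the view
```

This assumes each application user connects as their own database user, as the question proposes, so that current_user identifies them.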
[GENERAL] Strange SERIAL / Sequence error
I create a table with:

    CREATE TABLE tab (asdf SERIAL, asdf2 char(20) NOT NULL);

Then I try to insert some data:

    INSERT INTO tab (asdf, asdf2) VALUES (NEXTVAL('tab_asdf_seq'::text), 'asdf')
    ERROR: Relation 'tab_asdf_seq' does not exist

(done with phpPgAdmin 2.3)
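For comparison, a serial column already carries a DEFAULT of nextval on its own sequence, so the usual pattern is to omit the column from the INSERT entirely and let the default supply the value; if the sequence really is missing, that suggests the table was created in some unusual way or the sequence was dropped separately (a guess, since the error alone doesn't say):

```sql
CREATE TABLE tab (asdf SERIAL, asdf2 char(20) NOT NULL);

-- Let the serial column's default fire:
INSERT INTO tab (asdf2) VALUES ('asdf');

-- The value it was given:
SELECT currval('tab_asdf_seq');
```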
[GENERAL] group by is quite expensive
What can I do to speed up queries like the following?

    select count(*) from ttt group by xxx;

Using Postgres 7.1 on Solaris 8, the above query on a 1M row database takes 6 times longer than

    select count(*) from ttt;

With Postgres, GROUP BY is apparently quite expensive. Using Oracle and MySQL with the same query and the same data, the first query takes only 25% longer. I think both of these DBMSes use a single sequential scan of the data while PostgreSQL (judging from EXPLAIN) uses a multi-pass process. Yes, I did a VACUUM ANALYZE. Is there anything I as a user can do? Build some kind of index? In absolute times GROUP BY is a killer. All queries on a single 1M row table that include GROUP BY take about 3 minutes. Oracle takes about 40 seconds and MySQL about 25 seconds. Here is what EXPLAIN shows:

    alberch=# explain select count(*) from tyc_main group by nphoto;
    NOTICE: QUERY PLAN:
    Aggregate (cost=170404.22..175695.88 rows=105833 width=2)
      -> Group (cost=170404.22..173050.05 rows=1058332 width=2)
        -> Sort (cost=170404.22..170404.22 rows=1058332 width=2)
          -> Seq Scan on tyc_main (cost=0.00..49705.32 rows=1058332 width=2)
    EXPLAIN

    alberch=# explain select count(*) from tyc_main;
    NOTICE: QUERY PLAN:
    Aggregate (cost=52351.15..52351.15 rows=1 width=0)
      -> Seq Scan on tyc_main (cost=0.00..49705.32 rows=1058332 width=0)
    EXPLAIN

When I run the queries the last takes 18 sec, the first a little over 5 _minutes_.
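As the plan shows, releases of that era implement GROUP BY with a full explicit sort of the scanned rows, and that sort dominates the runtime. Two things worth trying, offered as hedged suggestions rather than guaranteed wins: give the per-backend sort more memory, and see whether an index on the grouped column changes the plan (the index name below is made up):

```sql
SET sort_mem = 8192;  -- per-backend sort memory, in KB; default is much smaller

CREATE INDEX tyc_main_nphoto_idx ON tyc_main (nphoto);

-- Re-check whether the sort step shrinks or disappears:
EXPLAIN SELECT count(*) FROM tyc_main GROUP BY nphoto;
```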
Re: [GENERAL] Cannot build PL/Perl ...
At 09:59 PM 5/2/2001 +0200, you wrote:

> "Cannot build PL/Perl because libperl is not a shared library." I've got this error when I tried to compile PL/Perl. I have SuSE Linux 7.0. Do I have to get a libperl.so library and put it in /usr/lib? Thanks for any help. Adam

Hi Adam, I went through getting PL/Perl working not too long ago; this part of the docs talks about what you need to do: http://www.ca.postgresql.org/users-lounge/docs/7.1/programmer/plperl.html By the way, if you intend to use Perl functions with triggers, you should be warned now that you can't; it's not supported with 7.0. Although I don't know about 7.1. Andy.
RE: [GENERAL] Improve a query...
Try: SELECT DISTINCT ON (org_id) rpt_id, org_id, period,... FROM reports ORDER BY period DESC; -Original Message- From: Eric G. Miller [SMTP:[EMAIL PROTECTED]] Sent: Tuesday, May 01, 2001 4:04 AM To: PostgreSQL General Subject: [GENERAL] Improve a query... Looking for the best way to formulate a query to select the most recent entry for an organization in a table like: CREATE TABLE reports ( -- Report Id used to link up related 1:M rather than multi-key rpt_id SERIAL NOT NULL PRIMARY KEY, -- A Unique ID for the organization org_id char(10) NOT NULL CHECK(CHARACTER_LENGTH(op_id) = 10), -- The reporting period period integer NOT NULL -- Various and Sundry ... . . . UNIQUE (org_id,period) ); If max(period) for an organization yields the most recent reporting, I want to get whatever is the most recent report for each organization. This query works, but seems expensive... SELECT a.rpt_id, a.org_id, a.period, ... FROM reports As a INNER JOIN (SELECT b.org_id, max(b.period) As period FROM reports b group by b.org_id) As c ON a.org_id = c.org_id and a.period = c.period; EXPLAIN looks thusly: NOTICE: QUERY PLAN: Merge Join (cost=147.98..164.48 rows=10 width=48) - Sort (cost=69.83..69.83 rows=1000 width=32) - Seq Scan on reports a (cost=0.00..20.00 rows=1000 width=32) - Sort (cost=78.15..78.15 rows=100 width=16) - Subquery Scan c (cost=69.83..74.83 rows=100 width=16) - Aggregate (cost=69.83..74.83 rows=100 width=16) - Group (cost=69.83..72.33 rows=1000 width=16) - Sort (cost=69.83..69.83 rows=1000 width=16) - Seq Scan on reports b (cost=0.00..20.00 rows=1000 width=16) The data is very hierarchical so I didn't want to carry around alot of key fields in related many-sided tables which may also have related many-sided tables. Any ideas on how to minimize the multiple scans on the table? The numbers for explain probably aren't telling much since there's not much real data in the table at this time... -- Eric G. 
Miller [EMAIL PROTECTED]
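[Editor's note] The join-on-max formulation above can be tried end to end in a few lines. Here's a sketch using Python's sqlite3 as a stand-in for PostgreSQL (sqlite has no DISTINCT ON, and the table contents are invented for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE reports (
        rpt_id INTEGER PRIMARY KEY,
        org_id TEXT NOT NULL,
        period INTEGER NOT NULL,
        UNIQUE (org_id, period)
    )""")
conn.executemany(
    "INSERT INTO reports (org_id, period) VALUES (?, ?)",
    [("org-a", 2000), ("org-a", 2001), ("org-b", 1999), ("org-b", 1998)])

# Most recent report per organization, via the join-on-max approach
# from the post above:
rows = conn.execute("""
    SELECT a.rpt_id, a.org_id, a.period
    FROM reports AS a
    JOIN (SELECT org_id, max(period) AS period
          FROM reports GROUP BY org_id) AS c
      ON a.org_id = c.org_id AND a.period = c.period
    ORDER BY a.org_id""").fetchall()
print([(org, period) for _, org, period in rows])
# [('org-a', 2001), ('org-b', 1999)]
```

Whether the DISTINCT ON form or the join form is cheaper depends on the planner and the data; on PostgreSQL, EXPLAIN both.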
[GENERAL] Re: best way to implement producer/consumer in Perl
On 2 May 2001, Vivek Khera wrote: I've got an application that creates work and posts the work requests to a table. Each work request is just a single row in the table. I've also got two systems that do the work based on the requests in the table. It seems that ideally, I could use the LISTEN/NOTIFY features to tell the consumers when there is work. However, what I really need is a blocking LISTEN. That is, it just sits there until it gets a NOTIFY. Right now, the consumer checks for work every so often, and most of the time finds none. What's the best way to implement such a thing in Perl? Is there a blocking LISTEN?

I'm not a serious expert on such things, but, in Perl, LISTEN/NOTIFY is handled through DBI/DBD, so the LISTEN/NOTIFY notification will come only when a regular query is sent. Could you possibly do something like:

    while (1) {
        execute a regular, simple query (e.g. SELECT 1;)
        check for notification, and handle it
        sleep for x minutes/seconds/msecs
    }

It's still a loop with a query, but at least the query should be extremely speedy and have low impact on DB performance. If you find a better way, please let me know. HTH,

-- Joel Burton [EMAIL PROTECTED] Director of Information Systems, Support Center of Washington
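[Editor's note] The poll-and-sleep loop sketched above looks roughly like this in runnable form. This is a minimal illustration only: Python's sqlite3 stands in for the DBI/DBD::Pg connection, the `work` table and its columns are invented, and the loop is bounded so it terminates rather than running forever:

```python
import sqlite3
import time

def poll_for_work(conn, interval=0.01, max_polls=10):
    """Poll the work table until rows appear or we give up.

    A stand-in for the query-check-sleep loop described above: issue a
    cheap query, handle any work found, then sleep before retrying.
    """
    for _ in range(max_polls):
        rows = conn.execute(
            "SELECT id, payload FROM work WHERE done = 0").fetchall()
        if rows:
            # "Handle" the work by marking it done.
            for row_id, _payload in rows:
                conn.execute("UPDATE work SET done = 1 WHERE id = ?",
                             (row_id,))
            return rows
        time.sleep(interval)
    return []

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE work (id INTEGER PRIMARY KEY,"
             " payload TEXT, done INTEGER DEFAULT 0)")
conn.execute("INSERT INTO work (payload) VALUES ('resize image')")
jobs = poll_for_work(conn)
print(len(jobs))  # 1
```

With real LISTEN/NOTIFY the cheap query would be the trigger for checking pending notifications, exactly as Joel describes.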
Re: [GENERAL] Re: SQL Where Like - Range it?!
On Mon, Apr 30, 2001 at 01:11:21AM -0400, Tom Lane wrote: Always remember that the cost estimates quoted by EXPLAIN are estimates, not reality. In this case the reason for the difference is that the planner doesn't have any detailed understanding of the semantics of bracket-expressions in regexps, so it doesn't realize that ^[AB] could usefully use an index. It wants to see ^ followed by at least one character of fixed pattern before it will think about an indexscan...

So the query planner doesn't use indexes for ~'^[A-F]' then? (meaning, such regex queries are all sequential scans...) Or are you saying that just the explainer routines don't? (meaning, EXPLAIN doesn't grok as well as the actual fetch-and-match routines do...) hmm?

-- don't visit this page. it's bad for you. take my expert word for it. http://www.salon.com/people/col/pagl/2001/03/21/spring/index1.html [EMAIL PROTECTED] http://sourceforge.net/projects/newbiedoc -- we need your brain! http://www.dontUthink.com/ -- your brain needs us!
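[Editor's note] Whatever the planner does with bracket expressions, one standard workaround (not from this thread) is to rewrite the anchored character class as a plain range condition on the leading character, which any planner can match against a btree index. The idea, shown with Python's sqlite3 and made-up data:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE names (name TEXT)")
conn.executemany("INSERT INTO names VALUES (?)",
                 [("Alice",), ("Bob",), ("Frank",), ("Zoe",)])
conn.execute("CREATE INDEX names_idx ON names (name)")

# A regex like ~ '^[A-F]' can be rewritten as an index-friendly
# range condition on the first character:
rows = conn.execute(
    "SELECT name FROM names WHERE name >= 'A' AND name < 'G' ORDER BY name"
).fetchall()
print([r[0] for r in rows])  # ['Alice', 'Bob', 'Frank']
```

The range form is equivalent to the bracket expression only when the class is a contiguous run like A-F; collation differences between databases also matter, so check EXPLAIN on the real system.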
[GENERAL] Re: DROP TABLE wildcard
On Wed, 2 May 2001, Andy Koch wrote: Is it possible to drop multiple tables with SQL on the system tables? I tried this: DROP TABLE from pg_tables where tablename LIKE 'table_num_%'; which for whatever reason deleted 0 items, even though it should have matched several. Of course I'm not even sure pg_tables would be the smart place to make the delete from?

First of all, DROP TABLE doesn't use SELECT SQL syntax. There is no DROP TABLE ... FROM ... Second, pg_tables is a *view* of pg_class. If you want to make changes, make them to pg_class. That said, though, I don't think you want to drop tables by just deleting the rows from pg_class. You might want to hear what the hackers have to say about the subject, but I'm assuming it's a Bad Idea. No, there is no wildcard in the DROP TABLE syntax. You could, though, make a plpgsql function that dropped tables, and took a text parameter that it used as a regex. Then you could SELECT dev_drop_table('tblfoo.*'). (Or, instead of a regex-able param, use a LIKE-able param.)

-- Joel Burton [EMAIL PROTECTED] Director of Information Systems, Support Center of Washington
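[Editor's note] The helper-function idea can be sketched outside plpgsql, too: query the catalog for matching names, then issue one DROP TABLE per name. Here's a hypothetical `drop_tables_like` in Python against sqlite3, whose `sqlite_master` catalog stands in for pg_class (the names and pattern are invented):

```python
import sqlite3

def drop_tables_like(conn, pattern):
    """Drop every table whose name matches the LIKE pattern.

    Note: in LIKE, '_' matches any single character, so escape it
    if you need a literal underscore.
    """
    names = [r[0] for r in conn.execute(
        "SELECT name FROM sqlite_master"
        " WHERE type = 'table' AND name LIKE ?", (pattern,))]
    for name in names:
        conn.execute('DROP TABLE "%s"' % name)
    return names

conn = sqlite3.connect(":memory:")
for t in ("table_num_1", "table_num_2", "keep_me"):
    conn.execute("CREATE TABLE %s (id INTEGER)" % t)

dropped = drop_tables_like(conn, "table_num_%")
print(sorted(dropped))  # ['table_num_1', 'table_num_2']
```

A plpgsql version would do the same against pg_class, building each DROP TABLE statement and executing it, rather than deleting catalog rows directly.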
Re: [GENERAL] Unique or Primary Key?
On Wed, May 02, 2001 at 06:50:09PM -0700, some SMTP stream spewed forth: On Thu, May 03, 2001 at 12:58:03AM +0100, [EMAIL PROTECTED] wrote: This table is man-in-the-middle of a many-to-many relationship:

    CREATE TABLE cv_entries (
        subscriber INTEGER NOT NULL REFERENCES subscribers
            ON DELETE CASCADE ON UPDATE CASCADE,
        entry_type INTEGER NOT NULL REFERENCES cv_entry_types
            ON DELETE CASCADE ON UPDATE CASCADE,
        ordinal INTEGER,
        value1 TEXT,
        value2 TEXT,
        minimum_trust SMALLINT,
        UNIQUE(subscriber, entry_type, ordinal)
    );

I used a unique index here because I couldn't see any reason for a Primary Key - this table will always be searched on either the subscriber or entry_type index. Was I wrong? Should this be a Primary Key? I think it's a distinction without a difference. A primary key is just a way to identify a unique tuple that's been chosen from a possible set of candidate keys (often there's only one candidate). And, primary keys are enforced with a unique index... Just to expand on Eric's response, a PRIMARY KEY directive creates a unique, not-null column, whereas a unique column can be null. In this instance, your columns are already not null, so, as Eric responded, it is basically the same thing. gh

-- Eric G. Miller [EMAIL PROTECTED]
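[Editor's note] The NULL behavior is the one place UNIQUE and PRIMARY KEY differ in practice. A quick illustration with Python's sqlite3, which enforces UNIQUE and NOT NULL the same way here (the `demo` table and its column names are made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE demo (
        key_col TEXT NOT NULL UNIQUE,  -- what a PRIMARY KEY gives you
        uq_col  TEXT UNIQUE            -- a plain UNIQUE column
    )""")

# A plain UNIQUE column still accepts NULL -- even several NULLs,
# since NULLs never compare equal to one another:
conn.execute("INSERT INTO demo VALUES ('a', NULL)")
conn.execute("INSERT INTO demo VALUES ('b', NULL)")

# The NOT NULL UNIQUE column behaves like a key and rejects NULL:
rejected = False
try:
    conn.execute("INSERT INTO demo VALUES (NULL, 'c')")
except sqlite3.IntegrityError:
    rejected = True
print(rejected)  # True
```

So in the cv_entries table above, where `ordinal` is nullable, UNIQUE(subscriber, entry_type, ordinal) is slightly weaker than a three-column primary key would be.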
RE: [GENERAL] Ideal hardware configuration for pgsql
I think that all this fat should be put on the fire. A nice performance test on the same high-end hardware would be good. Is there a test suite that would suit? Would anyone expect more than a 5% difference in performance between the OSes, even using the dreaded ext2 and not reiserfs or SGI XFS? Could there be wagers between the loudest in both camps? A 5% betting premium could apply, with all proceeds going to the PostgreSQL development team :) My preference for using Linux is that I like the licence and spirit of Linux more, and assuming that the performance difference is negligible I'll stick with it. Similarly, many find that the BSD licence and the associated community's stricter development methodologies appeal more, and they too will stick with that while there is a negligible performance difference. But realistically, after using both, I find that you can make one choke while the other sings if you choose your test carefully enough, and currently they get similar performance results on most general application tests. -- Ian Willis -Original Message- From: GH [mailto:[EMAIL PROTECTED]] Sent: Thursday, 3 May 2001 10:53 AM To: Ryan Mahoney Cc: [EMAIL PROTECTED] Subject: Re: [GENERAL] Ideal hardware configuration for pgsql On Wed, May 02, 2001 at 08:07:04PM +0100, some SMTP stream spewed forth: I only have experience with Red Hat, Solaris 8 (intel), and LinuxPPC. What do you see as the downside of running Red Hat? My intention is to run RH 7.1, although I can surely be swayed if you can offer some compelling FreeBSD benefits. FreeBSD is out-of-the-box more secure, more stable, and generally more enjoyable to work with than RedHat. If you had ever seen the power and beauty of FreeBSD, you would not continue using RedHat by choice. You probably need to see it to believe it. I encourage you to check it out sometime, but you should have no problem at all running PostgreSQL on RedHat. I'm out. gh BTW, the input on hardware was very useful.
I ordered a Dell today with a gig of RAM, dual 1GHz PIIIs, and RAID 1 18GB SCSI hard drives. I'm excited! -r At 06:52 PM 5/2/01 -0500, GH wrote: On Tue, May 01, 2001 at 11:35:13PM +0100, some SMTP stream spewed forth: Our db server running 7.1 got *torched* today; the system ran between 30% and 80% CPU all day! Right now the server is running on a Penguin Computing 800MHz PIII with 128MB RAM and IDE hardware. Tomorrow I'd like to place an order for something more robust... looking into dual PIII, a gig of RAM and SCSI RAID. Planning on running Red Hat 7.1 on this machine. I think that anyone whose opinion matters would recommend running something *other* than RedHat. FreeBSD is an excellent operating system and is well suited to a PostgreSQL environment. gh *snip* Your input is tremendously appreciated! -r Ryan Mahoney CTO, Payment Alliance, Inc. [EMAIL PROTECTED] t. 718-721-0338 m. 718-490-5464 www.paymentalliance.net --- Outgoing mail is certified Virus Free. Checked by AVG anti-virus system (http://www.grisoft.com). Version: 6.0.250 / Virus Database: 123 - Release Date: 4/18/01
Re: [GENERAL] Ideal hardware configuration for pgsql
Unfortunately there aren't any great Java ports for FreeBSD. Check out the Linux-compatibility Java support, linux-jdk13. I've found it to be about 95% as fast as something running under native Linux, but I get the perk of BSD's memory management, and I can typically run 1.4 times the apps/processes under BSD (Linux dies). With that, I gave up on the extra 5% speed and went for the higher load, and haven't looked back. -sc Linux also enjoys the attention of many bigger players such as IBM and Compaq. Let it... I'd rather have everyone focus on Linux so long as the emulation continues to be quick. If IBM tried to release stuff for both Linux and BSD, it'd tack another week to a month onto their development time. Eventually they'll wise up, but for now, emulation's paved my way to gold. -sc Dave
Re: [GENERAL] Tuple Max Size on 7.1
Sorry. Perl drivers are okay. Our server was moved, so I was pointing at an old version. After digging more into the CF problem, it does appear to be the drivers. I changed the odbc.ini file to point to the new drivers that come with the distribution, but CF still seems to try to use the old version. I renamed the old libpgsqlodbc.so and then it quit working, so it must have still been using it. CF bundles the ODBC drivers with its release, so I guess I need to move things around to get it working. CF bundled unixODBC PostgreSQL drivers with their distribution, but from what I've read, the ODBC drivers from pgsql 7.1 should also work. Am I going to need to install the unixODBC drivers instead? Thanks, David Tom Lane wrote: Doug McNaught [EMAIL PROTECTED] writes: It's possible that the client drivers you're using weren't updated to reflect the unlimited tuple size in 7.1. No; the error he's quoting is an internal backend failure, so it's not a client-side problem. It looks to me like TOAST is failing to get the job done on compacting an oversize tuple, which is why I wanted to know more about the table in question. regards, tom lane
[GENERAL] Perl Scope problem
I'm baffled by perl's scoping of variables. In the code below, the $cust_data hash ref is initialized outside the while loop. It's then set in the while loop with the results of a PgSQL query. In the if-else statement, $cust_data can be seen in the 'if' but not in the 'else' (if I try to print a value in the else, $cust_data->{'customer'}, I get an undeclared variable error). I understand that by using 'strict' I can't use any global variables. Can someone explain why this happens and how to make it work right?

    use strict;

    $cust_data = {};
    while ($condition) {
        ...
        $cust_data = get_cust_data();
        if ($condition2) {
            if (send_mail($cust_data)) {
                print $cust_data->{'customer'};
                ...
            }
        } else {
            if (send_mail($cust_data)) {
                print $cust_data->{'customer'};
                ...
            }
        }
    }

-- Randy Perry sysTame Mac Consulting/Sales