Re: [GENERAL] question regarding cursors in plpgsql, declare and bound params (v3 protocol)
Igor Shevchenko <[EMAIL PROTECTED]> writes:
> -- all other queries are sent using PQsendQueryParams
> -- the bound parameter for $1 is sent with this query
> declare search_messages_cursor no scroll cursor for select
> folder_msg.message_id from folder_msg where folder_msg.folder_id=7866 and
> folder_msg.mail_subj ilike $1;

Hm. That is not going to work, since the bound parameter only lasts for the execution of the query (i.e., the DECLARE). Possibly someone could look into saving the parameters presented during DECLARE along with the state of the cursor, but don't hold your breath ...

regards, tom lane
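One way around this, until such parameter saving exists, is to interpolate the value into the DECLARE text on the client side (with proper quoting) so the cursor's query carries a literal instead of a bound parameter. A minimal sketch, using '%invoice%' only as a stand-in for the real pattern:

  begin work;

  -- the pattern is embedded as a quoted literal, so the FETCHes inside the
  -- plpgsql function no longer depend on a bound parameter
  declare search_messages_cursor no scroll cursor for
    select folder_msg.message_id
    from folder_msg
    where folder_msg.folder_id=7866
      and folder_msg.mail_subj ilike '%invoice%';

  select add_messages_to_folder(7871,'search_messages_cursor'::refcursor);

  commit;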
Re: [GENERAL] Listing databases
Doug Hall wrote:
> Sorry for the newbie question, but how do you get PostgreSQL to list the
> available databases? I know how to log into a certain database, but not
> list all of them. I know this must be possible because the phppgAdmin web
> site demonstrates it with their trial server.
> Thanks, Doug Hall

Try \? in psql, it will show some really nice commands. \d will show you the tables, and "\d tablename" will describe the table. Try \dS and friends to see the system tables.

One of the best ways to see what is going on under the covers is to run psql --echo-hidden:

  --echo-hidden
      Echoes the actual queries generated by \d and other backslash commands. You can use this if you wish to include similar functionality into your own programs. This is equivalent to setting the variable ECHO_HIDDEN from within psql.

Whoever thought to add this to psql is a genius!!!

HTH...
Mark

--
Mark Harrison
Pixar Animation Studios
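For instance (the database name mydb below is just an example), --echo-hidden makes psql print the query it runs behind each backslash command before showing the result, roughly like this:

  $ psql --echo-hidden mydb
  mydb=> \l
  ********* QUERY **********
  SELECT d.datname as "Name", ...
    FROM pg_catalog.pg_database d ...
  **************************
       List of databases
  ...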
Re: [GENERAL] Listing databases
If you attach with psql then just use \l

Or this works as well: SELECT datname from pg_database;

> Sorry for the newbie question, but how do you get PostgreSQL to list
> the available databases? I know how to log into a certain database, but
> not list all of them. I know this must be possible because the
> phppgAdmin web site demonstrates it with their trial server.
>
> Thanks,
> Doug Hall
Re: [GENERAL] Listing databases
> Sorry for the newbie question, but how do you get PostgreSQL to list
> the available databases? I know how to log into a certain database, but
> not list all of them. I know this must be possible because the
> phppgAdmin web site demonstrates it with their trial server.

Use the \l command within psql.
--
Mike Nolan
[GENERAL] sql standard: \' or ''
I heard that the sql standard for escaping quotes is '', and not \'. Is this true?

BTW, PHP's pg_escape_string returns \' and not ''.
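For reference, a quick sketch of the two styles; doubling the quote is the SQL-standard form, while the backslash form is a PostgreSQL extension rather than standard SQL:

  -- SQL-standard escaping: double the single quote inside the literal
  SELECT 'It''s standard';

  -- backslash escaping: accepted by PostgreSQL, but an extension to the standard
  SELECT 'It\'s an extension';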
Re: [GENERAL] Listing databases
Doug Hall wrote:
> Sorry for the newbie question, but how do you get PostgreSQL to list the
> available databases? I know how to log into a certain database, but not
> list all of them. I know this must be possible because the phppgAdmin web
> site demonstrates it with their trial server.

select * from pg_database;

> Thanks,
> Doug Hall

--
Command Prompt, Inc., home of Mammoth PostgreSQL - S/ODBC and S/JDBC
Postgresql support, programming shared hosting and dedicated hosting.
+1-503-667-4564 - [EMAIL PROTECTED] - http://www.commandprompt.com
Mammoth PostgreSQL Replicator. Integrated Replication for PostgreSQL
[GENERAL] Listing databases
Sorry for the newbie question, but how do you get PostgreSQL to list the available databases? I know how to log into a certain database, but not list all of them. I know this must be possible because the phppgAdmin web site demonstrates it with their trial server.

Thanks,
Doug Hall
Re: [GENERAL] postgres mirroring (real-time)
On Wed, May 05, 2004 at 03:17:02PM +, Sally Sally wrote:
> Does anyone know of a good way of implementing postgres real-time
> mirroring? Whether it's a program that's already out there or whether it is
> something that can be done by the DBA? I know there is one in contrib
> called dbmirror but does it actually work well (it seems to also require
> pgperl)

Plenty of people are using it. It's not _exactly_ real time, though. What do you mean by "real-time mirroring"?

If you are not yet in production and are looking for something that'll be ready in the next six months, you might want to check out the Slony-I project on gborg.postgresql.org. But even that is not "real time", in that there is a gap between when a tuple is COMMITTED on the primary node and when it appears on other nodes.

A
--
Andrew Sullivan | [EMAIL PROTECTED]
The plural of anecdote is not data. --Roger Brinner
Re: [GENERAL] 7.4.2 Regression tests: test stats loops indefinitely...
On Wed, 05 May 2004 13:30:19 -0400, Tom Lane <[EMAIL PROTECTED]> wrote:
> Yup, that's a gotcha. Patch applied to 7.4 and HEAD branches.

Hey, that was a -r diff. It was not intended for you :-) I had planned to send a -c diff to -patches, as soon as I have a suitable regression test. I thought of something like

  SELECT timeofday()::timestamp > '2004-05-05'::date;

in horology.

Servus
Manfred
Re: [GENERAL] Embedded SQL inherently faster than others?
Dann Corbit snip:

-----Original Message-----
From: Kevin [mailto:[EMAIL PROTECTED]
Sent: Tuesday, May 04, 2004 10:18 AM
To: [EMAIL PROTECTED]
Subject: [GENERAL] Embedded SQL inherently faster than others?

Hello,

I'm looking for absolute maximum database result speed and if that means writing something in C and embedded SQL, versus Perl using DBI or whatever...that's fine.

Benchmark your application. 90% of the time will be in ten percent of...

And if you don't have a method for reusing connections or doing some connection pooling then for short running sql, much of the time will be spent on establishing a connection.
Re: [GENERAL] Embedded SQL inherently faster than others?
> -----Original Message-----
> From: Rob [mailto:[EMAIL PROTECTED]
> Sent: Wednesday, May 05, 2004 12:25 PM
> To: [EMAIL PROTECTED]
> Subject: Re: [GENERAL] Embedded SQL inherently faster than others?
>
> Dann Corbit snip:
> >> -----Original Message-----
> >> From: Kevin [mailto:[EMAIL PROTECTED]
> >> Sent: Tuesday, May 04, 2004 10:18 AM
> >> To: [EMAIL PROTECTED]
> >> Subject: [GENERAL] Embedded SQL inherently faster than others?
> >>
> >> Hello,
> >>
> >> I'm looking for absolute maximum database result speed and if
> >> that means writing something in C and embedded SQL, versus
> >> Perl using DBI or whatever...that's fine.
> >
> > Benchmark your application. 90% of the time will be in ten percent
> > of...
>
> And if you don't have a method for reusing connections or
> doing some connection pooling then for short running sql,
> much of the time will be spent on establishing a connection.

Connection time is never a problem for me. So (as usual) YMMV.

Besides connection pooling, other obvious answers would include reusing the existing connections for queries. Consider a web server. If the people who are attaching do not need special rights to data (e.g. all users of the application are at the same level) then the web server can launch a collection of database engines and maintain a thread pool for attaching users that compete for those resources. There is always a way around performance problems.

And why all the sudden concern about connection times? Are lots of people actually having a problem with it or is it some angels dancing on the heads of pins kind of a thing?
Re: [GENERAL] Using Postgres from WSAD
On Mon, 3 May 2004, Stonebrook, Jeff wrote:
> Has anyone had success loading the Postgres JDBC drivers into WSAD ?
>
> Whenever I attempt a connection I receive the following error:
> [Reason: Problems encountered while loading the jdbc driver. Reason: javax.net.ssl.SSLSocket ]

Please do not send images to the list when two lines of text will suffice.

You have not specified what JVM or JDBC driver version you are using but my guess is that you have selected a JDBC driver that was compiled with support for SSL while you are not providing the required SSL classes in your runtime environment.

Also what is WSAD?

Kris Jurka
Re: [GENERAL] vacuumdb is failing with NUMBER OF INDEX TUPLES NOT THE SAME AS HEAP
Lonni Friedman <[EMAIL PROTECTED]> writes:
> hrmmm, i'm not sure what would constitute 'off the beaten track'.

Neither am I ... if we knew what you were doing that triggers the bug, we'd already be halfway there :-(

regards, tom lane
Re: [GENERAL] vacuumdb is failing with NUMBER OF INDEX TUPLES NOT THE SAME AS HEAP
On Wed, 05 May 2004 13:56:41 -0400, Tom Lane <[EMAIL PROTECTED]> wrote:
> Lonni Friedman <[EMAIL PROTECTED]> writes:
> > On Wed, 05 May 2004 12:31:21 -0400, Tom Lane <[EMAIL PROTECTED]> wrote:
> >> Once the complaint starts appearing, I'd expect it to continue until you
> >> reindex the index.
>
> > That's exactly what happens. It consistently errors until reindexed.
> > Any suggestions? thanks.
>
> You are seemingly triggering some sort of bug in the backend's
> large-object code, causing extra index entries to be made ...
> but I sure haven't the foggiest idea how that would happen.
>
> Perhaps you could look at the parts of your application code that work
> with large objects and see if you are doing anything "off the beaten
> track" that might suggest a way to trigger the bug reproducibly.
> I'm sure we could fix it if we could see it happening.

hrmmm, i'm not sure what would constitute 'off the beaten track'. is there something specific that i could look for? or what is considered to be normal? I can tell you that the large objects that are getting chucked into the database are generally under 1MB in size (each), as they're mostly M$ documents. There are a few that are between 1MB & 10MB but I don't think there's anything larger than 10MB.
Re: [GENERAL] Using Postgres from WSAD
Hello,

I am not personally a java fiend but it looks like you are trying to use the PostgreSQL JDBC driver with SSL. I am unsure if that is possible with the community version. At least it didn't used to be.

Sincerely,
Joshua D. Drake

Stonebrook, Jeff wrote:
> Has anyone had success loading the Postgres JDBC drivers into WSAD ?
> Whenever I attempt a connection I receive the following error:
> Jeffrey Stonebrook

--
Command Prompt, Inc., home of Mammoth PostgreSQL - S/ODBC and S/JDBC
Postgresql support, programming shared hosting and dedicated hosting.
+1-503-667-4564 - [EMAIL PROTECTED] - http://www.commandprompt.com
Mammoth PostgreSQL Replicator. Integrated Replication for PostgreSQL
Re: [GENERAL] 7.4.2 Regression tests: test stats loops indefinitely...
Manfred Koizar <[EMAIL PROTECTED]> writes:
> I had planned to send a -c diff to -patches, as soon as I have a
> suitable regression test.

Like what? The existing stats test seems to catch this just fine on machines where the types are actually different sizes. In any case, I don't necessarily believe in memorializing every single bug we ever find with its own regression case ...

regards, tom lane
Re: [GENERAL] 7.4.2 Regression tests: test stats loops indefinitely...
Manfred Koizar <[EMAIL PROTECTED]> writes:
>>> Vik, I guess that on your platform time_t is smaller than long.
> Ok, this is pretty clear now. Apply this patch and recompile Postgres.

Yup, that's a gotcha. Patch applied to 7.4 and HEAD branches.

regards, tom lane
[GENERAL] question regarding cursors in plpgsql, declare and bound params (v3 protocol)
Hi.

I've got a problem with using a cursor in a plpgsql function. The cursor is created via DECLARE, and its SELECT query has placeholders. I use PostgreSQL 7.4.2. Here's the problematic plpgsql function:

create or replace function add_messages_to_folder(integer,refcursor) returns integer security definer as '
declare
  v_folder_id alias for $1;
  v_c alias for $2;
  v_message_id integer;
  v_added_nr integer := 0;
begin
  LOOP
    FETCH v_c INTO v_message_id;
    EXIT WHEN NOT FOUND;
    SELECT INTO v_added_nr v_added_nr+add_message_to_folder(v_folder_id,v_message_id,false);
  END LOOP;
  UPDATE folder set obj_nr=obj_nr+v_added_nr where id=v_folder_id;
  CLOSE v_c;
  RETURN v_added_nr;
END;
' language 'plpgsql';

It gets an opened cursor as its 2nd parameter (type=refcursor) and tries to read all data out of it via FETCH, but fails with an error message. Here's an example of how it comes to an error. Commands are executed from my client app via libpq with the extended protocol:

-- all other queries are sent using PQsendQueryParams
begin work;

-- the bound parameter for $1 is sent with this query
declare search_messages_cursor no scroll cursor for
  select folder_msg.message_id from folder_msg
  where folder_msg.folder_id=7866 and folder_msg.mail_subj ilike $1;

select add_messages_to_folder(7871,'search_messages_cursor'::refcursor);

pg fails on this query with:

PGRES_FATAL_ERROR: ERROR: no value found for parameter 1
CONTEXT: PL/pgSQL function "add_messages_to_folder" line 9 at fetch

I've tried to send this query with and without the bound parameter, with no luck (same error message). Can someone shed some light on this? :)

--
Best regards,
Igor Shevchenko
Re: [GENERAL] vacuumdb is failing with NUMBER OF INDEX TUPLES NOT THE SAME AS HEAP
On Wed, 05 May 2004 12:31:21 -0400, Tom Lane <[EMAIL PROTECTED]> wrote:
> Lonni Friedman <[EMAIL PROTECTED]> writes:
> > Unfortunately, i have no clue how to replicate this. It was happening
> > fairly consistently before i upgraded from 7.3.3 to 7.3.4 (like nearly
> > every vacuumdb run).
>
> > Then nothing for a month after going to 7.3.4, and now its happening
> > every vacuumdb run again.
>
> Once the complaint starts appearing, I'd expect it to continue until you
> reindex the index.

That's exactly what happens. It consistently errors until reindexed. Any suggestions? thanks.
Re: [GENERAL] Cache lookup failure for pg_restore?
[EMAIL PROTECTED] (Razvan Surdulescu) writes:
> cspan=> update contact set gender = 'M';
> ERROR: cache lookup failed for function 70529

> If I re-run the update, I get:
> cspan=> update contact set gender = 'M';
> server closed the connection unexpectedly

Hm, could we see the full schema for the "contact" table?

> The database has a few GIST (full-search, tsearch2) indices, but
> otherwise is pretty ordinary.

Any of those on "contact"? What would be useful to look at is a debugger stack trace from the core dump... Also it might be worth your time to update to 7.4.2, just to see if this is an already-fixed bug.

regards, tom lane
Re: [GENERAL] Load Balancing and Backup
On Fri, 30 Apr 2004, CS Wagner wrote:
> I assume this issue has been solved repeatedly, but I cannot find any
> information on it...
>
> I have a website that runs reports on data from a postgres database. An
> average report takes about 10 minutes. I'd like to load balance it so I
> can have multiple people run reports without causing the login request
> to take forever. Also, it is rather important that data updated in the
> database be updated on all mirrors of it immediately - not with an
> hourly or daily update.
>
> What I would most desire is a program that pretends to be a postgres
> server. I can log into it with psql, a jdbc driver, or php's
> pg_connect. When I do a select (no update) command, it will send that
> off to the least loaded mirror. When I do an update/insert/delete, it
> hits all the mirrors. Also, it can designate one database (most likely
> local to this pretend server) as the master so I can easily clone it to
> make more mirrors.
>
> Does anything remotely similar to that exist?

http://sqlrelay.sourceforge.net/
Re: [GENERAL] vacuumdb is failing with NUMBER OF INDEX TUPLES NOT THE SAME AS HEAP
Lonni Friedman <[EMAIL PROTECTED]> writes:
> On Wed, 05 May 2004 12:31:21 -0400, Tom Lane <[EMAIL PROTECTED]> wrote:
>> Once the complaint starts appearing, I'd expect it to continue until you
>> reindex the index.

> That's exactly what happens. It consistently errors until reindexed.
> Any suggestions? thanks.

You are seemingly triggering some sort of bug in the backend's large-object code, causing extra index entries to be made ... but I sure haven't the foggiest idea how that would happen.

Perhaps you could look at the parts of your application code that work with large objects and see if you are doing anything "off the beaten track" that might suggest a way to trigger the bug reproducibly. I'm sure we could fix it if we could see it happening.

regards, tom lane
Re: [GENERAL] Preventing work on stale data in multi-user operation
Bastian,

the [EMAIL PROTECTED] list is 99% English. Your question has a much better chance of success if you ask it in English.

On the problem: can you not identify the rows uniquely? To set a lock successfully the table must have a primary key, which is then unique. With that you at least avoid the problem of deleting a row that "has the desired number, but is no longer the same row". You can set explicit row locks, but as I said it is easier to identify the rows uniquely; then the user with the stale view can still issue the delete, but the DELETE simply hits nothing. Only an UPDATE on the stale row will raise an error, which your application then has to catch and report with an appropriate error message.

UC

On Tuesday 04 May 2004 01:50 am, Bastian wrote:
> Hi,
>
> I am using PHP and PostgreSQL.
> The problem: a page shows the data stored in a table in the database. The
> user then selects a row that he would like to edit or delete. On the next
> page the action is carried out.
> It is possible to set LOCKs to prevent two DELETEs or UPDATEs from getting
> in each other's way (or they are set implicitly). But if one user is on
> page 1, i.e. the table listing page, while another user changes the data
> in the table in the meantime, then user 1 is working with the old data and
> then deletes, for example, a row that has the desired number but is no
> longer the row he actually wanted to delete.
> Is it perhaps possible to check via a query whether a lock is currently set?
>
> Bastian

--
UC
--
Open Source Solutions 4U, LLC
2570 Fleetwood Drive    Phone: +1 650 872 2425
San Bruno, CA 94066     Cell:  +1 650 302 2405
United States           Fax:   +1 650 872 2417
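A minimal SQL sketch of that approach (the items table and its columns are invented for illustration): with a primary key the stale DELETE affects either exactly the intended row or nothing, and SELECT ... FOR UPDATE gives an explicit row lock where one is wanted:

  CREATE TABLE items (
      item_id integer PRIMARY KEY,   -- stable identity for each row
      label   text
  );

  -- a stale DELETE keyed on the primary key removes the intended row,
  -- or nothing at all if that row no longer exists
  DELETE FROM items WHERE item_id = 42;

  -- explicit row lock while the user edits
  BEGIN;
  SELECT * FROM items WHERE item_id = 42 FOR UPDATE;
  UPDATE items SET label = 'new label' WHERE item_id = 42;
  COMMIT;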
Re: [GENERAL] Embedded SQL inherently faster than others?
* [EMAIL PROTECTED] ([EMAIL PROTECTED]) wrote:
> DB2, as far as I know, uses some precompiled SQL files which may make a
> difference if used or not used.

Is it much of one if you're using prepared statements? I guess it depends on how many different queries you do.

Stephen
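For comparison, PostgreSQL has server-side prepared statements of its own via PREPARE/EXECUTE; a small sketch (the folder_msg table here is only an example):

  -- parse and plan the statement once
  PREPARE get_messages (integer) AS
      SELECT message_id FROM folder_msg WHERE folder_id = $1;

  -- run it repeatedly with different parameter values
  EXECUTE get_messages(7866);
  EXECUTE get_messages(7871);

  -- drop it when done
  DEALLOCATE get_messages;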
Re: [GENERAL] Embedded SQL inherently faster than others?
> -----Original Message-----
> From: Kevin [mailto:[EMAIL PROTECTED]
> Sent: Tuesday, May 04, 2004 10:18 AM
> To: [EMAIL PROTECTED]
> Subject: [GENERAL] Embedded SQL inherently faster than others?
>
> Hello,
>
> I've read when using embedded sql with an Oracle database,
> it's faster than other methods of database interaction, due
> to the way it interacts with the Oracle DB.

Generalizations like this are never totally true. Certainly it can be a very fast way.

> I've been trying to find out if this is the case for postgres as well.
>
> I'm talking specifically about the parts of the program
> interacting with the database, I know that obviously a C
> program is faster than most other programs, in other respects.

This is also not true. The dominant time absorbed by a program is a function of the algorithms employed. A Java program that is O(n) will stomp the stuffing out of an assembly program that is O(n^2). C will usually be faster than interpreted languages, but only by a small constant factor. The speed difference between C, C++ and Fortran (all compiled languages) is negligible. If you use some of the advanced object techniques like RTTI in C++ then you will have a small speed penalty for the benefit of improved abstraction abilities.

> I'm looking for absolute maximum database result speed and if
> that means writing something in C and embedded SQL, versus
> Perl using DBI or whatever...that's fine.

Benchmark your application. 90% of the time will be in ten percent of the code. Chances are good that adding an index or optimizing a nested loop or some simple trick will be the thing that makes your program monumentally faster. It may be that you write 95% of your application in Perl and 5% as a compiled Embedded SQL program. But premature optimization is a root of many evil things.

If you want your PostgreSQL application to be fast, there are a few things you should consider.

HARDWARE:
First, you can buy very fast machines for a very reasonable price. Price out an Athlon 64 bit system with a striped RAID array and several gigs of RAM. Let's suppose that it is $6000 when you have all the bells and whistles attached. That sounds like a lot, but look at how few hours will be needed to cover the cost, if we billed at only $50/hour (which is a ridiculously low consulting rate). 120 hours is only 3 weeks' time. The standard calculation for programming rates for debugged, documented and delivered code is ten debugged lines per hour. (That includes everything from the meetings and requirements gathering all the way through documentation and training.) Anyway, that means the machine costs the same as 1200 lines of code. People who buy supercomputers aren't so dumb after all.

SYSTEM TUNING:
Often, you can improve a system quite a bit by tuning it. Read articles like these:
http://www.phpbuilder.com/columns/smith20010821.php3?print_mode=1
http://www.argudo.org/postgresql/soft-tuning.php
http://cbbrowne.com/info/postgresql.html
http://www.varlena.com/varlena/GeneralBits/Tidbits/perf.html
http://www.linuxjournal.com/article.php?sid=4791
http://candle.pha.pa.us/main/writings/pgsql/performance.pdf
http://forums.devshed.com/t84334/s.html
http://techdocs.postgresql.org/techdocs/pgsqldbtuning.php

OPTIMIZING PROCEDURES:
Now, let's suppose that you have bought the hardware and tuned your database and your application is not fast enough. Here is what you should do: Run gprof or some other profiler on each part of the system to find out where the time is going.
When you have found the hot spots, then consider optimizing those only. You should have definite performance goals. Do not optimize merely for the sake of optimization. It tends to obfuscate code when done poorly, and 80% of software cost is maintenance. So in performing optimization, it is possible that you are creating a system that will be more expensive to maintain. To optimize, here are the correct steps:

1. Profile to find out what is slow.

2. Look for alternative algorithms. Little tweaky things like compiled code versus interpreted, or rewriting a recursive algorithm into the same algorithm with iteration, only result in a small constant speedup. A fundamental algorithm change (or something as simple as creating a unique index) can result in monumental speedups of factors of hundreds or thousands.

3. Once the problem is clearly identified, get the input from some of your peers on it. Pair programming is one of the really good ideas that has come out of extreme programming camps.

4. Benchmark your proposed changes to see if you really do get the level of speedup that you hope. If you reach your written performance goal, then STOP. It's kind of fun to make things go faster but it is easy to get carried away.

5. If it is still not fast enough, examine some alternatives. What about a 4 CPU AMD 64 bit machine with 16 gigs of ram? What about 15K RPM Ultra320 SCSI drives in a striped array? What about hiring an
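To make the "creating an index" point in step 2 concrete, a small sketch (the orders table and its column are invented for illustration): check the plan with EXPLAIN ANALYZE, add the index, refresh statistics, and re-check:

  -- before: a sequential scan over a large table is a common hot spot
  EXPLAIN ANALYZE SELECT * FROM orders WHERE customer_id = 1234;

  -- add an index on the filter column and refresh planner statistics
  CREATE INDEX orders_customer_id_idx ON orders (customer_id);
  ANALYZE orders;

  -- after: the same query should now be able to use an index scan
  EXPLAIN ANALYZE SELECT * FROM orders WHERE customer_id = 1234;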
Re: [GENERAL] Is there a newer version of DBD::Pg?
> "GSM" == Greg Sabino Mullane <[EMAIL PROTECTED]> writes: >> I use perl scripts to import data into my db. When errors occurred >> uploading files in Pg ver 7.3.x, $DBI::errstr used to return a row number >> from the input file. I could then go to the file make whatever changes and GSM> 1.32 is the latest version. If you could provide a sample script that GSM> shows the error you are getting, that would be great. Also, you may be GSM> interested in the DBD::Pg mailing list. Information on the list can GSM> be found at: My experience has been that the 1.3x line of DBD::Pg is just plain broken. On one server running under mod_perl, all placeholder substituted values were NULL by the time they reached postgres. I've had other issues with the quote() method returning undef. Unfortunately those bugs were "not reproduced" even with the full test cases by the devlopment team. For my money I'm sticking with DBD::Pg 1.22 since it is completely stable for me. -- =-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-= Vivek Khera, Ph.D.Khera Communications, Inc. Internet: [EMAIL PROTECTED] Rockville, MD +1-301-869-4449 x806 AIM: vivekkhera Y!: vivek_khera http://www.khera.org/~vivek/ ---(end of broadcast)--- TIP 3: if posting/reading through Usenet, please send an appropriate subscribe-nomail command to [EMAIL PROTECTED] so that your message can get through to the mailing list cleanly
Re: [GENERAL] vacuumdb is failing with NUMBER OF INDEX TUPLES NOT THE SAME AS HEAP
It's _always_ that same index. No others have had this problem.

Unfortunately, i have no clue how to replicate this. It was happening fairly consistently before i upgraded from 7.3.3 to 7.3.4 (like nearly every vacuumdb run). Then nothing for a month after going to 7.3.4, and now its happening every vacuumdb run again.

On Wed, 05 May 2004 11:30:33 -0400, Tom Lane <[EMAIL PROTECTED]> wrote:
> Lonni Friedman <[EMAIL PROTECTED]> writes:
> > All of a sudden last month (after about 3 years) I started getting
> > this warning when vacuumdb was run:
>
> > INFO: Index pg_largeobject_loid_pn_index: Pages 903; Tuples 323847:
> > Deleted 0.CPU 0.04s/0.07u sec elapsed 0.10 sec.
> > WARNING: Index pg_largeobject_loid_pn_index: NUMBER OF INDEX' TUPLES
> > (323847) IS NOT THE SAME AS HEAP' (323802).
> > Recreate the index.
>
> Hmm. Is it always that same index, or have you seen this on other indexes?
>
> I'm not aware of any known bug in 7.3.* that could lead to this sort of
> error. Don't suppose you have any suggestions about how to reproduce
> the problem?
>
> regards, tom lane
Re: [GENERAL] Date addition using Interval
Sumita Biswas wrote:
> Hi Experts,
>
> I try to write the following in my Function and it works:
> ld_FromDateTemp := ld_FromDate + ''5 DAYS''::interval;
>
> But when I have to pass a parameter in the NUMBER of days (instead of 5) like 'li_NoOfDays':
> ld_FromDateTemp := ld_FromDate + ''li_NoOfDays DAYS''::interval;

You want ... + (li_NoOfDays || '' DAYS'')::interval;

plpgsql doesn't interpolate variables like perl/php do.

--
Richard Huxton
Archonet Ltd
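Put together as a complete function it looks roughly like this (a sketch only; the function name add_days is made up, the variable names follow the original post):

  create or replace function add_days(date, integer) returns timestamp as '
  declare
      ld_FromDate alias for $1;
      li_NoOfDays alias for $2;
      ld_FromDateTemp timestamp;
  begin
      -- build the interval text at runtime, then cast it
      ld_FromDateTemp := ld_FromDate + (li_NoOfDays || '' days'')::interval;
      return ld_FromDateTemp;
  end;
  ' language 'plpgsql';

  -- usage:
  select add_days(current_date, 5);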
Re: [GENERAL] PERFORM function in Plpgsql
On Apr 29, 2004, at 9:22 AM, Yannick LOUVET wrote:
> hello, i want to call a function (function1() for example) in another function. simple, i guess:
> PERFORM function1();

You will want to use EXECUTE instead (assuming var_function is a text variable):

  EXECUTE var_function;

You may need to add parentheses and a select:

  EXECUTE 'select ' || var_function || '()';

> it's ok, but the name of my function is a variable like:
> var_function := function1();
> PERFORM var_function;
> and it doesn't work !!! Is it possible to do that? thanks
>
> version: PostgreSQL 7.3.4 on i686-pc-linux-gnu, compiled by GCC gcc (GCC) 3.3.1 (Mandrake Linux 9.2 3.3.1-1mdk)
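A self-contained sketch of the dynamic call (the function names are placeholders; EXECUTE here just runs the statement and discards its result, which is the dynamic equivalent of PERFORM):

  create or replace function function1() returns integer as '
  begin
      -- placeholder body
      return 1;
  end;
  ' language 'plpgsql';

  create or replace function call_by_name(text) returns integer as '
  declare
      var_function alias for $1;
  begin
      -- build the statement text and run it dynamically
      EXECUTE ''select '' || var_function || ''()'';
      return 0;
  end;
  ' language 'plpgsql';

  select call_by_name('function1');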
Re: [GENERAL] vacuumdb is failing with NUMBER OF INDEX TUPLES NOT THE SAME AS HEAP
Lonni Friedman <[EMAIL PROTECTED]> writes:
> Unfortunately, i have no clue how to replicate this. It was happening
> fairly consistently before i upgraded from 7.3.3 to 7.3.4 (like nearly
> every vacuumdb run).

> Then nothing for a month after going to 7.3.4, and now its happening
> every vacuumdb run again.

Once the complaint starts appearing, I'd expect it to continue until you reindex the index.

regards, tom lane
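For anyone hitting the same warning, the rebuild itself is short; a sketch (run as a database superuser, and on 7.3 check the REINDEX reference page first, since rebuilding indexes on system catalogs may need extra steps such as standalone mode):

  -- rebuild the index that vacuum is complaining about
  REINDEX INDEX pg_largeobject_loid_pn_index;

  -- then re-run vacuum to confirm the warning is gone
  VACUUM ANALYZE;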
[GENERAL] list types
How can I get a list of all types that don't have the "_" before them? I only want a list of the types that a user might need to create tables.

thanks!

--
Alexander Cohen
http://www.toomuchspace.com
(819) 348-9237
(819) 432-3443
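One possible query (a sketch only): the leading underscore is the naming convention pg_type uses for array types, and restricting typtype keeps just base types:

  SELECT typname
  FROM pg_type
  WHERE typname !~ '^_'    -- skip array types, which are named with a leading underscore
    AND typtype = 'b'      -- base types only
  ORDER BY typname;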
Re: [GENERAL] Embedded SQL inherently faster than others?
> -----Original Message-----
> From: [EMAIL PROTECTED]
> [mailto:[EMAIL PROTECTED] On Behalf Of Kevin
> Sent: 4. maí 2004 17:18
> To: [EMAIL PROTECTED]
> Subject: [GENERAL] Embedded SQL inherently faster than others?
>
> Hello,
>
> I've read when using embedded sql with an Oracle database,
> it's faster than other methods of database interaction, due
> to the way it interacts with the Oracle DB.

Is it? If we are talking about SQL embedded in C/C++, then I think it is translated into OCI (Oracle Call Interface) calls, which are the same API used by non-embedded SQL. Maybe the efficiency comes from more efficient OCI use when automatically generated than when manually written, for example fewer parse calls?

If this is SQLJ (Java) embedded SQL, then it is clearly translated into normal JDBC calls - again, there is no "other way" to interact with the database.

If this is PL/SQL stored procedures, then yes, apparently the interaction bypasses the whole network layer and probably much more.

DB2, as far as I know, uses some precompiled SQL files which may make a difference if used or not used. Oracle uses a so-called "shared pool" memory region for this reason - it's where Oracle stores compiled SQL. The client API submits SQL strings literally, and the db engine searches for them in the shared pool (the performance of this search is a big issue for Oracle) - if not found, it takes a good deal of time to compile. This said, we see that whatever Oracle API is used, SQL strings go to the db engine from the client API.

Regards,
Laimis
[GENERAL] example database setup
I would like to set up a zope/database interface (using Postgres 7.4) with the following properties:

- multiple users each with their own id and password
- accessed via zope
- they will enter and access their own data via zope to Postgresql
- any specific user will not be able to see another user's data
- the user will not manage his/her web-page

I have Zope 2.6 and Postgres 7.4 set up and working with each other. Also, I have a few ideas on how to do this, but thought it would be easiest if there were some similar examples out there to look at.

Any suggestions from anyone, on anything that might be similar to the above task? I'm sure this has been done many times.
[GENERAL] How to abort psql when backtick (shell) commands return non-zero?
Hello,

I searched all through Google Groups, Google, and the Postgres docs, but to no avail. I hope someone can help me out here!

I have a file that contains SQL, and some Postgres-specific slash commands. This file calls an external shell script via the backtick mechanism, and saves the value into a psql variable, for later insertion into a table. It looks something like this:

\set password '\'' `/dir/genPass user1` '\''
...
INSERT INTO TEST (password) VALUES (:password);

However, in some error scenarios, the /dir/genPass application fails, and returns non-zero. In these cases, the stderr of the application just gets stored in the variable, and hence inserted into my table as the password. This is bad, and causes trouble later... :)

BTW, I launch my "sql script" like so:

psql MY_DB < populatePasswords.sql

Question: how can I cause my populatePasswords.sql script to fail with an error if the /dir/genPass command returns an error? I'm okay with any sort of failure, as long as populatePasswords.sql stops executing and never gets to the INSERT statement. It would also be nice if some sort of error would print to the user ... ideally the stderr from the shell command (genPass).

Thank you for your time,
David Roche
[GENERAL] Embedded SQL inherently faster than others?
Hello,

I've read when using embedded sql with an Oracle database, it's faster than other methods of database interaction, due to the way it interacts with the Oracle DB. I've been trying to find out if this is the case for postgres as well.

I'm talking specifically about the parts of the program interacting with the database; I know that obviously a C program is faster than most other programs, in other respects.

I'm looking for absolute maximum database result speed and if that means writing something in C and embedded SQL, versus Perl using DBI or whatever...that's fine.

thanks for any feedback.
Kevin
Re: [GENERAL] Tracking structural changes from psql
Mike McGavin <[EMAIL PROTECTED]> writes:
> It also occurs to me that an even more useful utility might be one
> that stores the structural state of the database at a particular time
> (such as when I last updated the production server), and then generate
> a diff of SQL statements to update it to the current structural state.
> I don't suppose this already exists anywhere, does it?

You might want to try:
http://zongle.sourceforge.net/
http://pgdiff.sourceforge.net/
http://gborg.postgresql.org/project/pgdiff/projdisplay.php

--
Daniel Vérité - daniel at manitou dash mail dot org
[GENERAL] Using Postgres from WSAD
Has anyone had success loading the Postgres JDBC drivers into WSAD ?

Whenever I attempt a connection I receive the following error:

Jeffrey Stonebrook
Re: [GENERAL] vacuumdb is failing with NUMBER OF INDEX TUPLES NOT THE SAME AS HEAP
Lonni Friedman <[EMAIL PROTECTED]> writes:
> All of a sudden last month (after about 3 years) I started getting
> this warning when vacuumdb was run:

> INFO: Index pg_largeobject_loid_pn_index: Pages 903; Tuples 323847:
> Deleted 0.CPU 0.04s/0.07u sec elapsed 0.10 sec.
> WARNING: Index pg_largeobject_loid_pn_index: NUMBER OF INDEX' TUPLES
> (323847) IS NOT THE SAME AS HEAP' (323802).
> Recreate the index.

Hmm. Is it always that same index, or have you seen this on other indexes?

I'm not aware of any known bug in 7.3.* that could lead to this sort of error. Don't suppose you have any suggestions about how to reproduce the problem?

regards, tom lane