[ADMIN] Timestamps and performance problems on queries.
Hello all. I have a performance problem with specific queries: when I use timestamps plus an interval in WHERE clauses, query performance slows down by a factor of 20 or 30. For example:

select timestamp, value from measure
where timestamp <= now() and timestamp >= (now() - '1 hour'::interval);

is 20 to 30 times slower than:

select timestamp, value from measure
where timestamp <= '2002-04-10 10:00' and timestamp >= '2002-04-10 9:00';

So where is the bottleneck? It seems that now() and (now() - '1 hour'::interval) are evaluated for each row comparison... Am I right? If so, is there a way to make the SQL interpreter evaluate them once, before any comparisons are run? Or do I have to rewrite all my application queries and calculate now() and the interval predicates each time? Thanks in advance -- Jean-Christophe ARNU, s/w developer, Paratronic France ---(end of broadcast)--- TIP 6: Have you searched our list archives? http://archives.postgresql.org
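A sketch of the workaround the poster is considering: compute the window boundaries once in the application and send them as constants (the timestamp literals below are illustrative).

```sql
-- Instead of letting the server evaluate now() during the scan:
--   WHERE timestamp <= now() AND timestamp >= now() - '1 hour'::interval
-- the application computes the window once and interpolates constants:
SELECT timestamp, value
FROM measure
WHERE timestamp <= '2002-04-10 10:00'   -- application-computed "now"
  AND timestamp >= '2002-04-10 09:00';  -- application-computed "now - 1 hour"
```

With constant boundaries the planner can also use an index on the timestamp column, which is typically where a 20-30x difference comes from.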
[ADMIN] Query planner quirk?
I have a table with 2 million+ rows, mbz_rpt_item_val:

 bank_id (integer)
 item_name (character(16))
 org_allow (integer)
 time_id (character(10))
 item_value (character varying(12))

and an index test_idx2 on (time_id, org_allow, bank_id). The query planner seems to choose a very poor method when I tell it to explain the following query:

explain SELECT d.bank_id, d.time_id, d.item_name, d.item_value
FROM mbz_rpt_item_val d
WHERE d.bank_id IN (277,535,739,1234,1330) and d.time_id IN ('P_6') and d.org_allow = 1;

Seq Scan on mbz_rpt_item_val d (cost=0.00..81988.51 rows=36 width=45)

This query takes about 20 seconds. However, if I explicitly tell it not to do sequential scans (SET ENABLE_SEQSCAN TO OFF) and explain it again, it reluctantly decides to use the index even though it thinks the cost is higher, and the query comes back in 2 seconds. I turn sequential scans back on, and it goes back to doing a sequential scan, taking 20 seconds. When I run the exact same query with a very large list of bank_ids (maybe 500 or so), THEN it decides on its own to use the index, returning in about 18 seconds, which is great. Any comments? I'm using Postgres 7.2 and did a full vacuum analyze before trying this.
[ADMIN] Max Number of Databases on a Server.
What is the practical limit on the number of Postgres databases on one typical PC-class Linux server? If I created 100 separate databases on a Linux server, would this be a problem? Or is the issue more related to concurrent connections?
[ADMIN] Please help
Hi Folks. I am trying to write a function that should archive old values to an archiving table before an update on the original table, but it inserts both old and new values into the archiving table. Here is the code:

CREATE FUNCTION fn_archive_01() RETURNS OPAQUE AS '
BEGIN
    /* TG_OP is the operation (INSERT, UPDATE, DELETE) */
    INSERT INTO customer_archive
        VALUES (OLD.id, OLD.name, current_user, now(), TG_OP);
    IF TG_OP = ''UPDATE'' THEN
        RETURN NEW;
    END IF;
    RETURN OLD;
END;
' LANGUAGE 'plpgsql';

CREATE TRIGGER tr_customer_archive_del
    BEFORE DELETE ON customer
    FOR EACH ROW EXECUTE PROCEDURE fn_archive_01();

Thanks for your help, Tariq
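The function as posted only ever inserts the OLD row, so duplicate rows in the archive suggest it is being fired by more than one trigger on the table (for example an additional UPDATE trigger not shown here). A minimal sketch of the intended setup, assuming the table and function names above, would use a single trigger for both operations:

```sql
-- One trigger for both UPDATE and DELETE; the function archives only OLD
CREATE TRIGGER tr_customer_archive
    BEFORE UPDATE OR DELETE ON customer
    FOR EACH ROW EXECUTE PROCEDURE fn_archive_01();
```

Any other trigger on customer that also writes to customer_archive would produce exactly the doubled rows described.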
Re: [ADMIN] performance tests
If you're going to run tests on foreign-key performance, please use 7.2.*. I just installed 7.2.1, and I run 7.1.3 in parallel. After I run the script to insert 1000 rows into T1 in each version, deleting is much faster with 7.1.3 than with 7.2.1 (I used the same configuration parameters for both servers...). When running the delete query in parallel, 7.1.3 finishes much sooner. For inserts, 7.2.1 is slightly faster for the first 3000 inserts; from 3000 to 5000 it's more or less equivalent. I'll dig further into the documentation to see if I forgot something, but if you have any hints, they're welcome :-) Raph
Re: [ADMIN] Doubts in connectivity
Maybe it's your firewall that's blocking connections; check ipchains. -- Rasmus T. Mohr, Application Developer, Netpointers Intl. ApS, Vestergade 18 B, 1456 Copenhagen K, Denmark. Direct: +45 36 910 122, Mobile: +45 28 731 827, Phone: +45 70 117 117, Fax: +45 70 115 115. Email: mailto:[EMAIL PROTECTED] Website: http://www.netpointers.com "Remember that there are no bugs, only undocumented features."

-----Original Message-----
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]] On Behalf Of sivagurunathan shanmugham
Sent: 8 April 2002 06:57
To: [EMAIL PROTECTED]
Subject: [ADMIN] Doubts in connectivity

Hi, I have installed Postgres on Red Hat Linux. The problem is I am not able to connect from the client machine, which is also running Linux. I have tried starting the postmaster with the -i option, but I still can't connect. Can you clear up the following doubts for me:
1. How do I connect to Postgres from a client machine, when Postgres is installed and running successfully locally?
2. How do I shut down the Postgres database?
3. How do I create users with passwords?
With regards, Shiva.
[ADMIN] Problems with PostgreSQL
Hi. My name is Juan Pablo, I'm from Argentina, and at my job we are using PostgreSQL as a database server. But I have a problem using triggers (ON INSERT). I can create a trigger without problems, but when it is called, I get the following error message: "fmgr_info: function 18375: cache lookup failed". I used the PL/pgSQL language to create the trigger. Could you help me, please? Thanks for your time. Juan Pablo
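A common cause of this message in older PostgreSQL releases is that the function the trigger calls was dropped and recreated, so the trigger still references the old function OID. A sketch of the usual fix; the names my_trigger, my_table, and my_func are placeholders, not from the original message:

```sql
-- Recreate the trigger so it binds to the current function's OID
DROP TRIGGER my_trigger ON my_table;
CREATE TRIGGER my_trigger
    BEFORE INSERT ON my_table
    FOR EACH ROW EXECUTE PROCEDURE my_func();
```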
[ADMIN] Databases in memory (--flashdrive problem)
Hi all together, I hope I'm on the right list... Is there a way to configure PostgreSQL to keep databases in memory? I need to store a lot of data (many little entries) on an IDE flashdrive. But flashdrives have a little problem with write cycles: each flash cell can be rewritten approximately 300,000 times. This means for me: there is no chance to store my DB directly on the flashdrive. If I could let my databases stay in RAM (maybe a RAM drive, if need be), I could do a backup to the flashdrive at a defined frequency!? Some thoughts? Please give me some hints... tia, Christian
Re: [ADMIN] upgrade
Ciao Gianmarco. You can upgrade your PG installation by using the RPM version. Download the RPM version (the latest at the moment should be 7.2) and try to upgrade your installation using the rpm command. P.S. man rpm -- Auri

On Fri, 12 Apr 2002, Gianmarco Piola wrote: I want to report a problem... I am a newbie with Linux and barely know its structure. I installed Red Hat Linux, and during the installation I chose to install PHP and pgsql. After some initial trouble discovering the directory where pgsql was installed, all worked well, but then I found that the version was 7.0. I then decided to upgrade, but how? I downloaded the new files, but the documentation (including the admin guide) on how to remove the old version (it is split across MORE than one directory) and install the new one is poor. There is also no mention of whether I must do something with PHP for pgsql 7.2, because my version is compiled for 7.0. If you can reply, please tell me where to find this info or whom I can contact.
Re: [ADMIN] table dump
Hi Jodi. It's to stop the triggers firing when you restore the new database from the dumped file. Imagine that the triggers have already done their work on other tables when the data was first inserted or later changed; with pg_dump you are taking a snapshot that you may later wish to restore. The foreign key, table, and column constraints are not turned off, because their enforcement is still valid at restore time. For example, the data may have mysteriously been damaged in the database or in the dump file, and you would want to know about that at restore time. The dumped file is thus COPYed back at restore time, not inserted. Regards, Mark Carew, Brisbane, Australia.
Re: [ADMIN] Max Number of Databases on a Server.
[EMAIL PROTECTED] (David A. Leedom) writes: If I created 100 separate databases on a Linux Server would this be a problem? Or is the issue more related to concurrent connections?

If you're talking about multiple databases served by a single postmaster, I don't believe it's much of an issue. You might start to notice performance problems when you got past a few thousand, but that would have more to do with kernel performance (slow lookup for many files in one directory) than with Postgres itself. Concurrent connections is a much more interesting question, but no one can give you any interesting answers without a lot more assumptions than "typical Linux server". How much iron is that really, and what are your concurrent users going to be doing? regards, tom lane
[ADMIN] ODBC Problem
Friends, yesterday I upgraded from 7.1.3 on Red Hat 6.2 to 7.2.1 on Red Hat 7.2, and now I'm having problems with ODBC connections from Access on Windows. The problem is that I am unable to change or delete data; it says "The Microsoft Jet database engine stopped the process because you and another user are attempting to change the same data at the same time." I'm able to delete/change data from psql and from my Perl scripts. Any suggestions on what I might have done wrong? Thanks, Peter Darley
Re: [ADMIN] Databases in memory (--flashdrive problem)
Christian Gerdes [EMAIL PROTECTED] writes: [Is there a way to keep databases in memory, since each flash cell survives only about 300,000 write cycles?]

Sure: keep the database on a RAM drive (this will mean an initdb and data load on each bootup; hope you can stand that) and use pg_dump to a text file on the flashdrive as your backup mechanism. Use a cron task to run pg_dump at whatever frequency suits you. regards, tom lane
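Tom's suggestion can be sketched as a crontab entry; the mount point, database name, and interval below are illustrative assumptions, not from the original message:

```shell
# Run as the postgres user: dump "mydb" to the flashdrive every 30 minutes
*/30 * * * * pg_dump mydb > /mnt/flash/mydb-backup.sql
```

On each boot the RAM drive would then be re-initialized with initdb and the latest dump loaded back with psql -f.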
[ADMIN] psql command line history not working
I am trying to get command-line history and editing to work with psql, as life is a pain without it. I'm running PostgreSQL 7.1.3 on Linux. Documentation on this subject is very sparse, but I have done the following so far:

- installed the readline library
- ran ./configure with the following command-line options: --with-libs=/usr/local/lib --with-includes=/usr/local/include

I am still getting quite a few errors in the config.log file when I run gmake:

- after checking for readline there are a bunch of errors, such as "undefined reference to tputs", "undefined reference to tgoto", etc., followed by "failed program was: #line 3111 configure"
- in the block beginning with int main() { readline() I get "cannot find -ledit", but I haven't been able to find the edit library anywhere
- the following block gives "undefined reference to setproctitle" when checking for setproctitle in -lutil
- int main() { setproctitle... in turn fails because of "cannot find -lsocket"; again, I cannot find the socket library anywhere

and so on... Does anyone have a list of what needs to be installed to get psql command-line history to work on Linux? (and WHERE I can find these libraries, etc.?) Thanks in advance, Charlie
Re: [ADMIN] Extending table name max length
Tony Griffiths [EMAIL PROTECTED] writes: I had assumed that there would be a single header file that sets the max name length, but it looks like the esql precompiler didn't pick up the amended max name length.

Did you remember to recompile ecpg? AFAICT it uses the same NAMEDATALEN symbol as everyplace else. In general, when making a system-wide configuration change like that one, it'd be a good idea to make clean at the top level, then do a full make, make install, initdb cycle. If you did all that, another possibility is an include-file search path problem: perhaps the ecpg compile picked up a previously-installed postgres_ext.h with the wrong NAMEDATALEN value. regards, tom lane
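The rebuild cycle Tom describes, sketched as shell commands run from the top of the source tree (the data directory path is illustrative; the fresh initdb is needed because changing NAMEDATALEN changes the on-disk catalog layout):

```shell
make clean                         # discard every object built with the old NAMEDATALEN
make                               # full rebuild, including ecpg
make install
initdb -D /usr/local/pgsql/data    # re-create the cluster with the new value
```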
Re: [ADMIN] ODBC Problem
Friends, a little more information: I'm actually only having this problem with some tables that have triggers on insert, update, and delete. A table and the associated triggers/functions are shown below. I suspect that something changed in triggers or functions in 7.2, but I can't find any documentation on that anywhere. As before, any help would be greatly appreciated. Thanks, Peter Darley

Table "answers_bool"
  Column   |           Type           |                Modifiers
-----------+--------------------------+------------------------------------------
 id        | integer                  | default nextval('all_answers_seq'::text)
 sampleid  | integer                  |
 value     | boolean                  |
 indicator | text                     |
 surveyid  | integer                  |
 time      | timestamp with time zone | default now()
Indexes: idx_answers_bool_indicator, idx_answers_bool_sampleid, idx_answers_bool_surveyid
Unique keys: answers_bool_idx
Triggers: answers_bool_update, answer_bool_delete, answers_bool_insert

CREATE TRIGGER answer_bool_delete AFTER DELETE ON answers_bool FOR EACH ROW EXECUTE PROCEDURE delete_time();
CREATE TRIGGER answers_bool_insert AFTER INSERT ON answers_bool FOR EACH ROW EXECUTE PROCEDURE insert_time();
CREATE TRIGGER answers_bool_update BEFORE UPDATE ON answers_bool FOR EACH ROW EXECUTE PROCEDURE update_time();

CREATE FUNCTION delete_time() RETURNS opaque AS '
BEGIN
    INSERT INTO answers_deleted (SampleID, Indicator) VALUES (OLD.sampleid, OLD.indicator);
    RETURN NULL;
END;
' LANGUAGE 'plpgsql';

CREATE FUNCTION insert_time() RETURNS opaque AS '
BEGIN
    DELETE FROM answers_deleted WHERE SampleID=NEW.sampleid AND Indicator=NEW.indicator;
    RETURN NULL;
END;
' LANGUAGE 'plpgsql';

CREATE FUNCTION update_time() RETURNS opaque AS '
BEGIN
    NEW.time := ''now'';
    RETURN NEW;
END;
' LANGUAGE 'plpgsql';

-----Original Message-----
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]] On Behalf Of Peter Darley
Sent: Monday, April 15, 2002 8:41 PM
To: Pgsql-Admin
Subject: [ADMIN] ODBC Problem
[ADMIN] string PK vs. integer PK
Can anyone offer some insight as to what should be considered when choosing between a primary key that is an integer vs. a primary key that is a string value? Does one or the other affect indexing speed? Update speed? Is a serial integer value better than a PK that has some "value" in its meaning but is a string? Thanks, Jodi

Jodi L Kanter
BioInformatics Database Administrator
University of Virginia
(434) 924-2846
[EMAIL PROTECTED]
Re: [ADMIN] string PK vs. integer PK
As a general rule, a primary key (or any relationship key) should not contain a value that means something beyond its use in relating entities. The problem is that if a field describes an object, the day may come when you want to change the description, but you *never* want to change the field that identifies the record and relates it to other records. I can't offer insider knowledge on the speed of int vs. string, but it seems very reasonable to think that an int would be faster. On the other hand, I'm forced to use strings in my app, and the performance doesn't suffer much, so they must be close. -Nick -- Nick Fankhauser [EMAIL PROTECTED] Phone 1.765.935.4283 Fax 1.765.962.9788 Ray Ontko Co. Software Consulting Services http://www.ontko.com/

-----Original Message-----
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]] On Behalf Of Jodi Kanter
Sent: Monday, April 15, 2002 1:56 PM
To: Postgres Admin List
Subject: [ADMIN] string PK vs. integer PK
Re: [ADMIN] string PK vs. integer PK
Jodi: This opinion has less to do with Postgres and more to do with a general philosophy, but we always use a 4-byte integer column as the primary key for almost every table. None of the following reasons means it has to be that way, but all support ints over strings:

1) Strings would normally be bigger than 4-byte integers. While I have no measurements showing the size difference produces a perceivable performance difference, theoretically speaking a smaller, more compact value should be better.
2) Int processing of equality and inequality comparisons should be much faster than string compares, though again I have no measurements to back this up.
3) Ints make it easy to employ Postgres sequences for generating PK values. These are very nice and quick.
4) Lastly, I'll just agree with the other replies that you should avoid putting any kind of meaningful data into PK columns. The fact that you want to consider strings almost implies meaning in the values. For example, if the values are dictated by an outside source which happened to use an alphanumeric value, then they carry meaning to that outside source, even if that meaning is simply their own unique identifier.

We've been faced with this question many a time. We almost always choose an int PK, and more often than not when we've strayed from this course, it's come back to haunt us. Hope this is helpful. Marc Mitchell - Senior Application Architect, Enterprise Information Solutions, Inc., 4910 Main Street, Downers Grove, IL 60515, (630) 512-0570, [EMAIL PROTECTED]

----- Original Message -----
From: Jodi Kanter [EMAIL PROTECTED]
To: Postgres Admin List [EMAIL PROTECTED]
Sent: Monday, April 15, 2002 1:56 PM
Subject: [ADMIN] string PK vs. integer PK
Re: [ADMIN] string PK vs. integer PK
On 15 Apr 2002 at 14:20, Nick Fankhauser wrote: As a general rule, a primary key (or any relationship key) should not contain a value that means something beyond its use in relating entities. The problem is that if a field describes an object, the day may come when you want to change the description, but you *never* want to change the field that identifies the record and relates it to other records.

FWIW, I would recommend not using business values as a primary key (and, by extension, as a foreign key). Business values are just that: business values. Sure, put unique constraints on business values if you want. Just don't make them a primary key or a foreign key. -- Dan Langille, The FreeBSD Diary - http://freebsddiary.org/ - practical examples
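A minimal sketch of the layout the replies converge on; the table and column names are illustrative. The surrogate key drives all relationships, while the business value gets its own unique constraint and stays free to change:

```sql
CREATE TABLE customer (
    id        serial PRIMARY KEY,          -- meaningless surrogate key
    cust_code varchar(16) NOT NULL UNIQUE, -- business value: unique, but not the PK
    name      text
);

CREATE TABLE invoice (
    id          serial PRIMARY KEY,
    customer_id integer NOT NULL REFERENCES customer(id)  -- FK uses the surrogate
);
```

If cust_code ever has to change, only one row is touched; no foreign keys need updating.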
[ADMIN] Killing a child process
Every once in a while I'll spot a query that is taking an unusually long time to run (a child process that keeps the processor at 100% for, like, 3 minutes). Is there a way to see the queries currently executing, and/or to kill a single process without affecting the operation of the server? (I'm not sure, but I think I tried a simple kill once and it killed the whole server.) Thanks, Bojan
Re: [ADMIN] Killing a child process
Bojan Belovic [EMAIL PROTECTED] writes: Is there a way to see the queries currently executing...

ps, top? Also see the stats views, if you are running 7.2.

...and/or kill a single process without affecting the operation of the server?

A kill -INT to a backend will issue a query cancel, which is pretty safe. regards, tom lane
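A sketch of the 7.2 workflow Tom describes; the database name and pid below are illustrative:

```shell
# List current backends (queries are shown if stats_command_string is enabled)
psql -c 'SELECT * FROM pg_stat_activity;' mydb

# Cancel just the offending query; the backend and the server keep running
kill -INT 12345
```

A plain kill (SIGTERM) or kill -9 is what risks taking down more than the one query, which matches the poster's experience.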
Re: [ADMIN] Query planner quirk?
You can implicitly force use of the index by adding an ORDER BY that matches it:

explain SELECT d.bank_id, d.time_id, d.item_name, d.item_value
FROM mbz_rpt_item_val d
WHERE d.bank_id IN (277,535,739,1234,1330) and d.time_id IN ('P_6') and d.org_allow = 1
ORDER BY time_id, org_allow, bank_id;

Actually, just the time_id might be enough, depending on what other indexes are out there. - brian

On Sat, 13 Apr 2002, Dave Menendez wrote: I have a 2 million+ table, mbz_rpt_item_val, and an index test_idx2 on (time_id, org_allow, bank_id). The query planner chooses a sequential scan (about 20 seconds) unless I SET ENABLE_SEQSCAN TO OFF, in which case the index scan returns in 2 seconds. I'm using Postgres 7.2 and did a full vacuum analyze before trying this.

Wm. Brian McCane
Search http://recall.maxbaud.net/
Usenet http://freenews.maxbaud.net/
Auction http://www.sellit-here.com/
Re: [ADMIN] psql command line history not working
Charlie Toohey writes: I am trying to get command-line history and editing to work with psql, as life is a pain without it.

Check that you have the readline header files installed, and whether configure finds them (readline.h, history.h, or similar). Look for RPMs named readline-devel or some such. -- Peter Eisentraut [EMAIL PROTECTED]
Re: [ADMIN] psql command line history not working
[getting readline support] "- installed readline library"

Did you install the header files? If you are installing a binary that is compiled to use the library, the library alone is enough. If you are compiling from scratch, you need the .h files as well. On Debian potato the command is: apt-get install libreadline4-dev. Hope this helps.

----- Original Message -----
From: Charlie Toohey [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Sent: Monday, April 15, 2002 1:53 PM
Subject: [ADMIN] psql command line history not working