[GENERAL] A lock to fit a key.
I need to figure out a way to allow users to update only a certain column in a table. Does anyone have any ideas to start with? David ---(end of broadcast)--- TIP 4: Don't 'kill -9' the postmaster
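One workaround, sketched here as a suggestion rather than a confirmed recipe (PostgreSQL 7.x has no per-column GRANT, and the table and column names below are invented for illustration), is to enforce the restriction with a plpgsql trigger:

```sql
-- Hypothetical sketch: on a table "accounts", allow non-owner updates to
-- touch only the "balance" column. In 7.x a trigger function returns opaque.
-- NULL handling is left aside for brevity.
CREATE FUNCTION guard_columns() RETURNS opaque AS '
BEGIN
    -- Reject the update if any column other than "balance" changed
    IF NEW.id <> OLD.id OR NEW.name <> OLD.name THEN
        RAISE EXCEPTION ''only the balance column may be updated'';
    END IF;
    RETURN NEW;
END;
' LANGUAGE 'plpgsql';

CREATE TRIGGER guard_columns_trig BEFORE UPDATE ON accounts
    FOR EACH ROW EXECUTE PROCEDURE guard_columns();
```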
[GENERAL] Re: chr() in 7.0.3 !
On Sun, Aug 19, 2001 at 08:46:55PM +, omid omoomi wrote: hi all, I have pg 7.0.3 installed. I can use ascii() to get the ascii code of a character but I can not use chr() to reverse the function... is it a matter of the pg version I have? If so, is there any other way to get the text from the ascii number? 7.1 renamed ichar() to chr() --- in 7.0.3 you must use ichar(), or rather upgrade to 7.1. Karel -- Karel Zak [EMAIL PROTECTED] http://home.zf.jcu.cz/~zakkr/ C, PostgreSQL, PHP, WWW, http://docs.linux.cz, http://mape.jcu.cz
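The round trip described above, sketched per Karel's advice (the ichar() spelling is taken from the post; only the chr() spelling applies on 7.1 and later):

```sql
SELECT ascii('A');   -- 65
SELECT ichar(65);    -- 'A' on 7.0.3
-- On 7.1 and later the same function is spelled chr():
-- SELECT chr(65);
```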
Re: [GENERAL] Overwriting Operator for numeric and float8 fails
On Sun, 19 Aug 2001 10:23:39 +0200, Johann Zuschlag wrote: I get the following error: Unable to identify an operator '=' for types 'numeric' and 'float8'... snip create function numeric_eq(numeric,float8) returns bool as '' language 'internal'; create operator = ( leftarg=numeric, rightarg=float8, procedure=numeric_eq, commutator='=', negator='', restrict=eqsel, join=eqjoinsel ); The problem is: It doesn't work, and the backend process crashes and gets restarted. I tried it with 7.0.3 and I guess 7.1.3 behaves the same. What am I doing wrong? Can somebody give me a hint? I appreciate your help, but I can't change the queries of my (commercial) application. So the first thing I did was make the changes in the ODBC driver. But still, the overriding stated above should work one way or another. Any ideas how to do it correctly? Johann Johann Zuschlag [EMAIL PROTECTED]
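One approach that may avoid the crash, offered as a sketch rather than a confirmed fix (the function name is illustrative): back the operator with a SQL-language function that casts and compares, instead of language 'internal' with an empty AS clause — the backend accepts the bogus internal entry at definition time and only fails when it is called.

```sql
-- Compare by promoting the float8 to numeric; adjust the cast direction
-- if you prefer float semantics.
CREATE FUNCTION numeric_float8_eq(numeric, float8) RETURNS bool AS '
    SELECT $1 = $2::numeric;
' LANGUAGE 'sql';

CREATE OPERATOR = (
    leftarg    = numeric,
    rightarg   = float8,
    procedure  = numeric_float8_eq,
    commutator = '=',
    restrict   = eqsel,
    join       = eqjoinsel
);
```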
[GENERAL] Finding database names for overnight vacuum
G'day all. I have a cron job which vacuums etc all of my databases but it does not work with 7.1.2 because of the change in structure of .../data/base. It currently says: for q in /usr/local/pgsql/data/base/*; do blaa blaa done How does one get an equivalent list of all databases from 7.1.2? (and where is it documented?) Cheers and thanks, Stephen Davies Stephen Davies Consulting [EMAIL PROTECTED] Adelaide, South Australia. Voice: 08-8177 1595 Computing Network solutions. Fax: 08-8177 0133
Re: [GENERAL] Finding database names for overnight vacuum
I have a cron job which vacuums etc all of my databases but it does not work with 7.1.2 because of the change in structure of .../data/base. It currently says: for q in /usr/local/pgsql/data/base/*; do blaa blaa done How does one get an equivalent list of all databases from 7.1.2? (and where is it documented?) Not sure where it's documented, but check out contrib/oid2name in the source tarball. - Brandon b. palmer, [EMAIL PROTECTED] pgp:crimelabs.net/bpalmer.pgp5
Re: [GENERAL] Finding database names for overnight vacuum
Stephen Davies wrote: G'day all. I have a cron job which vacuums etc all of my databases but it does not work with 7.1.2 because of the change in structure of .../data/base. It currently says: for q in /usr/local/pgsql/data/base/*; do blaa blaa done How does one get an equivalent list of all databases from 7.1.2? (and where is it documented?) psql -l, documented in the psql man page. So for your purposes, you want: $ psql -lt | awk '{print $1}' | grep -v template0 bray junk lfix stjohns template1 -- Oliver Elphick [EMAIL PROTECTED] Isle of Wight http://www.lfix.co.uk/oliver PGP: 1024R/32B8FAA1: 97 EA 1D 47 72 3F 28 47 6B 7E 39 CC 56 E4 C1 47 GPG: 1024D/3E1D0C1C: CA12 09E0 E8D5 8870 5839 932A 614D 4C34 3E1D 0C1C But I would not have you to be ignorant, brethren, concerning them which are asleep, that ye sorrow not, even as others which have no hope. For if we believe that Jesus died and rose again, even so them also which sleep in Jesus will God bring with him. I Thessalonians 4:13,14
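Oliver's pipeline can be sketched end-to-end as a cron-style loop. Here it runs against a captured sample of `psql -lt` output rather than a live server, so the sample rows and the `vacuumdb` echo are illustrative assumptions:

```shell
# Sample of `psql -lt` output (the database name is the first column).
# In a real cron job you would capture this with: psql -lt
sample='  bray      | postgres | SQL_ASCII
  template0 | postgres | SQL_ASCII
  template1 | postgres | SQL_ASCII'

# Extract the names, skipping blank lines and template0
# (template0 does not normally accept connections).
dbs=$(printf '%s\n' "$sample" | awk '{print $1}' | grep -v '^template0$' | grep -v '^$')

for db in $dbs; do
    echo "would vacuum: $db"
done
```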
RE: [GENERAL] Finding database names for overnight vacuum
Out of curiosity, what's wrong with vacuumdb -a? (Vacuum all databases.) -Original Message- From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]] On Behalf Of Stephen Davies Sent: Monday, 20 August 2001 7:33 PM To: [EMAIL PROTECTED] Subject: [GENERAL] Finding database names for overnight vacuum G'day all. I have a cron job which vacuums etc all of my databases but it does not work with 7.1.2 because of the change in structure of .../data/base. It currently says: for q in /usr/local/pgsql/data/base/*; do blaa blaa done How does one get an equivalent list of all databases from 7.1.2? (and where is it documented?)
[GENERAL] Re: Finding database names for overnight vacuum
Quoting Stephen Davies: I have a cron job which vacuums etc all of my databases but it does not work with 7.1.2 because of the change in structure of .../data/base. It currently says: for q in /usr/local/pgsql/data/base/*; do blaa blaa done How does one get an equivalent list of all databases from 7.1.2? (and where is it documented?) In an Oracle install, one has a file /etc/oratab (or something less world-wide like /usr/share/oracle/oratab) which lists the available Oracle instances. It also indicates which need to be started at system boot -- i.e. the script which starts the Oracle instances at system boot checks this file for which need to be started. Someone recently posted a patch/contrib item to do this for Postgres. A logical extension would be to have another field which indicates which databases need to be VACUUMed or VACUUM ANALYZEd when the appropriate cron job executes. Cheers, Colin
[GENERAL] Re: is this possible? it should be!
On Sun, Aug 19, 2001 at 01:49:00PM -0400, [EMAIL PROTECTED] wrote: Hello I have statements (highly simplified just to get the point across) like select a,b,c from a where d=2 order by e limit 10; Now I think that because of order by the above query already knows the result of the below query select count(*) from a where d=2; The point is that I want to know the total number of matches and I also want to use limit. And I don't want to do two queries. Shouldn't: select a,b,c, count(*) from a where d=2 order by e limit 10; do what you want?
[GENERAL] Re: is this possible? it should be!
On Mon, Aug 20, 2001 at 07:44:56AM -0500, Bruno Wolff III [EMAIL PROTECTED] wrote: Shouldn't: select a,b,c, count(*) from a where d=2 order by e limit 10; do what you want? Upon further review, I see that this construct isn't valid.
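One construct that should be valid, sketched here as a guess at what the poster wants (table and column names are theirs), is to compute the total in a target-list subselect so the client issues a single statement:

```sql
-- Fetch the first 10 rows and the total match count in one query.
-- Internally the table is still scanned for the count, but it is
-- one statement from the client's point of view.
SELECT a, b, c,
       (SELECT count(*) FROM a WHERE d = 2) AS total_matches
FROM a
WHERE d = 2
ORDER BY e
LIMIT 10;
```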
Re: [GENERAL] Re: Future: Oracle corporation goes bankrupt!! (CIA, KGB, British Intelligence reports)
I have one word for you, alavoor, aka Al Dev. I can't say more or I would have to kill you. :-) I'm sorry Chris, this appears to be a careful, appropriately worded and reasoned post. What on earth were you thinking of? We want ill-informed unsubstantiated claims or we want nothing. -- Niall Litchfield Oracle DBA Audit Commission UK Legal disclaimer required by my employer ** This email contains information intended for the addressee only. It may be confidential and may be the subject of legal and/or professional privilege. Any dissemination, distribution, copying or use of this communication without prior permission of the addressee is strictly prohibited. ** Chris Weiss [EMAIL PROTECTED] wrote in message news:9kh5r5$13on$[EMAIL PROTECTED]... Most of the users of Oracle are engineers, IT staff, and business people who deal in reality and not hyperbole. If you can publish meaningful results in terms of cost savings, performance, reliability, tunability, etc., meaning if you can give us a truly useful argument for switching to Postgres, some of us might listen to you. When you engage in gross exaggerations with no basis in reality, you not only cause people to lose interest in your product, you drive them away and discredit the Open Source movement. Open Source has had a profound impact on software development and on the internet as a whole. Of my five computers that I use for development, three are Linux, and I use Postgres to support my mail server, and MySQL to support Bugzilla. However, when someone like you comes on a business oriented news group and spams the group with what sounds like the ravings of a delusional schizophrenic, it makes me wonder how good my Open Source software truly is.
You would do yourself and your product a big favor if you would consider talking to a professional about your delusions, or at least get someone close to the product to edit your rantings before you post them. Chris alavoor [EMAIL PROTECTED] wrote in message news:[EMAIL PROTECTED]... What happens if a large group of Oracle customers join hands on the internet and develop PostgreSQL??? This guy Oracle Corporation is finished!! All the technology gurus and economic gurus predict that Oracle will be doomed because of the rising and booming internet (which is rapidly fueling PostgreSQL). The Number 1 enemy of Oracle corporation is the Oracle customer himself!! Oracle sales must carefully watch their enemy, The Oracle Customers!! All the Oracle customers will be sneaking into PostgreSQL secretly without the knowledge of Oracle sales persons. Intelligence agencies from all the 300 countries will be watching the drama of Oracle corporation... and the internet and PostgreSQL!! SQL server is very important for the WORLD ECONOMY!! SQL server is very important for the economy of each and every country and every company in the world. Intelligence agencies from the USA like the CIA, Russian KGB, Chinese Intelligence, Japanese Intelligence, British Intelligence and many many others will be looking at the rapid pace of development of PostgreSQL and the rapid development of the internet!! Everybody is keeping PostgreSQL a top computing secret!! There will be hundreds of millions of downloads of PostgreSQL around the globe!! Oracle Corporation MUST STOP saying that they are a SQL database company; if Oracle Corp says they are a SQL database company then they will go out of business!! (investors will pull out all their money from Oracle stocks) Oracle Corp must do something else to attract the investors!! Watch out Oracle Corp!! The PostgreSQL axe is falling on it at thundering speed!! Visit PostgreSQL at http://www.postgresql.org -- Bruce Momjian | http://candle.pha.pa.us [EMAIL PROTECTED] | (610) 853-3000 + If your life is a hard drive, | 830 Blythe Avenue + Christ can be your backup. | Drexel Hill, Pennsylvania 19026
Re: [GENERAL] database corruption on VACUUM ANALYZE, now unable to initdb
Corey Minter [EMAIL PROTECTED] writes: I don't understand how I wouldn't be able to run initdb. How much free disk space have you got? regards, tom lane
[GENERAL] Can't compile using stock RH7.1 and libpq++
Can anyone please help (before I go insane). In learning C++, Qt and PostgreSQL I have created a very simple program to connect to my database, but I cannot compile it. I really cannot see what I am doing wrong. Postgres is running and was fully installed when I installed Red Hat 7.1. The libpq++ header is truly in the specified path. I can successfully compile and connect using C. Thanks in advance. John Bend. Here is the simple program:

#include <iostream>
#include <libpq++.h>

int main() {
    PgDatabase db("dbname=template1");
    cout << "Program pg1.cpp finished" << endl;
}

I compile it with the line: g++ pg1.cpp -I/usr/include/pgsql -lpq And get the following error: /tmp/ccqlJ23X.o: In function `PgDatabase::PgDatabase(char const *)': /tmp/ccqlJ23X.o(.PgDatabase::gnu.linkonce.t.(char const *)+0x13): undefined reference to `PgConnection::PgConnection(char const *)' /tmp/ccqlJ23X.o: In function `PgDatabase::~PgDatabase(void)': /tmp/ccqlJ23X.o(.gnu.linkonce.t._._10PgDatabase+0x1d): undefined reference to `PgConnection::~PgConnection(void)' /tmp/ccqlJ23X.o: In function `PgDatabase type_info function': /tmp/ccqlJ23X.o(.gnu.linkonce.t.__tf10PgDatabase+0x10): undefined reference to `PgConnection type_info function' /tmp/ccqlJ23X.o(.gnu.linkonce.t.__tf10PgDatabase+0x18): undefined reference to `PgConnection type_info node' collect2: ld returned 1 exit status
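A likely cause, offered as an assumption rather than a confirmed diagnosis: the undefined PgConnection symbols live in the C++ wrapper library, which must be linked explicitly in addition to libpq, e.g.:

```
g++ pg1.cpp -I/usr/include/pgsql -lpq++ -lpq
```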
[GENERAL] pg 7.x: Allowing non table owners to COMMENT ON xyz... ?
Can a user who does not own a table be set up to run COMMENT commands for a table?
[GENERAL] Loading, but not running a query
Is there an equivalent to '\i' that will load up the query buffer but not run the query? (I have a query stored in a file that I want to modify on the fly, but not overwrite the query file.) -- Jeff Boes vox 616.226.9550 Database Engineer fax 616.349.9076 Nexcerpt, Inc. [EMAIL PROTECTED]
[GENERAL] time interval question
Given a timestamp datum, e.g. 2001-08-18 06:01:23+08, how can I check whether it lies between 00:00:00 and 08:00:00, ignoring the date? -- content-type:kiss/hug
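One way this can be done, sketched under the assumption that the column is a timestamp named ts in a table events (both names invented): extract or cast away the date part and compare times only.

```sql
-- Compare only the time-of-day portion of the timestamp.
SELECT * FROM events
WHERE ts::time >= '00:00:00' AND ts::time < '08:00:00';

-- An alternative using date_part:
SELECT * FROM events
WHERE date_part('hour', ts) < 8;
```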
[GENERAL] Probably an old bug, but here you go anyway...
Hi, I'm working with 7.0.2, so this might be an old bug. I saw some other bugs mentioned in this newsgroup involving to_char that were said to be resolved for 7.1.0, but I just thought I'd volunteer it anyway... db1=# select flastpostdate, to_char(flastpostdate, 'hh:mi:ss mon-dd-yy am') from tThreads order by flastpostdate desc limit 1;
     flastpostdate      |        to_char
------------------------+------------------------
 2001-08-14 13:22:02-07 | 01:22:02 aug-14-01 am
(1 row)
It displays results that should be pm as am. Seems to work for other hours, though...
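For what it's worth, a workaround sketch (the column and table names are from the post; the HH24 suggestion is an assumption about what's wanted): the 24-hour format code sidesteps the am/pm indicator entirely.

```sql
-- hh and hh12 request 12-hour clock output; hh24 avoids relying on AM/PM.
SELECT to_char(flastpostdate, 'hh24:mi:ss mon-dd-yy')
FROM tThreads ORDER BY flastpostdate DESC LIMIT 1;
```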
[GENERAL] A libpq++ question.
Hi all, My question is: Is this library thread safe? Has anybody ever tried to use objects from this lib in multiple threads? Regards Hicnar
[GENERAL] spool
How do I spool my results to a file? I know there is a command to turn spooling on and off. How do I use it?
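Assuming psql is the client (the question reads like it comes from Oracle's SQL*Plus, where `spool` lives), the closest equivalent is the `\o` meta-command; a sketch, with an invented file and table name:

```
\o /tmp/results.txt
SELECT * FROM mytable;
\o
```

The first `\o` sends subsequent query output to the file; the bare `\o` switches output back to stdout.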
[GENERAL] insert failed
Hi, when we try to insert into a table, the insert fails with a cache lookup failure error. Below is the complete statement and the error generated. DEPG= insert into EXTRANET_SECTION (ID, section_name, parent, extranetname) values (255, ' Main', 0, 'test'); ERROR: fmgr_info: function 19464: cache lookup failed DEPG= Any help please!
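The error names OID 19464; a diagnostic sketch (a common cause of this message is a trigger or default that still references a dropped-and-recreated function) is to check the catalogs:

```sql
-- Does anything by that OID still exist?
SELECT oid, proname FROM pg_proc WHERE oid = 19464;

-- List triggers on the table; a trigger whose function was dropped and
-- recreated will still point at the old OID in tgfoid.
SELECT tgname, tgfoid FROM pg_trigger
WHERE tgrelid = (SELECT oid FROM pg_class WHERE relname = 'extranet_section');
```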
RE: [GENERAL] Calling stored procedures.
You need to declare the variable ret, i.e. add a line before BEGIN like: DECLARE ret integer; Your function will not work without that. You may still get an error for some other unknown reason, or you may not. I have found that plpgsql errors can be highly uninformative, or downright misleading. Hi. I've created some stored procedures. For example: create function validate_user(varchar, varchar) RETURNS integer AS ' BEGIN SELECT count(*) INTO ret FROM users WHERE userid = $1 and passwd = $2; RETURN ret; END; ' LANGUAGE 'plpgsql'; When I try to call it with the command select validate_user('user', 'passwd'), I get this error message: Error Number: -2147467259 Error Description: Error while executing the query; ERROR: fmgr_info: function 19264: cache lookup failed I did some research and found that this error message occurs when a stored procedure refers to another function that has been dropped from the database, and that to avoid this error you must drop all objects referring to this stored procedure and recreate them. The problem is that I get this error message for NEWLY created procedures. Any ideas about this? I don't know what this OID 19264 is referring to. Thanks. John
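Putting the advice together, the corrected function would look like this (a sketch; the schema is taken from the original post):

```sql
CREATE FUNCTION validate_user(varchar, varchar) RETURNS integer AS '
DECLARE
    ret integer;
BEGIN
    SELECT count(*) INTO ret FROM users
    WHERE userid = $1 AND passwd = $2;
    RETURN ret;
END;
' LANGUAGE 'plpgsql';
```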
Re: [GENERAL] Re: Perfomance decreasing
On Fri, Aug 17, 2001 at 02:09:33PM +0600, Ivan Babikov wrote: In this case, however, I think he may be understating too much. I read the original question as PostgreSQL is not useful for production systems. Call me melodramatic if you like: you are probably right. The point, I guess, is this: it would be really useful to have a document somewhere that honestly described the limitations of (the current version of) PostgreSQL. Do you mean Postgres becomes very weak when the size of a database reaches 1.5Gb or something close to it? Maybe this is one of those typical questions, but I have heard people complaining that Postgres is just for quite small bases. Now we have to choose a free database for the inexpensive branch of our project, and Interbase looks better at working with quite big bases (up to 10-20Gb). I am not sure now that Postgres will work with bases greater than 10Gb; what does everyone think? Thanks in advance, Ivan Babikoff. In my experience postgresql has no problems with big databases. I have had several problems but they had to do with the OS and hardware, not the DB. - Einar Karttunen
[GENERAL] multiple table indexes
Does Postgres support multi-table indexes? I want to update one table with data from another, but it's very slow because I have a rather bulky WHERE clause. I was wondering if I could index the two tables together to make execution faster.
Re: [GENERAL] Real newbie question.
Hi Bill, Have you solved this yet? When the psql prompt becomes a ' it means you've started a text string and not ended it. i.e.: psql# INSERT INTO foo (stringfield) VALUES ('blah blah blah); psql'# You need to type another ' then press enter to finish the string. Once on a new line, type \r to clear the buffer of the command you stuffed, or \e to open up a window which edits it. i.e.: psql# INSERT INTO foo (stringfield) VALUES ('blah blah blah); psql'# safdsadfa psql'# Uh oh.. something wrong psql'# Aha... psql'# ' psql# \r Query buffer reset. psql# INSERT INTO foo (stringfield) VALUES ('blah blah blah'); :-) Hope that's helpful; I did it from memory, so the exact output may look slightly different. Regards and best wishes, Justin Clift William D. Tallman wrote: Just started fiddling with PostgreSQL, and know nothing of databases. I have the documentation that came with the Mandrake-8.0 installation written by Thomas Lockhart, and I've perused Bruce Momjian's book online, and I cannot find the answer to this problem: When I am entering values into a table, and make a mistake, sometimes there is an error message, and sometimes the equal sign in the prompt becomes a single quote. Error messages I can puzzle out, but I have not discovered how to recover from the single quote in the prompt. I wind up having to kill the terminal and start all over. Can someone point me to the explanation of this? Thanks, and I hope this question isn't too rudimentary for this NG. Bill Tallman -- My grandfather once told me that there are two kinds of people: those who work and those who take the credit. He told me to try to be in the first group; there was less competition there. - Indira Gandhi
[GENERAL] WAL
Hi, this is my first message to the list. I've been looking for some documentation regarding WAL configuration. Particularly, I want to know if there's a way to inhibit WAL for a certain table or for certain periods of time (i.e., during batch loads, when I don't really care about transaction logging, since if anything goes wrong I can just restart the whole process without affecting the integrity of the table). I've searched the archives before asking, and the only thing related to this was found on the hackers list; it explained how to limit the size and number of the WAL files, but it involved a patch to the code. I don't have a problem with patching the source if this is the last resort, but it would be great if this could be done with a configuration directive (I didn't find anything related to this in the admin manual, so this is probably impossible) or with a SQL command. Any help on this subject will be greatly appreciated!! :). Thanks in advance and sorry for the long post. -- Fernando Ipar - [EMAIL PROTECTED]
Re: [GENERAL] Re: Perfomance decreasing
On Friday 17 August 2001 15:09, Ivan Babikov wrote: In this case, however, I think he may be understating too much. I read the original question as PostgreSQL is not useful for production systems. Call me melodramatic if you like: you are probably right. The point, I guess, is this: it would be really useful to have a document somewhere that honestly described the limitations of (the current version of) PostgreSQL. Do you mean Postgres becomes very weak when the size of a database reaches 1.5Gb or something close to it? Maybe this is one of those typical questions, but I have heard people complaining that Postgres is just for quite small bases. Now we have to choose a free database for the inexpensive branch of our project, and Interbase looks better at working with quite big bases (up to 10-20Gb). I am not sure now that Postgres will work with bases greater than 10Gb; what does everyone think? I do not see any problems. It works for me, and I have no problems. The only problem you could have is with vacuum. It is solvable anyway. And if you do not have many updates it is not an issue either (I mean if you do not update more than 25% of the DB each day). Actually, for anyone listening for such advice, I would recommend creating a test installation and stress testing it before going to production. Interbase has its own problems. -- Sincerely Yours, Denis Perchine -- E-Mail: [EMAIL PROTECTED] HomePage: http://www.perchine.com/dyp/ FidoNet: 2:5000/120.5 --
Re: [GENERAL] Problems with UPDATE
If orf is not associated with a primary key or unique index, the possible combinations might well exceed 14,000. For example, if 3 records in yeast1 and 4 records in temp share the same orf value, the query will update yeast1 12 times (i.e. the 3 yeast1 records will be updated 4 times each). Can you clarify (for the server or for us) the relationship between the tables? What primary keys or indexes do these tables have? Andrew --- Evan Zane Macosko [EMAIL PROTECTED] wrote: Hello everyone, I am having a strange problem using the UPDATE function in PostgreSQL. I want to move a large number (6000) of entries in one table to another, so here is my syntax: UPDATE yeast1 SET c1 = temp.profile_1 FROM temp WHERE yeast1.orf = temp.orf; This query results in over 14000 entries being added! I end up with multiples of the same entry, which obviously is undesirable. Does anyone have any suggestions on how to fix this? thanks! Evan __ Do You Yahoo!? Make international calls for as low as $.04/minute with Yahoo! Messenger http://phonecard.yahoo.com/
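If orf really is not unique in temp, one defensive sketch (assuming any one matching profile_1 per orf will do; the temp_dedup name is invented) is to collapse temp to one row per orf before updating:

```sql
-- Keep at most one source row per orf, then join against the deduplicated copy.
SELECT DISTINCT ON (orf) orf, profile_1
INTO TABLE temp_dedup
FROM temp;

UPDATE yeast1 SET c1 = temp_dedup.profile_1
FROM temp_dedup
WHERE yeast1.orf = temp_dedup.orf;
```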
Re: [GENERAL] Perfomance decreasing
On Mon, Aug 20, 2001 at 02:41:05PM -0400, [EMAIL PROTECTED] wrote: Does it help if you drop and recreate the indexes, in addition to the vacuuming you're doing now? I think this was suggested not long ago on this list. I reduced the number of times vacuum was run with analyze, and run a normal vacuum twice a day. The files are no longer growing since. I'm leaving for holidays now; I'll investigate more when I return next week. /erwin Erwin Lansing [EMAIL PROTECTED] on 08/14/2001 04:38:59 AM To: [EMAIL PROTECTED] cc: (bcc: Wesley Sheldahl/Lex/Lexmark) Subject: Re: [GENERAL] Perfomance decreasing On Tue, Aug 14, 2001 at 02:06:40PM +0600, Alexander Loginov wrote: Hello. I have a question about performance. I'm running PostgreSQL 7.1.2 on FreeBSD 4.3. For the first 1-2 days of running, performance is excellent. But after that, speed begins to decrease, and after a week of operation performance falls 8-10 times compared to the first day of use. I'm doing vacuum periodically (once an hour), but performance still falls. After that I dump the database as a text file, do dropdb/createdb, and restore the database from the dump - performance is excellent again (for 1-2 days). Why does this situation occur? Should I use VACUUM ANALYZE instead of VACUUM? I have actually the same problem, also FreeBSD 4.3, pgsql 7.1.2. I do use VACUUM ANALYZE quite often. The problem in the end gets that bad that perl jobs cannot perform any SELECTs, or at least they stop returning results before dbi times out. So far I have tracked the problem down to the size of the database in the filesystem, where problems start occurring when it exceeds 1.4 Gb. A dump/drop/create/restore reduces file size to approx. 350 Mb. Any pointers would be helpful, as a weekly dump/restore is not quite optimal :) /erwin -- Erwin Lansing -- http://droso.org You've got mail
Re: [GENERAL] nextval, sequences and sequencenames
I think someone pointed out not long ago that that naming convention isn't always 100% reliable. How about creating the sequence dynamically (using that naming convention, even), then creating the table dynamically and having it explicitly use the sequence you just created. That way you *know* the name of the sequence; you should probably check to be sure the name isn't already in use, or trap for errors when creating the sequence and adjust accordingly. Gordan Bobic [EMAIL PROTECTED] on 08/15/2001 04:14:02 AM To: Wieger Uffink [EMAIL PROTECTED] cc: PostgreSQL-General [EMAIL PROTECTED] (bcc: Wesley Sheldahl/Lex/Lexmark) Subject: Re: [GENERAL] nextval, sequences and sequencenames On Tue, 14 Aug 2001, Wieger Uffink wrote: My question: is there any way of retrieving the sequence_name corresponding to the respective column, knowing just the tablename and columnname? The reason I need to do this is because the application I write dynamically creates new tables, and I have no way of knowing the sequence name beforehand. Try tablename_fieldname_seq. That is the default sequence name for a serial field. Cheers. Gordan
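The suggestion above, sketched concretely (all names are illustrative): create the sequence yourself so its name is never in doubt, then point the column's default at it explicitly.

```sql
CREATE SEQUENCE mytable_id_seq;

CREATE TABLE mytable (
    id integer DEFAULT nextval('mytable_id_seq') NOT NULL,
    payload text
);
```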
RE: [ODBC] Re: [GENERAL] Updating a view
Hi, Thanks to everyone who answered my questions. I got the problem solved. Thanks Ryan -Original Message- From: Fernando Schapachnik [mailto:[EMAIL PROTECTED]] Sent: Wednesday, August 15, 2001 12:28 PM To: Ryan C. Bonham Cc: [EMAIL PROTECTED]; [EMAIL PROTECTED] Subject: [ODBC] Re: [GENERAL] Updating a view I'm not sure about Postgres in particular, but updating a view is not always possible. Consider CREATE VIEW TEST AS SELECT sum(money) FROM table; What would it mean to update an aggregate? The proper and abstract way to do this in general is to define a stored procedure (they would be kind of the setters of the object) and a view (kind of the getters). Regards. In an earlier message, Ryan C. Bonham wrote: Hi, OK, I know this is something stupid; would somebody mind explaining it to me? Also, sorry if this gets posted twice; I got an error back the first time. I have a Visual Basic project that accesses a PostgreSQL database. One form pulls information from a view in Postgres. If the user changes anything on the form, the update fails. I tried running the same update from PgAdmin and it also fails with the following message. I just don't understand what I need to create, I guess. Thanks for the help. Fernando P. Schapachnik Network planning and technology VIA NET.WORKS ARGENTINA S.A. [EMAIL PROTECTED] Tel.: (54-11) 4323-3381
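In PostgreSQL specifically, a simple view can be made updatable with a rewrite rule; a minimal sketch (table, view, and rule names are invented, and this only works for views that map one-to-one onto base rows, not aggregates):

```sql
CREATE TABLE person (id int, name text);
CREATE VIEW person_v AS SELECT id, name FROM person;

-- Route UPDATEs on the view to the underlying table.
CREATE RULE person_v_upd AS ON UPDATE TO person_v
DO INSTEAD
    UPDATE person SET name = NEW.name WHERE id = OLD.id;
```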
Re: [GENERAL] Is it possible to work with big databases (up to 10Gb) under Postgres?
Hi Ivan, I was running a database with ~27GB of info (admittedly in large, very simple tables) on PostgreSQL 7.0.3 a while ago for a few months. Didn't have any real problems with it, apart from having to accept that a lot of data takes time to process. :-/ Regards and best wishes, Justin Clift Ivan Babikov wrote: Hello All! Maybe this is one of those typical questions, but I have heard people complaining that Postgres is just for quite small bases, less than 2Gb. Now, we have to choose a free database engine for an inexpensive branch of our project and Interbase looks better in sense of capability to work with quite big bases (up to 10-20Gb). I am not sure that Postgres will work fine with databases greater than 10Gb. What does everyone think of it? Thanks in advance, Ivan Babikoff.
Re: [GENERAL] ODBC error
Michelle Murrain wrote: I'm trying to isolate a problem - using Cold Fusion, and sending pg a large text block, I'm getting an error: ODBC Error Code = 08S01 (Communication link failure) Error while executing the query; Query string is too long Is this a Cold Fusion problem, or a Postgres ODBC driver problem? I know that pg now has no limit on text objects. Using pg 7.1 on Debian stable, using CF 4.5. What version of the ODBC driver, and can you post the CF code? Jochem
Re: [GENERAL] Is it possible to work with big databases (up to 10Gb) under Postgres?
Hi Ivan! Well, I don't have a 10 Gb database _yet_; but my server manages 3 databases that currently total over 17 Gb without a problem. (Also, linking tables from different databases by querying via ODBC and MS Access works like a charm!) I used MySQL for a while; and considered Interbase. I moved from MySQL because you can't update one table using data from another table via the standard update SQL statement in MySQL. I never went the Interbase route because at that time: 1. It only imported fixed-length text tables; and 2. The email list I joined seemed dead. Things may have changed since then. The online documentation for PostgreSQL has been very helpful; and the support on this email list is very good. I hope this helps, Andrew Gould --- Ivan Babikov [EMAIL PROTECTED] wrote: Hello All! Maybe this is one of those typical questions, but I have heard people complaining that Postgres is just for quite small bases, less than 2Gb. Now, we have to choose a free database engine for an inexpensive branch of our project and Interbase looks better in sense of capability to work with quite big bases (up to 10-20Gb). I am not sure that Postgres will work fine with databases greater than 10Gb. What does All think of it? Thanks in advance, Ivan Babikoff. __ Do You Yahoo!? Make international calls for as low as $.04/minute with Yahoo! Messenger http://phonecard.yahoo.com/
Re: [GENERAL] COUNT(*) very slow on table with primary key
On Tue, 14 Aug 2001, Sheldon Hearn wrote: I've been having trouble with a slow query on a table that has a primary key (which I understand is supported in PostgreSQL by a UNIQUE non-NULL index). It's a SELECT COUNT(*) without a WHERE clause; really simple stuff. I would have thought the number of entries in the primary key index could simply be inspected and returned. EXPLAIN reports that a sequential table scan is being performed, which sounds right, since the actual operation takes about 10 seconds when there are about half a million rows in the table. The table's pretty wide (about 2.5KB), but has been subjected to a VACUUM ANALYZE operation in accordance with the advice given in the FAQ. Is this just how it is, or is there more stuff I could look at? I'm actually trying to speed up a more complex query, but figured it would be better to report (and get feedback on) the simplest case.

IIRC, because the tuple validity status information is stored in the data file and not the index, it's got to read the row from the data file to see if it's currently valid to your transaction (and the index becomes more expensive at that point, since you'll be reading all the heap rows anyway).
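When only a rough figure is needed, the row-count estimate that VACUUM/ANALYZE maintains in the system catalogs avoids the sequential scan entirely. A sketch (the table name is hypothetical):

```sql
-- Approximate row count from the planner statistics.
-- Only as fresh as the last VACUUM or ANALYZE on the table.
SELECT reltuples FROM pg_class WHERE relname = 'bigtable';
```

This is an estimate, not a transaction-accurate count, but for monitoring or pagination hints it is often good enough.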
Re: [GENERAL] Perfomance decreasing
Does it help if you drop and recreate the indexes, in addition to the vacuuming you're doing now? I think this was suggested not long ago on this list. Erwin Lansing [EMAIL PROTECTED] on 08/14/2001 04:38:59 AM To: [EMAIL PROTECTED] cc:(bcc: Wesley Sheldahl/Lex/Lexmark) Subject: Re: [GENERAL] Perfomance decreasing

On Tue, Aug 14, 2001 at 02:06:40PM +0600, Alexander Loginov wrote: Hello. I have a question about performance. I'm running PostgreSQL 7.1.2 on FreeBSD 4.3. For the first 1-2 days of running, performance is excellent. But after that, speed begins to decrease, and after a week of operation performance is 8-10 times worse than on the first day. I'm doing a vacuum periodically (once an hour), but performance still falls. After that I dump the database as a text file, run dropdb/createdb, and restore the database from the dump - performance is excellent again (for 1-2 days). Why does this happen? Should I use VACUUM ANALYZE instead of VACUUM?

I have actually the same problem, also FreeBSD 4.3, pgsql 7.1.2. I do use VACUUM ANALYZE quite often. The problem in the end gets so bad that perl jobs cannot perform any SELECTs, or at least they stop returning results before DBI times out. So far I have tracked the problem down to the size of the database in the filesystem, where problems start occurring when it exceeds 1.4 Gb. A dump/drop/create/restore reduces file size to approx. 350 Mb. Any pointers would be helpful, as a weekly dump/restore is not quite optimal :) /erwin -- Erwin Lansing --http://droso.org You've got mail
Re: [GENERAL] time interval question
Hi, You could do something along the lines of: SELECT foo, bar FROM baz WHERE date_part('hour', foo) >= 0 AND date_part('hour', foo) < 8; Simple, but effective. Only works for whole hours like in your example though. :-) Regards and best wishes, Justin Clift

Cefull Lo wrote: Given a timestamp value, e.g. 2001-08-18 06:01:23+08, how can I check whether it lies between 00:00:00 and 08:00:00, ignoring the date? -- content-type:kiss/hug
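If the range boundaries may include minutes and seconds too, comparing the time-of-day part directly is an alternative. A sketch with the same hypothetical names, assuming the server accepts a timestamp-to-time cast:

```sql
-- Keep rows whose time-of-day falls in [00:00, 08:00),
-- regardless of the date part.
SELECT foo, bar FROM baz
WHERE foo::time >= '00:00' AND foo::time < '08:00';
```

For ranges that cross midnight, the two comparisons would be combined with OR instead of AND.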
Re: [GENERAL] Problems with UPDATE
orf is a primary key. It is a unique string that links the tables. Essentially, the setup is this: I am presented with data called transcription profiles. These contain two columns: orf, which is a unique representation of a biological gene, and induction, which is a number. Now, unfortunately, these profiles do not contain all of the known orfs, and also they can sometimes contain orfs that do not exist. So, when I want to load a profile into my database, I need to select all the inductions from temp whose orfs exist in the master table. This master table, yeast1, is very big--perhaps 600 columns. Basically, I'm just adding columns to the database, and I'm looping it in Perl DBI. I wondered if there was a fast batch way to do that.

On a related note, I had another question for everyone: I need to do calculations with these induction numbers--I need to select the entire table of inductions, load them into a matrix, and pass them to a computational C program through perl/cgi. This step is obviously quite rate-limiting, as it takes a very long time to select all from such a large table (yeast1). Perhaps my database design is a little messed up--basically, I'm hoping someone will tell me the best way to load, store, and access this kind of data. Hope this clarifies! Thanks everyone, Evan

On Mon, 20 Aug 2001, Andrew Gould wrote: If orf is not associated with a primary key or unique index, the possible combinations might well exceed 14,000. For example, if 3 records in yeast1 and 4 records in temp share the same orf value, the query will update yeast1 12 times (i.e. the 3 yeast1 records will be updated 4 times each). Can you clarify (for the server or for us) the relationship between the tables? What primary keys or indexes do these tables have? Andrew --- Evan Zane Macosko [EMAIL PROTECTED] wrote: Hello everyone, I am having a strange problem using the UPDATE function in PostgreSQL.
I want to move a large number (6000) of entries in one table to another, so here is my syntax: UPDATE yeast1 SET c1 = temp.profile_1 FROM temp WHERE yeast1.orf = temp.orf; This query results in over 14000 entries being added! I end up with multiples of the same entry, which obviously is undesirable. Does anyone have any suggestions on how to fix this? Thanks! Evan
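If orf really is meant to be unique on both sides, enforcing that makes the UPDATE deterministic and surfaces any duplicate rows up front. A hedged sketch (the index names are made up):

```sql
-- Fails immediately if either table already contains duplicate orfs,
-- which is worth knowing before running the UPDATE.
CREATE UNIQUE INDEX temp_orf_idx ON temp (orf);
CREATE UNIQUE INDEX yeast1_orf_idx ON yeast1 (orf);

-- With uniqueness guaranteed, each yeast1 row now matches at most
-- one temp row, so it is updated exactly once.
UPDATE yeast1 SET c1 = temp.profile_1
FROM temp WHERE yeast1.orf = temp.orf;
```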
Re: [GENERAL] Problems with installation v 7.1.2
Hicnar [EMAIL PROTECTED] writes: Hello, I was trying to install PostgreSQL ver 7.1.2, for RH 6.2, but some packages depend on ssl.so.0 and crypto.so.0 libraries. The question is: what packages contain those libs??? openssl. -- Trond Eivind Glomsrød Red Hat, Inc.
[GENERAL] ERROR: relation_info: Relation 41069 not found
Hi, OK, I am lost. I added a field to a table; now when I try to do a select on that table, or open a view based on that table, I get the following message. I get it in phpPgAdmin, in PGAdmin, and in my VB program. Can someone tell me what I did... Thanks. PostgreSQL said: ERROR: relation_info: Relation 41069 not found Ryan
Re: [GENERAL] backend closing unexpectedly
Adam Kornick [EMAIL PROTECTED] writes: I have a moderately complex query running on small views (10-30 records) which used to work, but is now dying unexpectedly. Sounds like a backend core dump. Please look for the core file (should be under $PGDATA/base/yourdbnumber/) and send us a debugger backtrace. You might have to recompile with --enable-debug to get a useful (symbolic) backtrace. If you are feeling out of your depth with that request, instead put together a self-contained example that triggers the crash, and send it along. regards, tom lane
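Producing the requested backtrace typically looks like the session below; the install path and the database OID here are hypothetical, so substitute your own:

```
$ gdb /usr/local/pgsql/bin/postgres $PGDATA/base/16556/core
(gdb) bt
```

The `bt` output is what's useful to post; with --enable-debug it will show function names rather than bare addresses.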
RE: [GENERAL] Is it possible to work with big databases (up to 10Gb) under Postgres?
I believe if you throw decent hardware at it (the sort of hardware you need to throw at 10GB database to make it perform well) then it is definitely worth giving it a go. -Original Message- From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]] On Behalf Of Ivan Babikov Sent: Monday, 20 August 2001 9:06 PM To: [EMAIL PROTECTED] Subject: [GENERAL] Is it possible to work with big databases (up to 10Gb) under Postgres? Hello All! Maybe this is one of those typical questions, but I have heard people complaining that Postgres is just for quite small bases, less than 2Gb. Now, we have to choose a free database engine for an inexpensive branch of our project and Interbase looks better in sense of capability to work with quite big bases (up to 10-20Gb). I am not sure that Postgres will work fine with databases greater than 10Gb. What does All think of it? Thanks in advance, Ivan Babikoff.
[GENERAL] where is PostgreSQL log ?!?!
Hi, Hello, I'm new to using PostgreSQL. I'm trying to use Red Hat 7.1 and JServ with PostgreSQL, but I get an error: "An error occured while getting the authentication request". I used createuser to create 'simon' with a password, and created a db as 'simon'. The above error appears when connecting as 'simon' (with the password) to the 'simon' db. I'm trying to trace it from the log. There is /var/log/pgsql, which is 0 bytes. Am I looking at the right PostgreSQL log file? Thanks for reading this message!
[GENERAL] 7.0.3 on AIX 4.2.1.0.06
I got this cd from GreatBridge that has 7.0.3 on it, so I tried to do a configure on my AIX 4.2.1.0.06 machine. I have egcs and gnu make installed from the bull archives. configure says that xlc is not installed, and gcc (egcs) can't create executables. I wrote a simple hello program and it compiled fine. Ideas? Walter L. Preuninger II
[GENERAL] Re: Missing Sequence File
On Mon, 20 Aug 2001 18:19:49 + (UTC), Killian May [EMAIL PROTECTED]: Hi, I'm a little new to Postgres, and I'm having trouble dropping a sequence. I get the following error: ERROR: mdopen: couldn't open scotty_scottyid_seq: No such file or directory When I try to recreate the sequence I get the following: gsheet= create sequence scotty_scottyid_seq; ERROR: Relation 'scotty_scottyid_seq' already exists

Which version are you running? select version(); should give a precise answer. Show the command you are using to drop the sequence, like you do here for creating it. Can you get values from the sequence? Run select nextval('scotty_scottyid_seq') and show the output.
[GENERAL] Calling stored procedures.
Hi. I've created some stored procedures. For example:

create function validate_user(varchar, varchar) RETURNS integer AS '
DECLARE
    ret integer;
BEGIN
    SELECT count(*) INTO ret FROM users WHERE userid = $1 AND passwd = $2;
    RETURN ret;
END;
' LANGUAGE 'plpgsql';

When I try to call it with the command select validate_user('user', 'passwd'), I get this error message: Error Number: -2147467259 Error Description: Error while executing the query; ERROR: fmgr_info: function 19264: cache lookup failed I did some research and found that this error message occurs when a stored procedure refers to another function that has been dropped from the database, and that to avoid this error you must drop all objects referring to this stored procedure and recreate them. The problem is that I get this error message for NEWLY created procedures. Any ideas about this? I don't know what this OID 19264 is referring to. Thanks. John
[GENERAL] What's up with FTP?
Hi, Been trying to download the latest ODBC driver for two days now but keep getting max connection rejects. Alternatives? Thanks, -km
[GENERAL] Re: Searchable documentation (www.opensql.org)
Thanks for the suggestion. I may wait a few weeks until I can selectively search one set of documentation at a time, instead of all the indexed documents. That's why I've only indexed PSQL docs for now. Matt

Colin 't Hart [EMAIL PROTECTED] wrote in message news:9l0qin$1phr$[EMAIL PROTECTED]... Matthew, Maybe you should frame the pages from the source. You should also probably add Interbase and SAP DB, as these are also Open Source databases. Cheers, Colin
Re: [GENERAL] Real newbie question.
William D. Tallman [EMAIL PROTECTED] writes: When I am entering values into a table, and make a mistake, sometimes there is an error message, and sometimes the equal sign in the prompt becomes a single quote. Error messages I can puzzle out, but I have not discovered how to recover from the single quote in the prompt. I wind up having to kill the terminal and start all over. Can someone point me to the explanation of this? The single quote in the prompt means that you've typed an unclosed single quote. -Doug -- Free Dmitry Sklyarov! http://www.freesklyarov.org/ We will return to our regularly scheduled signature shortly.
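For the record, there is no need to kill the terminal: close the dangling quote, then clear the query buffer with \r. A psql session might look something like this (prompts will vary with the database name and version):

```
mydb=> select 'oops
mydb'> '
mydb-> \r
mydb=>
```

The ' in the continuation prompt disappears once the quote is balanced, and \r throws away whatever was accumulated in the buffer.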
Re: [GENERAL] New RPMS ?
On Monday 20 August 2001 18:09, Trond Eivind Glomsrød wrote: I am not talking just about PostgreSQL now. It is not nice to provide support for a number of customers without Unix and Linux culture and having to recompile everything from sources. :-( If you want newer packages, get Red Hat Linux 7.1. For some folks with older machines, 6.2 is as high as it goes. Ever install 7.1 on a 486? This fellow is in Brazil -- and may not have even a Pentium, for all we know. PostgreSQL 7.1.x, for the record, runs (well, it jogs at least) on even an old 486DX4-100 I had RH 6.2 installed on (48MB RAM). (That machine went on the mission field) -- Lamar Owen WGCR Internet Radio 1 Peter 4:11
[GENERAL] Re: Problems with UPDATE
On Mon, 20 Aug 2001 22:08:22 + (UTC), [EMAIL PROTECTED]: orf is a primary key. It is a unique string that links the tables. Essentially, the setup is this: I am presented with data called transcription profiles. These contain two columns: orf, which is a unique representation of a biological gene, and induction, which is a number.

So, you have a list of (orf, induction).

Now, unfortunately, these profiles do not contain all of the known orfs, and also they can sometimes contain orfs that do not exist. So, when I want to load a profile into my database, I need to select all the inductions from temp whose orfs exist in the master table. This master

Not sure what temp is... is this the list from above?

table, yeast1, is very big--perhaps 600 columns. Basically, I'm just

Not sure what the number of columns has to do with it. Do you have each orf in its own column for some reason? UPDATE is used to modify a previously existing row. Is that what you are wanting to do? Or are you wanting to insert new rows?

adding columns to the database, and I'm looping it in Perl DBI. I wondered if there was a fast batch way to do that.

This is what makes me think you are adding columns for new orfs. Seems like a bad idea to me if that is what you are doing. (Of course the whole thing has me quite confused, so...)

On a related note, I had another question for everyone: I need to do calculations with these induction numbers--I need to select the entire table of inductions, load them into a matrix, and pass them to a computational C program through perl/cgi. This step is obviously quite rate-limiting, as it takes a very long time to select all from such a large table (yeast1). Perhaps my database design is a little messed up--basically, I'm hoping someone will tell me the best way to load, store, and access this kind of data. Hope this clarifies!

You might try posting your schema... though then again, with 600 columns maybe that is not such a good idea.
Re: [GENERAL] Re: is this possible? it should be!
On Mon, Aug 20, 2001 at 04:56:29PM -0700, Tony Reina wrote: Perhaps GROUP BY will get you where you want to go: select count(*), a, b, c from a where d=2 group by a, b, c order by e limit 10; Here count(*) doesn't give the grand total count, i.e. the total there would be with no LIMIT. What would be nice is if pg would return 10 rows but declare the total row count at the bottom of the display. That way DBI could just do $n = $sql->total_rows; or something like that. I think it requires a major hack on Postgres, no? I don't think it would cost any additional CPU to return the total number of rows, since sorting needs to see all rows and hence knows the total.
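Until something like that exists, the usual workaround is simply a second query. A sketch against the example table from the message above:

```sql
-- Running both statements in one transaction keeps the count
-- consistent with the page of rows that follows it.
BEGIN;
SELECT count(*) FROM a WHERE d = 2;
SELECT a, b, c FROM a WHERE d = 2 ORDER BY e LIMIT 10;
COMMIT;
```

The extra count query does cost a second scan, which is the price of getting the total without the LIMIT applied.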
[GENERAL] backend closing unexpectedly
Folks, I have a moderately complex query running on small views (10-30 records) which used to work, but is now dying unexpectedly. The query is via AOLserver, but the same thing happens when using psql. Within psql, I get: pqReadData() -- backend closed the channel unexpectedly. This probably means the backend terminated abnormally before or while processing the request. The connection to the server was lost. Attempting reset: Failed. Within AOLserver, I get: Error: dbinit: error(localhost::acs,pqReadData() -- backend closed the channel unexpectedly. This probably means the backend terminated abnormally before or while processing the request. I'm running postgresql-7.1.2-5PGDG, on RedHat 7.1...thoughts? Thanks, Adam
[GENERAL] Re: unix timestamp
How can I use a unix timestamp as a data type? The datetime data type should work. You can find info about it at http://www.postgresql.org/idocs/index.php?functions-datetime.html and http://www.postgresql.org/idocs/index.php?datatype-datetime.html Here is some example code:

CREATE TABLE test (
    id      integer PRIMARY KEY,
    mytime  datetime
);

#include <time.h>
...
time_t mytime = time(NULL);
printf("time to be inserted into database: %s\n", ctime(&mytime));

/* insert row setting the time.  if you want to insert the current
   time you can pass 'now' to a datetime field */
sprintf(sql_str, "INSERT INTO test (id, mytime) VALUES (1, '%s')", ctime(&mytime));
res = PQexec(dbconn, sql_str);
...
/* get row extracting the time since the epoch */
sprintf(sql_str, "SELECT id, extract(epoch from mytime) FROM test WHERE oid = %d", PQoidValue(res));
...
mytime = atoi(PQgetvalue(res, 0, 1));
printf("time retrieved from database: %s\n", ctime(&mytime));
[GENERAL] How do you recover a postgres db?
The documentation is non-existent. Can someone tell me or point me to how to do a recovery? This is the only thing keeping me from using PostgreSQL over Oracle.
[GENERAL] Accessing the execution plan via the DBI interface
Is there a way to access the execution plan of a query using the DBI interface? Since EXPLAIN seems to write to STDERR, there is no way to use prepare/execute/fetch ... Does anybody know what to do best - I don't want to use a system call from inside Perl. Hans
Re: [GENERAL] How do you recover a postgres db?
Mark McWilliams [EMAIL PROTECTED] writes: The documentation is non-existent. Can someone tell me or point me to how to do a recovery? This is the only thing keeping me from using PostgreSQL over Oracle. What do you mean by recovery? I can think of at least three possibilities: (1) Restore from backup: simply feed the output from 'pg_dump' to 'psql' as the PG superuser. (2) Point-in-time recovery after a restore: PG doesn't do it yet. (3) Starting up the database after a system crash: just do it. PG will generally recover up to the last committed transaction. -Doug
Re: [GENERAL] Re: is this possible? it should be!
On Mon, 20 Aug 2001 [EMAIL PROTECTED] wrote: On Mon, Aug 20, 2001 at 04:56:29PM -0700, Tony Reina wrote: Perhaps GROUP BY will get you where you want to go: select count(*), a, b, c from a where d=2 group by a, b, c order by e limit 10; Here count(*) doesn't give the grand total count if there is no limit. What would be nice is if pg would return 10 rows but declare the total row count at the bottom of the display. This way DBI can just do $n = $sql->total_rows; or something like that. I think it requires a major hack on postgres? No? I don't think it will be any additional cpu cost to return total number of rows since sorting needs to know all rows and hence total number of rows.

I don't know if it's considered now, but that query (the original) doesn't necessarily require a sort step. With an index on (d, e) I think you could do the query using the index for both the constraint (d=2) and the ordering, stopping when you have 10 rows.
Re: [GENERAL] spool
Jova wrote: How do I spool my results to a file? I know there is a command spool on and off. How do I use it? In psql, use \o. There is also the command-line option -o. Check the manpage for psql for full details. -- Oliver Elphick[EMAIL PROTECTED] Isle of Wight http://www.lfix.co.uk/oliver PGP: 1024R/32B8FAA1: 97 EA 1D 47 72 3F 28 47 6B 7E 39 CC 56 E4 C1 47 GPG: 1024D/3E1D0C1C: CA12 09E0 E8D5 8870 5839 932A 614D 4C34 3E1D 0C1C For the Lord himself shall descend from heaven with a shout, with the voice of the archangel, and with the trump of God; and the dead in Christ shall rise first; Then we which are alive and remain shall be caught up together with them in the clouds, to meet the Lord in the air; and so shall we ever be with the Lord. I Thessalonians 4:16,17
Re: [GENERAL] GUI Application mode
mike wrote: Is there any GUI application mode for Unix that can run PostgreSQL? Thank you in advance. pgaccess (which is included with PostgreSQL source). -- Oliver Elphick[EMAIL PROTECTED] Isle of Wight http://www.lfix.co.uk/oliver
[GENERAL] GUI Application mode
Hi Folks, Is there any GUI application mode for Unix that can run PostgreSQL? Thank you in advance. -- Best Regards, Mike Web Programmer TKO Systems, Ltd.
Re: [GENERAL] where is PostgreSQL log ?!?!
Look at the startup script you used. Sodom wrote: Hi, Hello, I'm new in using postgreSQL, I'm now tring to use RedHat 7.1 , Jserv with postgresql, but there've got an error: An error occured while getting the authentication request as I've try to createuser 'simon' with password, create user db as 'simon' . the above error is using 'simon' + password and connection to 'simon' db I'm trying to trace that from the log. there've /var/log/pgsql which is 0 bytes . Am I open the right log file of postgresql ?? Thx for reading this message !! -- Joseph Shraibman [EMAIL PROTECTED] Increase signal to noise ratio. http://www.targabot.com
[GENERAL] Handling db errors within transactions
I have many DBI-based query sequences that look like this:

*** begin transaction ***
query #1: select ...
query #2: update ...
query #3: select ...
*** end transaction ***

If query #2 fails due to an exception (say, a duplicate key error), I can identify that error **per query** via the DBI API, but then there appears to be no possibility of even accessing the db again until/unless a rollback is issued. All subsequent queries result in the famous line NOTICE: current transaction is aborted, queries ignored until end of transaction block I know this is an issue that has been discussed previously and a true fix involves a ton of work that nobody really wants to do; I'm looking more for workaround ideas. One obvious (and very ugly) solution is to wrap every db access with logic to catch errors and additionally then issue a rollback and begin a new transaction to enable additional queries. I'd like to find a cleaner way to manage errors at a finer level of control granularity that still enables delegation of logic without major contortions. Does anyone have any experience with a cleaner manner in which to handle this issue? Regards, Ed Loehr
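One way to at least centralize the ugliness is a small helper that runs a whole sequence as one transaction and owns the rollback. A hedged sketch, assuming the handle was created with RaiseError => 1 (the helper name is made up):

```perl
# Sketch only: run a list of SQL statements in one transaction,
# rolling back and returning the error on any failure.
sub run_txn {
    my ($dbh, @sql) = @_;
    $dbh->begin_work;
    my $ok = eval {
        $dbh->do($_) for @sql;   # any failure raises via RaiseError
        $dbh->commit;
        1;
    };
    unless ($ok) {
        my $err = $dbh->errstr;
        $dbh->rollback;          # required before any further queries
        return (0, $err);
    }
    return (1, undef);
}
```

Callers then get per-sequence (rather than per-query) error granularity, but the connection is always left in a usable state.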
Re: [GENERAL] New RPMS ?
On Sunday 19 August 2001 22:10, Fernando Lozano wrote: Are there plans for 7.1.3 RPMS for Red Hat 6.2 and other systems using the older glibc and rpm format? Yes, once I have enough hard drives to have another system image installed. Due to several other issues, I won't be using either a VMware or similar system, nor do I dual-boot multiple distributions. I prefer having about a 3GB drive for just development -- and I currently do not have a Red Hat 6.2 system installed. You certainly ARE welcome to install RPM 3.0.5 or above on your own RH 6.2 box and rebuild the source RPM -- but I cannot at present do that. And, once a Red Hat 7.2, or 8.0, or whatever Red Hat comes out with next materializes, I will likely migrate my servers to it as well. My apologies. -- Lamar Owen
[GENERAL] Re: Syntax for wildcard selection
The field is, indeed, a char(17) field. This particular database is actually a copy of the same schema we use in our Informix applications. The definitions for that system were almost completely correct for creating the PostgreSQL version, thus many fields are defined as char(x). I shall try redefining those fields that are of variable length as varchar() and see what happens. Thanks...

The only thing I can think of is that you are using char() fields and the LIKE is getting confused by the trailing spaces. Certainly putting wildcards anywhere in the string works fine. What is the data type of your column? text and varchar() wouldn't suffer from the above problem. -- Martijn van Oosterhout [EMAIL PROTECTED]
Re: [GENERAL] Finding database names for overnight vacuum
Stephen Davies writes: I have a cron job which vacuums etc all of my databases but it does not work with 7.1.2 because of the change in structure of .../data/base. vacuumdb --all -- Peter Eisentraut [EMAIL PROTECTED] http://funkturm.homeip.net/~peter
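With vacuumdb the cron job itself reduces to a single line. A hypothetical crontab sketch (path and schedule are examples, not the poster's setup):

```
# Nightly at 03:00: vacuum and refresh statistics for every database.
0 3 * * * /usr/local/pgsql/bin/vacuumdb --all --analyze >/dev/null 2>&1
```

This also answers the "where is it documented?" part: the list of databases comes from the server itself (vacuumdb queries pg_database), so it keeps working across on-disk layout changes like the 7.1 reorganization of data/base.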