Re: [GENERAL] Help with transactions
Thanks for the reply. I've noticed a couple of things. I ran a couple of tests and at first I couldn't duplicate my error on some test tables. But then I added inheritance to one of the tables, and that's when I got the error again. It looks like there is an error when obtaining the sequence id (nextval) from the original table and using it in a table that has a foreign key to the original table, by inserting it into the table that inherits the original table, within a transaction. What I think is happening is that since the insert is on the inherited table, the foreign key doesn't see the insert into the original table until the transaction is committed.

Here is a sample of how I duplicated my error. Running

    SELECT insert_data('A123456789','A','A2345');

on the schema layout below, I get this error:

    ERROR: insert or update on table "table2" violates foreign key constraint "table2_fk_id_fk"

=START

CREATE TABLE table1 (
    id   serial NOT NULL,
    data character(10) NOT NULL
);

CREATE TABLE table2 (
    id        serial NOT NULL,
    fk_id     integer NOT NULL,
    more_data character(5) NOT NULL
);

CREATE TABLE inherit_table (
    even_more_data character(1) NOT NULL
) INHERITS (table1);

ALTER TABLE ONLY table1 ADD CONSTRAINT table1_pkey PRIMARY KEY (id);
ALTER TABLE ONLY table2 ADD CONSTRAINT table2_pkey PRIMARY KEY (id);
ALTER TABLE ONLY table2 ADD CONSTRAINT table2_fk_id_fk FOREIGN KEY (fk_id)
    REFERENCES table1(id) ON UPDATE RESTRICT ON DELETE RESTRICT;

CREATE VIEW view_table1 AS
    SELECT table1.id, table1.data FROM table1;
CREATE VIEW view_table2 AS
    SELECT table2.id, table2.fk_id, table2.more_data FROM table2;
CREATE VIEW view_inherit_table AS
    SELECT inherit_table.id, inherit_table.data, inherit_table.even_more_data
    FROM inherit_table;

CREATE RULE view_table1_insert AS ON INSERT TO view_table1
    DO INSTEAD INSERT INTO table1 (id, data)
    VALUES (new.id, new.data);
CREATE RULE view_table2_insert AS ON INSERT TO view_table2
    DO INSTEAD INSERT INTO table2 (id, fk_id, more_data)
    VALUES (new.id, new.fk_id, new.more_data);
CREATE RULE view_inherit_table_insert AS ON INSERT TO view_inherit_table
    DO INSTEAD INSERT INTO inherit_table (id, data, even_more_data)
    VALUES (new.id, new.data, new.even_more_data);

CREATE FUNCTION insert_table2 (integer, character) RETURNS integer AS '
DECLARE
    table2_id INTEGER;
    table1_id ALIAS FOR $1;
    newdata   ALIAS FOR $2;
BEGIN
    table2_id = nextval(''table2_id_seq'');
    INSERT INTO view_table2 (id, fk_id, more_data)
        VALUES (table2_id, table1_id, newdata);
    RETURN table2_id;
END;
' LANGUAGE plpgsql SECURITY DEFINER;

CREATE FUNCTION insert_inherit_table (character, character) RETURNS integer AS '
DECLARE
    table1_id          INTEGER;
    newdata            ALIAS FOR $1;
    new_even_more_data ALIAS FOR $2;
BEGIN
    table1_id = nextval(''public.table1_id_seq'');
    INSERT INTO view_inherit_table (id, data, even_more_data)
        VALUES (table1_id, newdata, new_even_more_data);
    RETURN table1_id;
END;
' LANGUAGE plpgsql SECURITY DEFINER;

CREATE FUNCTION insert_data (character, character, character) RETURNS boolean AS '
DECLARE
    newdata1  ALIAS FOR $1;
    newdata2  ALIAS FOR $2;
    newdata3  ALIAS FOR $3;
    table1_id INTEGER = 0;
    table2_id INTEGER = 0;
BEGIN
    table1_id = insert_inherit_table(newdata1, newdata2);
    RAISE LOG ''Table1 ID: %'', table1_id;
    table2_id = insert_table2(table1_id, newdata3);
    IF table2_id > 0 THEN
        RETURN TRUE;
    ELSE
        RETURN FALSE;
    END IF;
END;
' LANGUAGE plpgsql SECURITY DEFINER;

END===

Also, in my original schema I'm getting an increment of 2 every time I run nextval. I can't duplicate this yet, but I'm looking into it. Possibly there's an error somewhere in my function.

Thanks
- Stephen Howie

Michael Fuhr wrote:
> On Fri, Mar 18, 2005 at 09:22:52AM -0500, Stephen Howie wrote:
> > I have a java program that accepts print streams and inserts them into a
> > spool table as a bytea. This fires a pl/pgsql trigger that passes the
> > bytea to a pl/perl function to process the bytea and spit the results
> > back as an array. It then proceeds to insert the data into multiple
> > tables. Problem is that, of the tables the data is inserted into inside
> > this transaction, one has a foreign key to the other. As you can guess I
> > get a foreign key violation because the transaction is not committed
>
> A transaction doesn't need to be committed for operations to see the
> effects of previous operations in the same transaction, but there could be
> visibility problems related to what happens when. Could you post the
> simplest self-contained example that demonstrates the problem? It'll be
> easier to understand the interactions if we can see the exact co
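For what it's worth, the failure in this thread can be reproduced without the functions, views, or even an explicit transaction: a foreign key that REFERENCES table1(id) is checked against table1 itself, and PostgreSQL's foreign-key checks do not see rows that physically live in a child table such as inherit_table (a documented caveat of inheritance, and a more likely culprit here than transaction visibility). A minimal sketch, assuming the table1/table2/inherit_table definitions from the schema in this thread:

```sql
-- the row is physically stored in inherit_table and only shows up
-- in table1 through inheritance
INSERT INTO inherit_table (id, data, even_more_data)
    VALUES (nextval('table1_id_seq'), 'A123456789', 'A');

-- a plain SELECT scans children too, so the row is visible here...
SELECT id, data FROM table1;

-- ...but the FK check behaves like a query against ONLY table1,
-- so this fails even with no transaction block at all:
INSERT INTO table2 (id, fk_id, more_data)
    VALUES (nextval('table2_id_seq'), currval('table1_id_seq'), 'A2345');
-- ERROR:  insert or update on table "table2" violates foreign key
--         constraint "table2_fk_id_fk"
```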
[GENERAL] Help with transactions
Hello all,

I have a java program that accepts print streams and inserts them into a spool table as a bytea. This fires a pl/pgsql trigger that passes the bytea to a pl/perl function to process the bytea and spit the results back as an array. It then proceeds to insert the data into multiple tables. Problem is that, of the tables the data is inserted into inside this transaction, one has a foreign key to the other. As you can guess I get a foreign key violation because the transaction is not committed, and as far as I understand PostgreSQL does not support dirty reads or nested transactions. I have two questions:

1) Is there another way to handle this transaction that would resolve this violation without using dirty reads?
2) It looks like a trigger run after insert on a table does not run as a separate transaction. Is the insert to that table supposed to fail if the trigger fails? To me that defeats the purpose of having a trigger after insert.

Thanks for any help

--
- Stephen Howie
Re: [GENERAL] 50 MB Table
On Mon, 6 Mar 2000, JB wrote:

> [SNIP]
> CREATE TABLE info (
>     lastname char(50),
>     street_name char(50),
>     street_number char(5),
>     ... (a bunch of other stuff that works fine with '=')
> );
>
> CREATE INDEX nx_info1 ON info (lastname);
> CREATE INDEX nx_info2 ON info (street_name);
>
> The select is as simple as this in most cases...
>
> SELECT * FROM info WHERE street_name LIKE 'MAIN%';
> [SNIP]

might want to try CLUSTERing the table based on one of the indexes. that'll put everything in order ( physically ) and should speed it up a bit. id also suggest grabbing more ram while its (relatively) inexpensive.

> [SNIP]

--- Howie <[EMAIL PROTECTED]> URL: http://www.toodarkpark.org "You do not have the right to free health care. That would be nice, but from the looks of public housing, we're just not interested in health care."
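A rough sketch of the CLUSTER suggestion above, using the names from JB's schema (CLUSTER is a one-shot physical sort, so it needs re-running after heavy churn; whether `LIKE 'MAIN%'` can use nx_info2 at all also depends on the locale/operator class of the index):

```sql
-- pre-8.3 syntax; newer releases spell it  CLUSTER info USING nx_info2;
CLUSTER nx_info2 ON info;

-- refresh planner statistics so it notices the new physical ordering
VACUUM ANALYZE info;

-- the prefix scan should now touch far fewer heap pages
EXPLAIN SELECT * FROM info WHERE street_name LIKE 'MAIN%';
```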
Re: [GENERAL] Deleting large objects sans index
On Wed, 16 Feb 2000 [EMAIL PROTECTED] wrote:

> What's the quickest way to delete all PostgreSQL large objects? Is there a
> system table that lists the oids of existing large objects? I expect the
> command is something like:
>
> => select lo_unlink(SOME_ATTRIBUTE) from SOME_SYSTEM_TABLE
>
> but I don't know which system table and which attribute! I'm tempted to do:
>
> % /bin/rm -f /usr/local/pgsql/base/DBNAME/xin[xv]*
>
> but I suspect that's a bad idea. In desperation, I might have to do:
>
> % destroydb DBNAME
>
> but I want to save that as a last resort.

i dont know if this is 'safe', but:

  select lo_unlink( int4( substr(relname,5) ) ) from pg_class where relname like 'xinv%';
  select substr(relname,5) from pg_class where relname like 'xinv%';

produces:

  ircbot=> select relname,substr(relname,5) from pg_class where relname like 'xinv%';
    relname  | substr
  -----------+--------
  xinv6576385|6576385
  xinv6576402|6576402
  xinv6576449|6576449
  xinv6576479|6576479
  xinv6605697|6605697
  xinv6690177|6690177
  xinv6690206|6690206
  xinv6690253|6690253
  xinv6690268|6690268
  xinv6788971|6788971
  (10 rows)

although the oid ( by itself ) should be present in one of the pg_* tables...

> More general question: I ended up in this quandary because of a goof-- I
> created a table with an oid field and then created several large objects
> "linked" to the table (of course, the large objects weren't part of the
> table-- the table just contained the oids of the large objects--
> nonetheless, I thought of the large objects as 'belonging' to the table).
> Then I foolishly did a "delete from table;" without deleting the large
> objects first-- this left me with a whole bunch of large objects to which
> I had no reference. Is there a general way to a) avoid this sort of thing
> (triggers??) and/or b) clean up the mess after something like this
> happens?

there's a library called 'lo' in $PGSQL_SRC_ROOT/contrib/lo/ that handles automagic deletes of LO's if the corresponding oid is removed.
the release in 6.5.2 had a bug where it didnt check for NULL oids, though. i havent taken the time to patch it and mail it back to the dev team. --- Howie <[EMAIL PROTECTED]> URL: http://www.toodarkpark.org "You do not have the right to free health care. That would be nice, but from the looks of public housing, we're just not interested in health care."
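Until contrib/lo grows that NULL check, the same automagic cleanup can be sketched as a plpgsql trigger that guards against NULL itself. Table and column names here are invented for illustration, and on very old releases the trigger function's return type is spelled `opaque` rather than `trigger`:

```sql
-- a table whose "img" column holds a large-object OID
CREATE TABLE images (id int4 PRIMARY KEY, img oid);

CREATE FUNCTION images_lo_cleanup() RETURNS trigger AS '
BEGIN
    -- the NULL guard the 6.5.2 contrib trigger was missing
    IF OLD.img IS NOT NULL THEN
        PERFORM lo_unlink(OLD.img);
    END IF;
    RETURN OLD;
END;
' LANGUAGE 'plpgsql';

CREATE TRIGGER images_lo_cleanup_tg
    AFTER DELETE ON images
    FOR EACH ROW EXECUTE PROCEDURE images_lo_cleanup();
```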
Re: PostgreSQL Portable Runtime (was Re: [GENERAL] Future of PostgreSQL)
On Thu, 30 Dec 1999, Robert wrote:

> Hi,
>
> one of the important factors that contributed to the popularity and success
> of Apache, Perl, Tcl/Tk etc. was their platform independence. I'm a big fan
> of Unix (and even bigger of Postgres ;-), but BeOS, MacOS X, even Win2000
> all look quite interesting too and I don't want to tie myself to just one
> platform.

MacOS X has a Unix core ( Mach 3.0 + FreeBSD ). a few people are looking into a port to MacOS X DP2 (Developer Preview, heavily NDA'ed), but they're not sure if the guts are 'feature frozen' yet. MacOS X CR1 (Customer Release) supposedly ships ~feb 2k. id expect that the port would be relatively painless, but i'm not 100% positive. Mach would be The Big Hurdle (no pun intended) in getting pgsql to work right on the MacOS X/OS X Server platform.

David Wetzel ( www.turbocat.de ) has a working EOAdaptor for MacOS X Server, OPENSTEP/Mach, and OPENSTEP/NT. ive been using it for quite a few internal projects under MacOS X Server. works great.

> [SNIP]

--- Howie <[EMAIL PROTECTED]> URL: http://www.toodarkpark.org "I've learned that you cannot make someone love you. All you can do is stalk them and hope they panic and give in."
Re: [GENERAL] pgsql 7.x...
On Sun, 26 Dec 1999, Mike Mascari wrote:

> Howie wrote:
>
> > will this function/index problem be fixed in 7.x ?
> >
> > ircbot=> explain select * from logins where dttime = NOW();
> [SNIP]
> emptoris=> explain select * from sales where saledate = 'now'::datetime;
> NOTICE: QUERY PLAN:
>
> Index Scan using k_sales4 on sales (cost=2.80 rows=17 width=140)
>
> EXPLAIN
> emptoris=> explain select * from sales where saledate='now';
> NOTICE: QUERY PLAN:
>
> Index Scan using k_sales4 on sales (cost=2.80 rows=17 width=140)
> [SNIP]

not really; just confuses me a bit more. is 'now()' not the same datatype as 'now' ?

  ircbot=> select now(),'now'::datetime,now()::datetime;
           now          |          ?column?          |          datetime
  ----------------------+----------------------------+----------------------------
  1999-12-27 04:25:35-05|Mon Dec 27 04:25:35 1999 EST|Mon Dec 27 04:25:35 1999 EST
  (1 row)

  ircbot=> explain select * from logins where dttime = now()::datetime;
  Seq Scan on logins (cost=33530.89 rows=71043 width=52)

  ircbot=> explain select * from logins where dttime = 'now'::datetime;
  Index Scan using logins_dttime_idx on logins (cost=2.54 rows=11 width=52)

  ircbot=> select now()::datetime = 'now'::datetime;
  ?column?
  --------
  t

isnt 'NOW()' supposed to return a datetime by default? regardless, shouldnt 'now()::datetime' be a datetime ? if so, why isnt my index on dttime being used when its a direct comparison ?

--- Howie <[EMAIL PROTECTED]> URL: http://www.toodarkpark.org "I've learned that you cannot make someone love you. All you can do is stalk them and hope they panic and give in."
[GENERAL] pgsql 7.x...
will this function/index problem be fixed in 7.x ?

  ircbot=> explain select * from logins where dttime = NOW();
  NOTICE: QUERY PLAN:

  Seq Scan on logins (cost=33530.89 rows=71043 width=52)

  EXPLAIN
  ircbot=> explain select * from logins where dttime = NOW()::datetime;
  NOTICE: QUERY PLAN:

  Seq Scan on logins (cost=33530.89 rows=71043 width=52)

  EXPLAIN
  ircbot=> select now();
           now
  ----------------------
  1999-12-27 00:23:17-05
  (1 row)

  ircbot=> explain select * from logins where dttime='1999-12-27 00:23:17-05'::datetime;
  NOTICE: QUERY PLAN:

  Index Scan using logins_dttime_idx on logins (cost=2.54 rows=11 width=52)

  EXPLAIN

( logins actually has 755,728 rows right now )

--- Howie <[EMAIL PROTECTED]> URL: http://www.toodarkpark.org "I've learned that you cannot make someone love you. All you can do is stalk them and hope they panic and give in."
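For what it's worth, the planner of that era treated NOW() as an ordinary function call rather than a constant, so the expression couldn't be matched against the index, while 'now'::datetime is folded to a literal at parse time. A hedged sketch of two workarounds (table and column names taken from the thread; the plpgsql body uses the old single-quoted style):

```sql
-- workaround 1: a quoted literal is constant-folded, so
-- logins_dttime_idx can be used
SELECT * FROM logins WHERE dttime = 'now'::datetime;

-- workaround 2: capture the timestamp in a plpgsql variable first;
-- the variable reaches the query as a parameter, not a function call
CREATE FUNCTION logins_right_now() RETURNS int4 AS '
DECLARE
    ts datetime;
    n  int4;
BEGIN
    ts := now();
    SELECT count(*) INTO n FROM logins WHERE dttime = ts;
    RETURN n;
END;
' LANGUAGE 'plpgsql';
```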
Re: [GENERAL] Future of PostgreSQL
On Sat, 25 Dec 1999, Bruce Momjian wrote:

> > > On Sat, 25 Dec 1999, Bruce Momjian wrote:
> > > My big question is, what new challenges will we face as
> > > we become more popular?
> >
> > Plug-in Oracle 7 compatibility.
>
> I believe we are adding Oracle compatibility as possible. We are
> working on write-ahead log, long tuples, foreign keys, and outer joins.
> Anything else?

tablespace support ( which isnt a trivial task ), groups ( pgsql has this sort of functionality already, but i dont think its to the extent that Oracle does ), some additional grants ( 'grant connect to' ), 'alter table add constraint'... tablespace support would put pgsql far ahead of most other rdbmses.

--- Howie <[EMAIL PROTECTED]> URL: http://www.toodarkpark.org "I've learned that you cannot make someone love you. All you can do is stalk them and hope they panic and give in."
Re: [GENERAL] Mail to DB.
On Sat, 27 Nov 1999, Jason C. Leach wrote:

> hi,
>
> I've read that a few of you are putting email into a postgres DB. I'd
> be interested in doing something similar. Would any of you care to
> share the secret that allows sendmail to deposit the email to a db, or
> how you get it from /var/spool/mail/mailbox into the table when new mail
> arrives?

a friend and i were just talking about this... you could use procmail to spawn a program that inserts the data into pgsql. youd most likely want to use LO's for the email body, storing To:, From:, Cc:, and Subject: in a table ( for queries ).

--- Howie <[EMAIL PROTECTED]> URL: http://www.toodarkpark.org "Tell a man that there are 400 billion stars and he'll believe you. Tell him a bench has wet paint and he has to touch it."
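A sketch of the table side of that suggestion. All names here are invented; the procmail side would be a recipe along the lines of `:0` followed by `| /usr/local/bin/mail2db`, with that script feeding the parsed message to psql or a client library:

```sql
-- queryable headers in columns, message body as a large object
CREATE TABLE mail (
    id        serial PRIMARY KEY,
    mail_to   varchar(200),
    mail_from varchar(200),
    mail_cc   varchar(200),
    subject   varchar(200),
    body      oid              -- OID returned by lo_import()
);

-- roughly what the procmail-spawned script would run; server-side
-- lo_import() reads the file as the backend, so it must be readable there
INSERT INTO mail (mail_to, mail_from, subject, body)
    VALUES ('jason@example.org', 'howie@example.org', 'Re: Mail to DB.',
            lo_import('/tmp/msgbody.12345'));
```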
Re: [GENERAL] stored procedure revisited
On Wed, 13 Oct 1999, amy cheng wrote:

> > fact that it doesn't do something that most, if not all, commercially
> > available db systems do can work against us,
>
> i.e., portability and upgradability: imagine you want to change that
> M$ system into Pg, or, I hate to say this, but somehow if your
> success is so big that you can not live with Pg, you need go to O etc.
> then, true SP will make things really easy (just syntax change, you may even
> just use our open source facility -- I'm sure there will be, since PL/pgSQL
> are so close to other PL). In my own case, when I begin to use PL/pgSQL, I
> put some thinking on the second aspect, I bet
> others also did that. A true SP will make it more inviting.

actually, one would hope that the system has its db independence in the application layer rather than the database layer. for instance, using something like NeXT's Enterprise Objects Framework to fetch rows from the db and translate the rows into objects, you only deal with the objects. The whole datastore, at this point, becomes irrelevant since you rarely deal with the underlying SQL -- EOF takes care of all that for you. Instead, you say "hey, i want all the objects that have their personName ivar equal to Amy" ( "personName = 'Amy'" ). I'm fairly positive that Sun's Java equivalent of EOF ( 'Entity Javabeans', iirc ) does the same sort of thing.

keeping inserts/selects/etc in stored procedures would still require a rewrite of all the stored procedures when moving to another db vendor, which may or may not be a large problem depending on that vendor's implementation of stored procedures and SQL in general. granted, you wouldnt have to completely gut the application and rewrite the whole bloody thing, but since your app is already going to have some of it rewritten ( cant use an OCI call on postgresql ), i think it'd make more sense to abstract things further by putting all the logic into your objects, EOF or Entity Javabeans, rather than in the db.
so now lets talk code reuse. both options would give you about the same level of code reuse, but in two completely different ways. stored procedures ( and company/DBA policy ) pretty much force the user to take advantage of them rather than doing raw inserts, selects, etc on the underlying tables. EOF forces you to deal with the objects rather than sql. either way, all of your business logic is in one location. by using a higher-level language, however, you wouldnt have to deal with tedious pl/sql-ish programming. one could also argue that having 20+ different stored procedures is really no better than memorizing the business logic and duplicating that in the application, bypassing the procedures altogether. if you have to deal with developing on one dbms and deploying on another dbms, EOF starts to look even more beautiful -- since your logic is in the objects, not the db, nothing will have to be ported to the new dbms. in fact, all you really need to do is change the EOModel; all of your code can remain in binary form. 'problems' with EOF-ish approaches include having to distribute your framework ( think library ) along with your app, which youd have to do anyway seeing that your objects are in that framework/package. stored procedures wouldnt have to be shared outside of the dbms ( obviously ). personally, i find it a LOT easier to deal with EOF objects rather than a potentially large PL/SQL ( or PL/pgSQL ) procedure. what'd be interesting is to compare the use of stored procedures to EOF or EOF-ish alternatives, using the same data & schema, ofcourse. NeXT/Apple has a sample db, sample data, and examples of how one can use EOF's features to augment/replace stored procedures in the dbms. 
(java)

  public void validateForDelete() throws EOValidation.Exception {
      if( !isPaid() ) {
          throw new EOValidation.Exception("You can't remove an unpaid fee");
      }
      super.validateForDelete();
  }

(objective-c)

  - (NSException *)validateForDelete
  {
      if( ![self isPaid] )
          return [NSException validationExceptionWithFormat:@"You can't remove an unpaid fee"];
      return [super validateForDelete];
  }

and yes, i do realize that not everyone has the option of using EOF/Javabeans... nobody's perfect :)

> [SNIP]
> However, I would like to see data warehouse (or more moderately and
> accurately data mart) support also -- the point: the priority?

so either (A) work on implementing tablespaces or (B) donate some money to postgresql, inc.

--- Howie <[EMAIL PROTECTED]> URL: http://www.toodarkpark.org "Just think how much deeper the ocean would be if sponges didn't live there."
Re: [GENERAL] Foreign Key
On Wed, 6 Oct 1999, Herouth Maoz wrote:

> At 01:10 +0200 on 06/10/1999, Howie wrote:
>
> > for now, refint ( $PGSQL_SRC_ROOT/contrib/spi/ ) is what one should be
> > using for foreign keys. requires two triggers, one on the parent and one
> > on the child. works nicely.
>
> Does it? I was under the impression that it supported cascading deletes but
> not cascading updates.

CREATE SEQUENCE employee_seq START 1 INCREMENT 1;
CREATE SEQUENCE expense_seq START 1 INCREMENT 1;

CREATE TABLE employee (
    emp_id   int4 not null default nextval('employee_seq'),
    emp_name varchar(30) NOT NULL,
    PRIMARY KEY (emp_id)
);

CREATE TABLE emp_expense (
    emp_id     int4 not null,
    expense_id int4 not null default nextval('expense_seq'),
    descr      varchar(100) NOT NULL,
    ondate     date not null,
    primary key (expense_id)
);

CREATE TRIGGER expense_empid_fk BEFORE INSERT OR UPDATE ON emp_expense
    FOR EACH ROW EXECUTE PROCEDURE
    check_primary_key('emp_id', 'employee', 'emp_id');

CREATE TRIGGER employee_empid_propk AFTER DELETE OR UPDATE ON employee
    FOR EACH ROW EXECUTE PROCEDURE
    check_foreign_key('1', 'cascade', 'emp_id', 'emp_expense', 'emp_id');

caffeine=> select * from employee;
emp_id|emp_name
------+--------
     2|Myself
(1 row)

caffeine=> select * from emp_expense;
emp_id|expense_id|descr  |ondate
------+----------+-------+----------
     2|         1|Test   |10-06-1999
     2|         2|Test #2|10-06-1999
(2 rows)

caffeine=> update employee set emp_id=5;
UPDATE 1
caffeine=> select * from emp_expense;
emp_id|expense_id|descr  |ondate
------+----------+-------+----------
     5|         1|Test   |10-06-1999
     5|         2|Test #2|10-06-1999
(2 rows)

caffeine=> select version();
version
-------------------------------------------------------
PostgreSQL 6.5.0 on i686-pc-linux-gnu, compiled by egcc
(1 row)

--- Howie <[EMAIL PROTECTED]> URL: http://www.toodarkpark.org "Just think how much deeper the ocean would be if sponges didn't live there."
Re: [GENERAL] Foreign Key
On Tue, 5 Oct 1999 [EMAIL PROTECTED] wrote: > Hello everyone, > > I would just like to know if the Foreign key constraint feature should be a one > of a near release... Any idea? for now, refint ( $PGSQL_SRC_ROOT/contrib/spi/ ) is what one should be using for foreign keys. requires two triggers, one on the parent and one on the child. works nicely. --- Howie <[EMAIL PROTECTED]> URL: http://www.toodarkpark.org "Just think how much deeper the ocean would be if sponges didn't live there."
Re: [GENERAL] lo_* interface ...
On Tue, 31 Aug 1999, The Hermit Hacker wrote: > > I've been asked about the performance/stability of using BLOBs (lo_*) > under PostgreSQL, and having no experience with them myself, I'm looking > for examples of sites that are, including such stats like size of the > database, max BLOB size, performance and such... i was planning on moving a mysql database that makes extensive use of BLOBs to postgres, but the LO support is very space consuming. otherwise, pgsql is great, dont get me wrong ( this is actually the only db i run that's been left in mysql-land ). the lo support is stable, or at least it seemed to be, when i was using it. there was a NULL bug/problem with the lo package in the contrib dir, though. for automatic deletes when the lo's corresponding row was deleted, one would need to use the contrib/lo/ pkg and have a trigger on the table. unfortunately, this trigger goes nuts when the lo column is null. should be an easy fix; check for NULL before trying lo_unlink(). you'd want to use the lo pkg; it just makes life easier. the size of the database ( ie: tables ) doesnt get significantly larger since the LO is stored as an OID. there's a physical file, xinv_, under the db dir, however. i never dug into the code, but the file seemed to be some sort of custom structure/format; the imported object was 1.5k, but the xinv_ file was larger. if you need specifics i can get those for you. working with LO's was somewhat easy; lo_import() reads in the data, makes a file under the db dir, and returns an oid. lo_export() takes that oid and exports the data to the filesystem. unfortunately, that brings space considerations and fs performance into play; in our app, just viewing an image required querying the db ( granted ), exporting the object from the db into the filesystem, read()ing & displaying that object, then unlink()ing it. its a round-about way of doing it, but Oracle's pretty much the same. 
to physically remove a LO, one would need to lo_unlink() it or use the previously mentioned lo pkg in the contrib dir. overall, the filesize of the LO's ( when compared to the actual data we sent it ) and having to 'export' the LO into the filesystem were the two reasons that the db is still mysql-based. mysql does all the BLOB stuff internally, storing the data in the table. makes for a rather large table ( ours is currently just under 200m, the pgsql-based version came in at over 500m ) and some odd displays if one did a 'select *' from the blob-table, but otherwise works nicely. just fyi, db2 has the ability to store LONG ( aka blob ) data in a separate tablespace. might be something to look into once postgres supports tablespaces. else your db dir/partition fills up _very_ quickly. --- Howie <[EMAIL PROTECTED]> URL: http://www.toodarkpark.org "The distance between insanity and genius is measured only by success."
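The import/export round trip described above, as a short sketch; the table and file paths are hypothetical:

```sql
CREATE TABLE images (id int4 PRIMARY KEY, img oid);

-- import: server-side lo_import() reads a file, stores the object,
-- and returns its oid
INSERT INTO images VALUES (1, lo_import('/tmp/photo.jpg'));

-- viewing means exporting the object back out to the filesystem,
-- serving it, then unlinking the temp file -- the round-about part
SELECT lo_export(img, '/tmp/photo.out') FROM images WHERE id = 1;

-- deleting the row does NOT delete the object; lo_unlink() does
SELECT lo_unlink(img) FROM images WHERE id = 1;
DELETE FROM images WHERE id = 1;
```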
Re: [GENERAL] PostgreSQL table data structure generator...
On Tue, 31 Aug 1999, Marzullo Laurent wrote: > Hello again, > > I propose myself for developping a prog to generate C Source > code for table structure for a Postgres Table. > > If someone have something to say about it (features wanted, new > langage, etc) (s)he's welcome. And if someone have already done > it, please stop me now. > > Aim of the project: > > Generate C/C++ Data structure for a set of table from a PostgreSQL dB. > The C data structure will be : > Struct or Class containing all the column of the table. or you could use GNUstep's ( http://www.gnustep.org ) database library ( aka Enterprise Objects Framework 1.x ), which provides an OO wrapper for database independent stuff. you dont have to deal with SQL; you deal with the objects. quite nice. NeXT ( now Apple ) has been doing that for quite some time now. --- Howie <[EMAIL PROTECTED]> URL: http://www.toodarkpark.org "The distance between insanity and genius is measured only by success."
Re: [GENERAL] Large Object questions...
On Fri, 30 Jul 1999, John Huttley wrote:

> > You just haven't met the right application yet.
>
> in the manual there is a discussion on other methods that were previously
> used.
>
> The PG system is good, with minor limitations in the lo* API.
>
> I'm busy writing a faxserver application where all the fax page data is
> stored as a blob.
>
> Its just so easy to use...

ahhh, found a 'lo' library in contrib/lo/ which seems to handle most of my needs. had to put a trigger on the table that removed the associated object when its row was removed. the one thing i miss is the ability to determine a lo's size ( hinthintnudgenudgewinkwink ).

--- Howie <[EMAIL PROTECTED]> URL: http://www.toodarkpark.org "The distance between insanity and genius is measured only by success."
[GENERAL] Re: [SQL] referential integrity
On Wed, 4 Aug 1999, Ramanika wrote: > I did not see this in the documentation anywhere. Does postgresql > support referential integrity? like when creating a table REFERENCES > table(column_id)? not directly, no. at least not yet :) 6.6 is rumoured to have full support. for now you can use refint ( $PGSQL_SRC_ROOT/contrib/spi/refint.* ), which handles referential integrity. check_primary_key() and check_foreign_key() are the two functions involved. youd also need two triggers, one per table. i could've sworn this got added to the FAQ... --- Howie <[EMAIL PROTECTED]> URL: http://www.toodarkpark.org "The distance between insanity and genius is measured only by success."
[GENERAL] Large Object questions...
im in the process of converting a client from mysql to postgres. this client has been storing images ( jpegs, gifs ) in their mysql database in a BLOB field. the image is then shown to their staff via the web. oddly enough, no, its not porn. so, in the process of doing all these massive imports and exports, i think i stumbled across a bug ( or what i think is a bug ) in postgres 6.5. as i said, there are a bunch of these images imported via lo_import(). when doing a 'delete from ', all the rows are wiped... but the images imported via lo_import() are still in the database's directory ( xinv* ). they're also still in the pg_type table. shouldnt the oid's ( and their corresponding xinv* files ) have been removed when i deleted the rows from the table? how does one remove these large objects from both the pg_type table and the disk ( removing the xinv* files ) ? --- Howie <[EMAIL PROTECTED]> URL: http://www.toodarkpark.org "The distance between insanity and genius is measured only by success."
Re: [GENERAL] Foreign Keys in PostgreSQL
On Thu, 15 Jul 1999, Simeó wrote:

> How can I implement foreign Keys with postgres? thanks.

pgsql/contrib/spi has refint.c. you'd want to compile that and execute refint.sql. it also comes with documentation. refint is, for now, the way postgres handles foreign keys. two functions are involved, check_foreign_key() and check_primary_key().

--- Howie <[EMAIL PROTECTED]> URL: http://www.toodarkpark.org "The distance between insanity and genius is measured only by success."
Re: [GENERAL] New FAQ item
On Sun, 11 Jul 1999, Kaare Rasmussen wrote:

> > id rephrase this to include 'inserts/updates' -- 6.5 is comparable to
> > mysql for selects, given the proper indexes.
>
> Is this tested, or do you just hope so?

tested against a 500,000 row table. pgsql seems to excel when that table gains rows left and right, most likely due to the locking: when doing a massive number of inserts from different processes along with a rather large select, mysql had the tendency to crash ( client app lost connection ). postgres will slow down a little, but all the data eventually gets into the table.

> > id also stress that postgres supports (fully?) SQL92, triggers,
>
> I believe there's still some way to go before SQL92 is fully supported.
> Isn't outer joins, views with unions and more part of SQL92?

hence the '(fully?)' bit. id have to look into the sql92 spec to see what's not implemented (yet) in postgres... im fairly positive somebody can answer this off the top of their head, however.

> Now I am at trying to be annoying, how good is the ODBC / JDBC in
> PostgreSQL? Can it measure up with MySQL? With Oracle?

try it and find out :) id like to know, but dont have any MS-Windows machines here.

--- Howie <[EMAIL PROTECTED]> URL: http://www.toodarkpark.org "The distance between insanity and genius is measured only by success."
[GENERAL] problem with PostgreSQL 6.5 on Linux
after installing pgsql-6.5 ( on a machine with pgsql-6.4.2 installed and running ), initdb creates a PG_VERSION file that still reports 6.4; subsequent psql connections fail to connect, complaining about "no compatible version of postgres found". before you ask: yes, ive made sure that im using 6.5's initdb. yes, ive made sure that initdb is looking in the 6.5 dir for its template. yes, its writing to a 6.5 dir. so, errr, what gives? --- Howie <[EMAIL PROTECTED]> URL: http://www.toodarkpark.org "The distance between insanity and genius is measured only by success."
Re: [GENERAL] bug in 6.4.2?
On Mon, 31 May 1999, Bruce Momjian wrote: > > > > i was recently importing a fairly large amount of data from a mysql to > > postgres ( 6.4.2 ). to make a long story short, this involved some > > renames ( alter table foo rename to bar ). two psql clients were running, > > accessing the same database and, in some cases, querying on the same > > table. when one tried to insert into one of the recently > > dropped-and-renamed tables, it produced an error saying that an attribute > > didnt exist for . a restart of that app fixed it. > > > > fyi, the error was "get_attisset: no attribute requests in relation > > 902432" while doing an insert. a select statement just returned 0 rows, > > even though there were 1 or 2 matching the query ( verified in the psql > > window i was working in, doing the renames ). > > > > is this a known problem ? do the clients cache OIDs or some such ? is > > there anything i can do, apart from the obvious ( heh ), to make sure > > this doesnt happen again ? > > In 6.4.*, table renaming was not properly flushing the cache, I think. > Should work fine when 6.5 is released. i had figured the client was caching OIDs. _really_ looking forward to 6.5, keep up the great work! --- Howie <[EMAIL PROTECTED]> URL: http://www.toodarkpark.org "Do a little dance, make a little code, compile tonight... compile tonight."
[GENERAL] bug in 6.4.2?
i was recently importing a fairly large amount of data from a mysql to postgres ( 6.4.2 ). to make a long story short, this involved some renames ( alter table foo rename to bar ). two psql clients were running, accessing the same database and, in some cases, querying on the same table. when one tried to insert into one of the recently dropped-and-renamed tables, it produced an error saying that an attribute didnt exist for . a restart of that app fixed it. fyi, the error was "get_attisset: no attribute requests in relation 902432" while doing an insert. a select statement just returned 0 rows, even though there were 1 or 2 matching the query ( verified in the psql window i was working in, doing the renames ). is this a known problem ? do the clients cache OIDs or some such ? is there anything i can do, apart from the obvious ( heh ), to make sure this doesnt happen again ? thanks in advance. --- Howie <[EMAIL PROTECTED]> URL: http://www.toodarkpark.org "Do a little dance, make a little code, compile tonight... compile tonight."
Re: [GENERAL] Why PostgreSQL is better than other commerial softwares?
On Thu, 17 Dec 1998, Albert Chen wrote:

> Hi,
>
> I'm using FreeBSD and PostgreSQL 6.4. I tell my advisor
> to try PostgreSQL, but he said: "Why is PostgreSQL better
> than other commercial softwares? If you could give me
> ten reasons, I will give it a try."
> Would anyone tell me what's the reason you like Postgres?
> Because it's free, powerful and have others, thanks.

i wouldnt say one thing is 'better' than the other. it all comes down to what you need done.

postgresql's pros:
  * highly user extensible
  * free source code ( a huge plus )
  * easy to work with
  * lots of opensource software supporting it

oracle's pros:
  * established company. sql is what they do. period.
  * supports damn near any SQL you can throw at it
  * fairly quick, even on HUGE databases ( terabytes )

oracle has some nice tools available, as does postgres. the difference is that postgres' tools are normally opensource/free whereas you pay, sometimes pay big bucks, for oracle's.

--- Howie <[EMAIL PROTECTED]> URL: http://www.toodarkpark.org "Oh my god, they killed init! YOU BASTARDS!"
Re: [GENERAL] slow queries
On Fri, 18 Sep 1998, Thomas Good wrote: > On Fri, 18 Sep 1998, David Hartwig wrote: > > > I would like to see it! I was not aware that table aliasing could > > have any impact on performance. > > [SNIP] > I was a bit amazed myself. Federico Passaro, on the SQL list, > helped me out some time ago when a query was failing. > His code worked so well that I filed it away for a rainy day... > this week I decided to try it and see if it helped hasten my > slowest query. It did. > [SNIP] seems that by creating a view ( with the query i mentioned before ), my queries were sped up by roughly 10 seconds... odd odd odd. --- Howie <[EMAIL PROTECTED]> URL: http://www.toodarkpark.org [[NSNotificationCenter defaultCenter] addObserver:systemAdministrator selector:@selector(disableUserAccount:) name:@"UserIsWhining" object:aLuser];