Re: [GENERAL] PostgreSQL Package
Birahim FALL wrote: It seems that there's no concept of a PACKAGE in PL/pgSQL as there is in Oracle PL/SQL. Is that definitely the case? Did I miss something, or is it planned for a future version?

Hello, this feature is on the TODO list: Commands -- SERVER-SIDE LANGUAGES (cf. http://developer.postgresql.org/todo.php). I also hope it will be added in a future version; I sometimes get this question from people who move from Oracle to PostgreSQL. There is a workaround: you can create schemas and add functions inside these schemas. See Shridhar's answer :-) Regards, --- Bruno BAGUETTE - [EMAIL PROTECTED] ---(end of broadcast)--- TIP 8: explain analyze is your friend
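A minimal sketch of the schema-as-package workaround (schema and function names are illustrative, not from the thread; 7.x-era quoted function bodies, since dollar quoting did not exist yet):

```sql
-- Use a schema the way Oracle uses a package: group related
-- functions under one name and call them as schema.function().
CREATE SCHEMA payroll;

CREATE FUNCTION payroll.net_salary(numeric, numeric)
RETURNS numeric AS '
    SELECT $1 * (1.0 - $2);  -- gross * (1 - tax_rate)
' LANGUAGE SQL;

-- Called with the "package" prefix, much as payroll.net_salary would be in PL/SQL:
SELECT payroll.net_salary(1000, 0.25);
```

Unlike an Oracle package, a schema gives you only the namespacing; there is no package-level state or initialization block.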
Re: [GENERAL] Trigger
On Thu, 23 Oct 2003, Alvaro Herrera wrote: On Thu, Oct 23, 2003 at 08:16:27AM +0100, Peter Childs wrote: On Wed, 22 Oct 2003, scott.marlowe wrote: On Wed, 22 Oct 2003, Peter Childs wrote: Is it possible to defer a trigger until commit, or to have the trigger not occur if the transaction is rolled back, like the transaction itself? Background: we are trying to get the database to tell clients when records get updated, deleted or inserted, so that they can update their on-screen displays without having to query the database every couple of seconds, which would put an unnecessary strain on the database, hence producing quicker response times.

You should probably be using an AFTER trigger ... when those get executed, the transaction is ready to commit and will not abort (barring any major problems, like your server going nuts or something).

This is not true, actually. AFTER triggers generally happen at the end of the statement (with the exception of deferred constraint triggers and some weirdness with functions) and can themselves throw an exception condition, so even barring internal problems, it's unsafe to assume that an exception won't happen after your trigger runs.
Re: [GENERAL] Setting up DSPACE for Postgres access
On Thursday 23 October 2003 14:20, Ashwin Kutty wrote: I am currently testing a product by the name of dspace (http://www.dspace.org) and require installing java since dspace is completely java based. The problem is that during its installation it needs to talk to pgsql via jdbc and I am getting the error: Exception occurred:org.apache.commons.dbcp.DbcpException: java.sql.SQLException: Driver not found for URL: jdbc:postgresql://dspace.library.dal.ca:5432/dspace org.apache.commons.dbcp.DbcpException: java.sql.SQLException: Driver not found for URL: jdbc:postgresql://dspace.library.dal.ca:5432/dspace It's complaining that it cannot find a driver. The obvious steps are: 1. Check you have your JDBC driver installed and configured. 2. Check you can connect using it. 3. Check it is in the relevant PATH How far along this process have you got? If you've done all those, we'll have to figure out why it can't find the driver. -- Richard Huxton Archonet Ltd
Re: [GENERAL] Setting up DSPACE for Postgres access
> It's complaining that it cannot find a driver. The obvious steps are:
> 1. Check you have your JDBC driver installed and configured.

Yes, I have done so; it's included in the CLASSPATH env var as well. I have gone so far as downloading it from jdbc.postgresql.org rather than using the compiled one I have. Still no luck.

> 2. Check you can connect using it.

Via a test program? Done, no luck. Same ol' same ol'.

> 3. Check it is in the relevant PATH

Besides the CLASSPATH, does it need to be in the PATH as well? If so, no, I haven't added it, but I will do so if it's required.

> How far along this process have you got?

I am actually at the point of pulling teeth, because I have added every jar file I can think of and find to the classpath, re-installed everything, and still the case remains the same. I am guessing it's just not finding it due to some foolish little quirk I am missing someplace. I even have tcp_ip turned on in postgres, btw, to accept connections both from the box itself and from another machine, through which I am using pgadmin and connecting fine.

> If you've done all those, we'll have to figure out why it can't find the driver.

I thought, and am still under the impression, that it's looking in the classpath for this. I have edited /etc/profile to include it and checked env before running the program; according to it, the jar file is included and the classpath does have it, but still nothing. Checked file permissions on the jar file, and they're fine as well. I have made changes to the user's bash_profile as well, to see if it's a user-related issue, and still nothing. Su'ed in as root and tried it, and yet nothing. It's like for some reason the file just can't be found. Any ideas and assistance would be greatly appreciated. Thanks.
Re: [GENERAL] Setting up DSPACE for Postgres access
Whenever I need to use the postgresql JDBC driver in my programs, I have to manually include the line: Class.forName("org.postgresql.Driver"); somewhere *before* I call DriverManager.getConnection. Try this in your test program and see if it works. Unfortunately, you may not be able to modify the installer to support this, but there might be another way? HTH! -- Dardo D. Kleiner Connection Machine Facility, Center for Computational Sciences Naval Research Laboratory (Washington, DC) [EMAIL PROTECTED] -- 202.404.7019
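A self-contained way to check whether the driver class is actually visible, along the lines of the Class.forName suggestion above (the harness class and method names here are invented for the example; "org.postgresql.Driver" is the standard driver class name):

```java
// Reports whether a named class (e.g. the PostgreSQL JDBC driver)
// is visible on the classpath. Run with e.g.:
//   java -classpath .:postgresql.jar DriverCheck
public class DriverCheck {
    // Returns "found" if the class can be loaded, "not found" otherwise.
    static String check(String className) {
        try {
            Class.forName(className);
            return "found";
        } catch (ClassNotFoundException e) {
            return "not found";
        }
    }

    public static void main(String[] args) {
        System.out.println("org.postgresql.Driver: "
                + check("org.postgresql.Driver"));
    }
}
```

If this prints "not found" when run with the same classpath the installer uses, the problem is the classpath; if it prints "found" but the installer still fails, the installer is likely building its own classpath internally.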
Re: [GENERAL] PostgreSQL v7.4 Beta5 Available for Testing
On Thu, 2003-10-23 at 01:49, Marc G. Fournier wrote: ... With suitable testing, and few bugs reported, we hope to get an RC1 out the door within the next 7 to 10 days, leading up to a full release of v7.4 ... ... so, if you haven't tested her out yet, please do so ... Debian packages are in the experimental part of the Debian archive. -- Oliver Elphick [EMAIL PROTECTED] Isle of Wight, UK http://www.lfix.co.uk/oliver GPG: 1024D/3E1D0C1C: CA12 09E0 E8D5 8870 5839 932A 614D 4C34 3E1D 0C1C I press toward the mark for the prize of the high calling of God in Christ Jesus. Philippians 3:14
Re: [GENERAL] Recomended FS (correction)
Mark Kirkwood wrote: I should have said that I was using FreeBSD 4.8 with write caching off. Correction: write caching *on*; I got myself confused about what the value 1 means.
Re: [GENERAL] Trigger
On Thu, 23 Oct 2003, Tom Lane wrote: Peter Childs [EMAIL PROTECTED] writes: Great idea; shame that drivers to get at these are rare. Anyway, you still need a trigger to fire the NOTIFY, and these get sent when the query is done, not when it's committed. Hmmm. But the NOTIFY isn't delivered until and unless the transaction commits. This gets around the AFTER-trigger-can-still-roll-back problem. regards, tom lane

NOTIFY is also not very flexible: it tells you that something has triggered it, not the information a trigger is supplied with, like what has changed, to what, from what. Peter Childs
Re: [GENERAL] Recomended FS
On Wed, 22 Oct 2003, Joshua D. Drake wrote: I believe that 3ware have a non-blocking implementation of ATA RAID - I intend to sell the Promise and obtain a 3ware in the next month or so and test this out. I use 3Ware exclusively for my ATA-RAID solutions. The nice thing about them is that they are REAL hardware RAID, and they use the SCSI layer within Linux, so you address them as a standard SCSI device. Also, their support is in the kernel... no weird, experimental patching. On a dual 2000 Athlon MP I was able to sustain 50MB/sec over large copies (4+ gigs). Very, very happy with them. Do they survive the power-plug-pulling test I was talking about elsewhere in this thread?
Re: [GENERAL] database files
On 22/10/2003 20:41 Gail Zacharias wrote: I need to be able to move the database files, as normal user-visible files, between machines. I.e. given a database on machine A, I want to be able to copy either a single file (ideally) or a single directory (less ideal but still ok) to, say, a zip drive, bring it over to another machine (with pgsql also installed), start up my application and have it access the copied database through pgsql. Is this sort of thing possible? Is a database stored in a single file or multiple files? Can the location of the file(s) be controlled? Are the files accessible and consistent while pgsql is running? I assume not all the time, but is there a reliable way to make them accessible (i.e. copyable) and consistent short of shutting down pgsql?

Yes; multiple; yes, up to a point; no; no. A possible mechanism would be:

- suppose you want to copy data from server A to server B
- server A is running
- server B is stopped
- checkpoint server A
- rsync the files from server A to server B
- stop the postmaster at A
- rsync again (should not take much time)
- start both postmasters

Note that between the two rsyncs the data on server B is not usable (i.e. it is corrupt). You _have_ to do the last rsync with A's postmaster stopped to make sure the files are right. Note that you have to copy the whole PGDATA, including pg_clog and pg_xlog. This means server B cannot have anything beyond what is on server A. You should probably discard the pg_dump route and erServer before trying to do this ... -- Alvaro Herrera (alvherre[a]dcc.uchile.cl) Major Fambrough: You wish to see the frontier? John Dunbar: Yes sir, before it's gone.
Re: [GENERAL] Trigger
On Thu, 23 Oct 2003, Peter Childs wrote: On Wed, 22 Oct 2003, scott.marlowe wrote: On Wed, 22 Oct 2003, Peter Childs wrote: Is it possible to defer a trigger until commit, or to have the trigger not occur if the transaction is rolled back, like the transaction itself? I think it's possible, since constraints use triggers; and if so, why is this not a standard feature? Also, is there any way of seeing what triggers exist and what they do? (psql \something or the like)

A trigger inside a transaction should automagically roll back should the transaction fail, shouldn't it?

Only if it only affects that database. If the trigger uses C to tell an outside app what's going on, it will not get the truth. Background: we are trying to get the database to tell clients when records get updated, deleted or inserted so that they can update their on-screen displays without having to query the database every couple of seconds, which would put an unnecessary strain on the database, hence producing quicker response times.

It might be more efficient and transactionally safe to write it all to a temp table, and have a daemon suck that data out every now and then and put it into another database that the feeders can interrogate as often as they like. That way you still get the ease of programming a transaction that's all or nothing, and since the daemon only runs every minute or two and batches up its access, the impact of the batching should be nominal. Or would that introduce other problems of its own?
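A sketch of the write-it-to-a-table idea (table, trigger, and function names are invented for illustration; 7.3-era syntax with a quoted function body):

```sql
-- Change log that a polling daemon can drain on its own schedule.
-- Rows from rolled-back transactions never become visible, so the
-- daemon only ever sees committed changes.
CREATE TABLE pending_changes (
    tablename  name,
    op         text,                      -- 'INSERT' / 'UPDATE' / 'DELETE'
    changed_at timestamptz DEFAULT now()
);

CREATE FUNCTION log_change() RETURNS trigger AS '
BEGIN
    INSERT INTO pending_changes (tablename, op)
        VALUES (TG_RELNAME, TG_OP);
    RETURN NULL;  -- return value of an AFTER trigger is ignored
END;
' LANGUAGE plpgsql;

CREATE TRIGGER accounts_log AFTER INSERT OR UPDATE OR DELETE
    ON accounts FOR EACH ROW EXECUTE PROCEDURE log_change();
```

The daemon then periodically selects and deletes from pending_changes, getting the batching benefit described above while staying inside transaction semantics.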
Re: [GENERAL] database files
On Wed, 22 Oct 2003, Gail Zacharias wrote: I need to be able to move the database files, as normal user-visible files, between machines. I.e. given a database on machine A, I want to be able to copy either a single file (ideally) or a single directory (less ideal but still ok) to, say, a zip drive, bring it over to another machine (with pgsql also installed), start up my application and have it access the copied database through pgsql.

While you may think this is the preferred method, for postgresql it most certainly is not. What you need to do is read up a bit on pg_dump and how to use it to accomplish your goals. For instance, suppose I have two machines, A and B, and I want to copy the table accounts from the test database on A to B. Assuming that the test database exists, but the table accounts doesn't, I can do this (note these are all command-line programs, not psql):

pg_dump -h A test -t accounts | psql -h B test

Or, if I want to move a whole single database over:

createdb -h B dbname
pg_dump -h A dbname | psql -h B dbname

(This assumes the database dbname didn't exist.) Or, the biggie, assuming B is a freshly initdb'd installation and I want to move ALL the databases from A to B:

pg_dumpall -h A | psql -h B

Moving individual database files around is a certifiably Bad Idea.
Re: [GENERAL] Setting up DSPACE for Postgres access
Looking at the dspace docs (system prerequisites): Then when PostgreSQL has compiled, copy the new postgresql.jar to dspace-1.1/lib. Strikes me as a bit strange that it won't just use the classpath, but have you got the jar in there too? Yes, and in j2sdk's lib directory and in pgsql's share directory, and it is all being called in the classpath... no luck, still.
Re: [GENERAL] Setting up DSPACE for Postgres access
Agreed; however, if it's working fine on someone else's end, then the problem is local to my environment. People do have difficulties setting up DSPACE, but I currently have the front-end running fine. The dependencies are just crazy, along with the third-party apps for DSPACE. However, all that is done, it's working, and the front-end is up. I have what most others don't, and don't have what most others do. The jar part of it really gets me, because all it needs is the CLASSPATH env var pointing to the appropriate jar, as I have done so many times in the past. I know Java is OK because I have Tomcat, also required for DSPACE, running fine on the server as well. I just don't get why the program cannot find the jar. The create-administrator script runs such that the call tends to be:

java -Xmx256m -classpath $FULLPATH org.dspace.administer.CreateAdministrator

where $FULLPATH=$CLASSPATH:$JARS:$DSPACEDIR/config and $JARS=`echo $DSPACEDIR/lib/*.jar | sed 's/ /\:/g'`

Now, postgresql.jar gets pointed to twice: once from the script via the JARS var, and a second time via the $CLASSPATH var set by me. I even echo the $FULLPATH var right before it hits the java command, and the echo brings up the huge path created by the script, containing the jar files and the directories. I don't get, however, how the script can have it in the $FULLPATH var but not use it when sent to the command line?

On Thu, 23 Oct 2003, Dardo D Kleiner wrote: Whenever I need to use the postgresql JDBC driver in my programs, I have to manually include the line: Class.forName("org.postgresql.Driver"); somewhere *before* I call DriverManager.getConnection. Try this in your test program and see if it works. Unfortunately, you may not be able to modify the installer to support this, but there might be another way? HTH! -- Dardo D. Kleiner, Connection Machine Facility, Center for Computational Sciences, Naval Research Laboratory (Washington, DC) [EMAIL PROTECTED] -- 202.404.7019 --- The difference between genius and stupidity is that genius has its limits. - Albert Einstein Ashwin Kutty, Systems Administrator, Dalhousie University Libraries (902) 494-2694
Re: [GENERAL] About TSearch2 Performance
First one will be a bit faster. Diogo Biazus wrote: Hi, is there any performance difference between the following SQL commands: SELECT * FROM documents WHERE content_ix @@ to_tsquery('word1&word2|word3'); SELECT * FROM documents WHERE content_ix @@ to_tsquery('word1') AND content_ix @@ to_tsquery('word2') OR content_ix @@ to_tsquery('word3'); I'm having to do this on some complex queries to put LIKEs between some ts_querys. Does anyone have such experience? Thanks in advance, -- Teodor Sigaev E-mail: [EMAIL PROTECTED]
Re: [GENERAL] Recomended FS
It's worth checking, isn't it? I appreciate that you may have performed such tests previously, but as hardware and software evolve it's often worth repeating such tests (goes away to do the suggested one tonight). Note that I am not trying to argue away the issue about write caching: it *has* to increase the risk of database corruption following a power failure; however, if your backups are regular and reliable, this may be a risk worth taking to achieve acceptable performance at a low price. regards Mark scott.marlowe wrote: Assuming that the caching was on, I'm betting your database won't survive a power-plug pull in the middle of transactions like the test I put up above.
[GENERAL] Simple SQL
Is it possible to do something like: select 1 as a, a*a; in postgres?
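Not directly: in PostgreSQL (as in standard SQL) a select-list expression cannot refer to an alias defined in the same list. A common workaround, sketched here rather than taken from a reply in the thread, is to define the alias one level down in a subselect:

```sql
-- ERROR: column "a" does not exist
-- SELECT 1 AS a, a*a;

-- Define the alias in a subquery, then reuse it freely:
SELECT a, a*a AS a_squared
FROM (SELECT 1 AS a) AS s;
```

The subselect alias (AS s) is required by PostgreSQL even when it is never referenced.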
[GENERAL] Clustering for performance and fail over
Hello all, I am interested to know if anyone has set up clustering for performance and fail-over using PostgreSQL. We are currently using Oracle for a distribution application and would like to use PostgreSQL with multiple application and database servers. Regards, Stan.
Re: [GENERAL] database files
[EMAIL PROTECTED] (Gail Zacharias) wrote: I am investigating the possibility of using pgsql as the database in an application. I have some unusual requirements that I'd like to ask you all about. I apologize in advance if my terminology is a little off, I'm not familiar with pgsql (yet). I need to be able to move the database files, as normal user-visible files, between machines. I.e. given a database on machine A, I want to be able to copy either a single file (ideally) or a single directory (less ideal but still ok) to, say, a zip drive, bring it over to another machine (with pgsql also installed), start up my application and have it access the copied database through pgsql. Is this sort of thing possible? Many things are possible. Not all are sensible. The approach you seem to want to take appears to fit into the not sensible category. Is a database stored in a single file or multiple files? Lots of files. Can the location of the file(s) be controlled? Yes, to a degree, either by fancy footwork when the database is shut down, or, in the case of specific data files, via how you create them. Are the files accessible and consistent while pgsql is running? Only if you have some sort of logical volume manager around that can copy a whole filesystem around atomically. I assume not all the time, but is there a reliable way to make them accessible (i.e. copyable) and consistent short of shutting down pgsql? If you are storing all of the data atop some logical volume manager system such as Veritas or Tru64 AdvFs or Linux LVM, then there is probably a way, but I'm not sure there is any equivalent on Windows, so it seems unlikely that this could be practical. Is the file format of the pgsql database files compatible between OS's? E.g. could I take some database files from Linux and use them on Windows? Not generally, no. It sounds as though the things you are trying to do are more or less the exact opposite of what is generally considered reasonable usage. 
If you're at clozure, you're doubtless aware of the notion of taking different approaches with different languages. Good Common Lisp code isn't written the same way as colloquial Scheme which doesn't look at all like colloquial C++ or Java. There is an approach to doing this that _would_ provide consistent copies, dumped into one file, of all of the data, that could indeed be loaded onto another system without need to shut the database down. Look at the documentation for pg_dump; that does what you _actually want_, albeit not in the way you are asking to do it. -- wm(X,Y):-write(X),write('@'),write(Y). wm('cbbrowne','ntlug.org'). http://www.ntlug.org/~cbbrowne/spiritual.html ... the most important thing in the programming language is the name. A language will not succeed without a good name. I have recently invented a very good name and now I am looking for a suitable language. -- D. E. Knuth, 1967
Re: [GENERAL] lastval(seq) ?
[EMAIL PROTECTED] (CSN) writes: For updating sequences after copy importing data (with id's). select setval('table_id_seq', (select max(id) from table)); vs. select setval('table_id_seq', (select last_value from table_id_seq)); Is there a transaction-safe way? There's not likely to be. For any given potential value of currval('table_id_seq'), it is always possible that a transaction could be held open that is using that value. The only really _safe_ way to reset sequences is to do so when there are no transactions active on the system. In practice, we have to live with that potential for lack of safety, and I would be inclined to set the value to the maximum visible value plus some reasonable constant, say 1000, on the assumption that unless someone is trying to do something actively pathologically bad, that should be good enough. But my preference would be to do so with applications that might be doing potentially-evil things SHUT DOWN. -- output = (cbbrowne @ libertyrms.info) http://dev6.int.libertyrms.com/ Christopher Browne (416) 646 3304 x124 (land)
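The max-plus-a-constant suggestion looks like this (sequence name, table name, and the 1000 headroom are illustrative):

```sql
-- Reset the sequence to the highest visible id plus 1000 of headroom,
-- so ids held by still-open transactions below that point are unlikely
-- to collide with newly generated ones.
SELECT setval('table_id_seq',
              (SELECT COALESCE(MAX(id), 0) + 1000 FROM table_name));
```

COALESCE covers the empty-table case, where MAX(id) would otherwise return NULL and make setval fail.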
Re: [GENERAL] Clustering for performance and fail over
Hi, Stan. We're in the early stages of testing a new Postgres (7.3) cluster. For background, our database is about 14GB on disk, and we see about a transaction a second (out of about 120 queries/sec). Our application is a large, dynamic, Apache-based web system written in Perl. Our main database machine is a quad P4 Xeon (1.8GHz) with 4GB of RAM, running Linux 2.4.mumble; poorly formed queries and bad disk layout (we're working on it) mean that during times of peak traffic we'd see load sometimes up over 15. For fail-over, we've been running the contrib/dbmirror single-master replication for about six months (in production) with no ill effects. We do reporting and db backup off of the slave machine, and it works great. However, we project steady, linear growth in usage, and thus needed to find extra performance -- and it's not easy to get a higher-performing shared-memory multiprocessor, to say nothing of cost. As our system is pure Perl, we decided to replace the standard Perl database access layer with a custom, multiplexing handle cache. It's been running for about a week now and distributing the load flawlessly. A bonus is that proxying the queries has allowed us to begin collecting more interesting timing and usage statistics, and we're finally starting to hunt down and mercilessly improve our nastiest queries. There are some refinements to the dbmirror that we're currently working on, but for now, everything is working flawlessly. 'jfb C++: an octopus made by nailing extra legs onto a dog.
Re: [GENERAL] plpgsql: return multiple result sets or temp table
On Wednesday 22 October 2003 01:19 pm, Jeff Eckermann wrote: --- Oksana Yasynska [EMAIL PROTECTED] wrote: Jeff, thank you for the time and suggestion. I'm also trying to use SETOF custom_type as a solution. I don't believe that works in 7.2. But if you do make it work, be sure to tell me how ;-) That will work with 7.3, but I am assuming that, as your first message said, you cannot upgrade the server. Oops... forgot to mention that we are upgrading to 7.3... right now :) That's why the custom type seems a better solution. Cheers, Oksana
Re: [GENERAL] Desperate help needed for Replication
On Thu, 23 Oct 2003 19:41, Chris M. Gamble wrote: I am trying to perform what I best understand as multi-master asynchronous replication for postgres 7.3.3 servers. After researching, I tried the pgReplication project (and made the simple modification listed in their mailing lists), but was not able to build the environment needed for Red Hat 7.3. My question is: is anyone doing this type of replication, and if you are not using pgReplicate, then what are you using? 1) I think your best shot would be eRServer: http://gborg.postgresql.org/project/erserver/projdisplay.php 2) Upgrade your DB server to 7.3.4. Your current version has a serious bug which can cause crashes of the system. -- select 'mmarques' || '@' || 'unl.edu.ar' AS email; - Martín Marqués | [EMAIL PROTECTED] Programador, Administrador, DBA | Centro de Telemática Universidad Nacional del Litoral -
Re: [GENERAL] Desperate help needed for Replication
Isn't the erserver project limited to read-only slave stations? When I looked at it, it did not seem to indicate that I could make an update on any of the postgres servers and have that update replicated to all the others. It looked as though I could only update at one server. On Thu, 23 Oct 2003 19:41, Chris M. Gamble wrote: I am trying to perform what I best understand as multi-master asynchronous replication for postgres 7.3.3 servers. After researching, I tried the pgReplication project (and made the simple modification listed in their mailing lists), but was not able to build the environment needed for Red Hat 7.3. My question is: is anyone doing this type of replication, and if you are not using pgReplicate, then what are you using? 1) I think your best shot would be eRServer: http://gborg.postgresql.org/project/erserver/projdisplay.php 2) Upgrade your DB server to 7.3.4. Your current version has a serious bug which can cause crashes of the system.
Re: [GENERAL] Recomended FS
Mark Kirkwood wrote: It's worth checking, isn't it? I appreciate that you may have performed such tests previously, but as hardware and software evolve it's often worth repeating such tests (goes away to do the suggested one tonight). Note that I am not trying to argue away the issue about write caching: it *has* to increase the risk of database corruption following a power failure; however, if your backups are regular and reliable, this may be a risk worth taking to achieve acceptable performance at a low price. Sure, but how many people are taking that risk without knowing it! -- Bruce Momjian | http://candle.pha.pa.us [EMAIL PROTECTED] | (610) 359-1001 + If your life is a hard drive, | 13 Roberts Road + Christ can be your backup. | Newtown Square, Pennsylvania 19073
Re: [GENERAL] Nullable 'Foreign Key-like' Constraint
Ron, I have done this by adding the attribute to the table with nulls allowed and adding a constraint to the table for the foreign key... works like a charm:

CREATE TABLE TESTTYPE ( TESTTYPEKEY char(30) NOT NULL, TESTTYPENAME varchar(255) NULL, TESTTYPEDESC varchar(255) NULL, TESTTYPELABEL varchar(255) NULL, CONSTRAINT XPKTESTTYPE PRIMARY KEY (TESTTYPEKEY) );

CREATE TABLE TEST ( TESTKEY char(30) NOT NULL, TESTTYPEKEY char(30) NULL, CONSTRAINT TEST_PK PRIMARY KEY (TESTKEY), CONSTRAINT testtype_test FOREIGN KEY (TESTTYPEKEY) REFERENCES TESTTYPE );

Karen L. Grose, Vigilos Inc., 2030 First Avenue Suite 300, Seattle, WA 98121 206.728.6464 ext. 111: Phone 206.728.6440: Fax 206.335.8386: Cell

-Original Message- From: Ron [mailto:[EMAIL PROTECTED] Sent: Thursday, October 23, 2003 9:02 AM To: [EMAIL PROTECTED] Subject: [GENERAL] Nullable 'Foreign Key-like' Constraint I posted this to 'questions' yesterday instead of 'general' by mistake. Sorry if anyone received duplicates. Mandatories: Ver 7.3.4, Red Hat Linux 8.0, P4, 2GB RAM. I want to add a 'nullable' foreign key to a column in a table. I have tables company and project which may be related by company.companyID - project.companyID. project.companyID is allowed to be null. However, when someone tries to delete a company which is still referenced in project, I want a constraint restricting deletion. I tried: ALTER TABLE company ADD CONSTRAINT company_is_ta CHECK (companyID IN (SELECT companyID FROM project)); and I receive: ERROR: cannot use subselect in CHECK constraint expression Then I came across this previous post which showed how to set it up when the table is created. I tried it and it works for a new table, but I can't get it to work with existing tables.
1) My attempt: ALTER TABLE project ALTER COLUMN companyID SET DEFAULT NULL; ALTER TABLE project ADD CONSTRAINT company_is_ta companyID REFERENCES company(companyID); (plus variations on the above, resulting in errors, all similar to:) ERROR: parser: parse error at or near companyID at character 53

2) Based on this previous posting: From: Manfred Koizar ([EMAIL PROTECTED]) Subject: Re: NULL Foreign Key Newsgroups: comp.databases.postgresql.general, comp.databases.postgresql.questions Date: 2002-07-17 05:51:19 PST On Tue, 16 Jul 2002 17:10:32 -0700, Kuhn, Dylan K (NDTI) [EMAIL PROTECTED] wrote: Can I make a foreign key that is allowed to be NULL? Yes: fred=# CREATE TABLE father (i INT PRIMARY KEY); NOTICE: CREATE TABLE/PRIMARY KEY will create implicit index 'father_pkey' for table 'father' CREATE fred=# CREATE TABLE son (i INT REFERENCES father); NOTICE: CREATE TABLE will create implicit trigger(s) for FOREIGN KEY check(s) CREATE fred=# INSERT INTO father VALUES (1); INSERT 183317 1 fred=# INSERT INTO son VALUES (1); INSERT 183318 1 fred=# INSERT INTO son VALUES (2); ERROR: unnamed referential integrity violation - key referenced from son not found in father fred=# INSERT INTO son VALUES (NULL); INSERT 183320 1 Servus Manfred

Anyone know how I can get this to work? BTW, I don't want to use 'ignore' rules when someone attempts to delete the company, as I want the constraint message to be shown in the app's browser. TIA, Ron
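For the record, the failing ALTER TABLE attempt above is missing the words FOREIGN KEY, which is exactly where the parser stops; the usual form, written against the thread's own tables, would be:

```sql
-- Adds the restricting foreign key to the existing project table.
-- NULLs in project.companyID are permitted and simply not checked;
-- deleting a referenced company row is rejected (the default NO ACTION).
ALTER TABLE project
    ADD CONSTRAINT company_is_ta
    FOREIGN KEY (companyID) REFERENCES company (companyID);
```

This matches the behavior of Manfred's CREATE TABLE example, just applied after the fact.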