[GENERAL] Problem with GIST-index and timestamps
Hello, I wanted to define a GiST index on a table with a timestamp column containing 'infinity' values, and it throws the following error: ERROR: cannot subtract infinite timestamps. Is this a known limitation? How can I avoid or circumvent this? Thanks in advance, Sebastian ---(end of broadcast)--- TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]
[GENERAL] Thanks for insights into internal design
You give me valuable insight into the inner workings of such software. I am a firm believer in testing everything with very large files. One might spend months developing something, and have it in production for a year, and not realize what will happen when their files (tables) grow to several million records (rows). And it takes so little effort to create large test files.

A long time ago, in a galaxy far, far away, [EMAIL PROTECTED] wrote: I must say one interesting thing. When I downloaded the trial version from TheKompany, and asked it to browse a test file in PGSql which I loaded with 250,000 rows, it started to read them, and read for a long long time (as MSAccess does), but then seemed to get stuck, whereas MSAccess is able to browse the entire file. I must experiment more with the demo version from theKompany, and also with this free version from the site you gave me.

This sort of problem is characteristic of the use of "array" objects in graphical toolkits. Suppose you're populating something with 250K rows, perhaps with a dozen fields per row. In such a case, the toolkit is slinging around 3-4 million objects, and having to evaluate which of them are visible on screen at any given time. _Any_ kind of inefficiency in the library, or in the use of the library, can easily lead to rendering turning out really, really badly.

The X Window system has gotten heavily criticized for speed problems, commonly with respect to how Mozilla used to work when rendering large web pages. Reality was that Mozilla was implemented (this is no longer true, by the way) atop a platform-independent library called Rogue Wave, which then had a mapping to Motif (which is noted as Not Everyone's Favorite Graphics Library ;-)), which then rendered things using X. The True Problem lay somewhere in that set of layers and, since several of the layers were pretty inscrutable, it was essentially impractical to address the performance problem.

Much the same thing took place with the Tcl/Tk application, "cbb" (Check Book Balancer); the Tk 'array' object got to behave increasingly badly with increasing thousands of rows. And changing one transaction near the top of an account would lead to cascading balance updates, therefore walking (linear fashion, more than likely leading to superlinear resource consumption :-() through the rest of the transactions to update every single balance...

Gigahertz, Gigabytes, and upgrades may overcome that, to some degree, but it wouldn't be overly surprising if you were hitting some such unfortunate case. It might represent something fixed in a later release of Rekall; it could represent something thorny to resolve. I would really hate the notion of depending on a GUI to manage millions of objects in this manner; it is just so easy for it to go badly.

-- "cbbrowne","@","gmail.com" http://linuxdatabases.info/info/nonrdbms.html Rules of the Evil Overlord #10. "I will not interrogate my enemies in the inner sanctum -- a small hotel well outside my borders will work just as well." http://www.eviloverlord.com/
Re: [GENERAL] PRIMARY KEY on a *group* of columns imply that each column is
On Wed, 2005-04-27 at 17:00 +0200, Stephane Bortzmeyer wrote: On Wed, Apr 27, 2005 at 09:36:57AM -0500, Scott Marlowe [EMAIL PROTECTED] wrote a message of 18 lines which said: Often the best bet here, btw, is to declare it not null then use something other than null to represent null, like the text characters NA or something. Yes, but it defeats the purpose of NULL. And what should I use as a pseudo-NULL value for INET? 127.0.0.1? 0.0.0.0? Special values are well-known for the problems they raise. That's why many languages have NULL-like solutions (None in Python, undef in Perl, Maybe types in Haskell, etc).

No. NULL is NOT 'None', nor 'undef', and definitely not NULL as in C. Those are perfectly defined values, although special ones. Only 'undef' is quite misleading, but nevertheless it is just _one_ value. That is, given a variable A, you can always write a boolean expression that evaluates True or False to test if A is _equal_ to None/undef/NULL (in C):

$ python -c 'a = None; print a == None'
True
$ perl -e 'print a == undef, "\n"'
1
$ cat p.c
#include <stdio.h>
int main(int argc, char *argv[])
{
    char *a = NULL;
    printf("%d\n", a == NULL);
}
$ cc p.c
$ ./a.out
1

About Haskell, I don't know. From what I understand from quickly reading the manual, I'd say that Nothing is similar. You _can_ tell if something is equal to Nothing.

In databases, NULL has a completely different meaning. It doesn't mean _no value_, which is just a special (single) value, but it means _unknown value_. You simply can't compare it with any single value (even special ones) and expect a boolean answer. The only possible answer is 'I don't know', which is NULL in boolean. Notice that the boolean NULL is _not_ the same as False. So you get:

marco=# select 2 = 2;
 ?column?
----------
 t
(1 row)

marco=# select 2 = 3;
 ?column?
----------
 f
(1 row)

marco=# select 2 = NULL;
 ?column?
----------
 
(1 row)

that is, neither true nor false.
Back to your example: you can compare ('a', 2) with ('a', 2), the result is 't', and thus you've managed to identify the right row (it works as a primary key). Also, ('a', 3) is different from ('a', 2), so you can tell the two rows are different. But what if you allow ('a', NULL)? ('a', NULL) is neither the same _nor different_ from ('a', 2). The result of comparison is NULL, no matter how you're testing it:

marco=# select ('a', 2) = ('a', NULL);
 ?column?
----------
 
(1 row)

marco=# select ('a', 2) <> ('a', NULL);
 ?column?
----------
 
(1 row)

see? _Neither_ one is true. This would completely defeat the purpose of the primary key. And of course, comparing ('a', NULL) with ('a', NULL) results in exactly the same:

marco=# select ('a', NULL) = ('a', NULL);
 ?column?
----------
 
(1 row)

marco=# select ('a', NULL) <> ('a', NULL);
 ?column?
----------
 
(1 row)

That's why NULLs are not allowed in primary keys. The key simply won't work. NULL in databases is not _one_ special value. It's _any_ value, since it's unknown. The boolean expression 2 = NULL might be true or might be false, since NULL could be _any_ integer in this expression. This is completely different from the semantics of None/undef/NULL in most programming languages. You wrote: Special values are well-known for the problems they raise. Then NULL is definitely _not_ the value you're looking for. Everything can be said of NULL, but not that it is well-known. In your case, by choosing (name, address) as the primary key, you're saying 'I need to know both the name and the address to be able to retrieve a datum in the table'. This implies that if you have partial knowledge (you don't know the address), you can't naturally retrieve a single datum (or insert it). Depending on what you're trying to achieve, you may need to split the table (normalization, the theorists call it). I don't like theory much, but its conclusions sometimes just make a lot of sense.
:-) Review your design; maybe either the table schema or the choice of the primary key is not natural for your database. .TM. -- Marco Colombo, Technical Manager, ESI s.r.l. [EMAIL PROTECTED]
Re: [GENERAL] Reduce size of $PGDATA for demo cdrom?
On Wed, 2005-04-27 at 20:44, Bruno Wolff III wrote: On Wed, Apr 27, 2005 at 10:23:19 -0500, Scott Marlowe [EMAIL PROTECTED] wrote: My first recommendation would be to put everything into one database. It looks like you've got 6 databases. If you've still got the template0 database, you can probably get rid of that one as well. If you're not going to need to create any new databases you can maybe drop template1 too. Isn't template0 needed for dumps, since they are relative to template0? No, it's template1, and you only need it for pg_dumpall. pg_dump should still work. Note that without template1, things like psql -l will fail. But you can still connect to the one database you know the name of.
[GENERAL] Help
Hi, I am using Redhat Linux 9. At the time of installation I omitted the option of SQL server. After installing Linux 9 I started installing PostgreSQL 7.4.5. After installing, I am unable to switch to the postgres user:

root$ su postgres
su: user postgres does not exist

Note: At the time of installing itself I got a problem while setting permissions as root:

chown postgres.postgres /usr/local/pgsql/
chown: `postgres.postgres': invalid user

Even postgres --version works; I get output as:

root$ postgres --version
postgres (PostgreSQL) 7.4.5

Please help me to get $ su postgres working. -- Warm Regards, S.ElayaRaja Mobile: (+91) 98450 59540 E-Mail: [EMAIL PROTECTED] [EMAIL PROTECTED]
Re: [GENERAL] PRIMARY KEY on a *group* of columns imply that each column is NOT
On Thu, Apr 28, 2005 at 02:04:29PM +0200, Marco Colombo [EMAIL PROTECTED] wrote a message of 146 lines which said: No. NULL is NOT 'None', nor 'undef', and definitely not NULL as in C. Thanks for the very good and detailed explanation of NULL in SQL. Curious people may note that the strange semantics of NULL are heavily criticized in C. J. Date and Hugh Darwen, A Guide to the SQL Standard (Addison-Wesley). [Side note: I was perfectly aware of NULL in C, where it is just an ordinary zero; that's why I only mentioned Perl and Python.]

marco=# select 2 = NULL;
 ?column?
----------
 
(1 row)

Even better, you can write:

registry=# select NULL = NULL;
 ?column?
----------
 
(1 row)

Depending on what you're trying to achieve, you may need to split the table (normalization, the theorists call it). Yes, I noticed in similar organizations that the Hosts table was split into one table for names and one for addresses, maybe for exactly that reason. Review your design, maybe either the table schema or the choice of the primary key is not natural for your database. At the present time, it seems that, practically speaking, the technique proposed by Sebastian Böck (two indexes) is perfectly fine. This technique can be summarized as: PostgreSQL, I tell you to treat all NULL addresses as being the same value.
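The two-index technique mentioned above can be sketched as follows (the hosts table and column names here are hypothetical, not the poster's actual schema):

```sql
-- Hypothetical table: enforce uniqueness of (name, address) while
-- treating all NULL addresses for a given name as the same value.
CREATE TABLE hosts (
    name    text NOT NULL,
    address inet
);

-- One partial unique index covers rows that have an address...
CREATE UNIQUE INDEX hosts_name_addr_key ON hosts (name, address)
    WHERE address IS NOT NULL;

-- ...and a second one covers rows without, so at most one
-- NULL-address row is allowed per name.
CREATE UNIQUE INDEX hosts_name_noaddr_key ON hosts (name)
    WHERE address IS NULL;
```

Together the two indexes behave much like a primary key on (name, address) that tolerates NULL addresses.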
Re: [GENERAL] Help
ElayaRaja S [EMAIL PROTECTED] writes: Note: At the time of installing itself i get problem while setting permission as root: chown postgres.postgres /usr/local/pgsql/ chown: `postgres.postgres': invalid user Some versions of chown only support the notation user:group, not user.group --- looks like yours is one. I suspect also that you failed to create the postgres user account properly. regards, tom lane
Re: [GENERAL] Help
You can use the adduser command to create the postgres user on your linux box. I use Uid:26 Gid:100, but I don't know that it matters. Woody
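A minimal sketch of the account setup (the UID/GID and install path are taken from this thread, but are otherwise arbitrary):

```shell
# As root: create a dedicated system account for the server.
useradd -u 26 -g 100 -d /usr/local/pgsql postgres

# Give it ownership of the installation directory.
chown -R postgres /usr/local/pgsql

# Switching to the account should now work:
su - postgres
```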
Re: [GENERAL] Visual Query Builder
We use EMS PostgreSQL Manager 2. Greetings, Ariel. On 4/27/05, Hrishikesh Deshmukh [EMAIL PROTECTED] wrote: Hi All, I have a DB in PostgreSQL; is there a visual query builder? Thanks in advance. Hrishi -- Ariel Mónaco - IT-Consultant
[GENERAL] Help
After logging in with su postgres I am unable to create the db:

$ createdb -E utf8 -U system myDB;
ERROR: permission denied to create database
createdb: database creation failed: ERROR: permission denied to create database

-- Warm Regards, S.ElayaRaja Mobile: (+91) 98450 59540 E-Mail: [EMAIL PROTECTED] [EMAIL PROTECTED]
[GENERAL] Clarification
Hi, I am using Redhat Linux 9 with PostgreSQL 7.4.5. It contains 3 databases: postgres, template0, and template1. If I want to create mydb, where do I have to create it? Do I have to log in as user postgres and create mydb, or run psql template1 and create mydb, or run psql template0 and create mydb? -- Warm Regards, S.ElayaRaja Mobile: (+91) 98450 59540 E-Mail: [EMAIL PROTECTED] [EMAIL PROTECTED]
Re: [GENERAL] Thanks for insights into internal design
That's fine for a system like Access or DBASE, but you should never be making queries that large for a production application. Access or DBASE or any other local file-based system will not have any problems bringing back 1 million records because it does not have to bring the records across the wire via TCP/IP. You should always limit queries by a date range or at least implement a paging system. 250,000 to 1 million rows is also going to suck up a huge amount of system memory on the client side. It does not seem like you are really catching on to the concept of a client/server based system. It does not matter if there are a billion rows, because you should NEVER be letting an end user bring back the full amount anyway. Think about it. PostgreSQL is not a local file-based system like Access or DBASE; you can't use the same testing methods or you will be in for a world of hurt. You give me valuable insight into the inner workings of such software. I am a firm believer in testing everything with very large files. One might spend months developing something, and have it in production for a year, and not realize what will happen when their files (tables) grow to several million records (rows). And it takes so little effort to create large test files.
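The paging advice above can be sketched with LIMIT/OFFSET (the table and column names here are hypothetical):

```sql
-- Fetch page 3 of a result set, 50 rows per page, rather than
-- dragging every row across the wire to the client.
SELECT order_id, customer, order_date
FROM orders
ORDER BY order_date DESC, order_id
LIMIT 50 OFFSET 100;
```

A stable ORDER BY is what keeps the pages consistent from one query to the next.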
Re: [GENERAL] Help
On Thu, 2005-04-28 at 09:51, ElayaRaja S wrote: After login as su postgres I am unable to create the db; $ createdb -E utf8 -U system myDB; ERROR: permission denied to create database createdb: database creation failed: ERROR: permission denied to create database What's with the -U system in there? As the postgres user (i.e. either leave out the -U system or make it -U postgres) it should work fine. Oh, and you MAY want to just stick to lower case names for databases, tables, fields etc... It's easier in the long run.
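A minimal sketch of what the reply suggests, run as the postgres OS user (note that on 7.4 the UTF-8 encoding is named UNICODE):

```shell
# Connecting as yourself, so no -U is needed; lower-case db name.
createdb -E UNICODE mydb
psql mydb
```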
[GENERAL] Is there a bug in the win32 pg_restore (version 8.02) ?
Hi, I just tried to restore a db with the win32 pg_restore that shipped with 8.02 and I get to the part where it prompts me for the password, and I enter it, then I get the following error:

C:\>pg_restore.exe --ignore-version --verbose --dbname=mandi_test -W -U postgres --port=5432 --host=192.168.1.23 "C:\Documents and Settings\Administrator\Desktop\mandi.bak"
pg_restore: connecting to database for restore
Password:
pg_restore: [archiver (db)] connection to database "mandi_test" failed: FATAL: password authentication failed for user "postgres"
pg_restore: *** aborted because of error

I have the pg_hba.conf for my host set to md5, and if I change it to trust then I can do the restore. Update: I just tried the pg_restore that has a version number of 8.0 and it works; it definitely seems to be a bug in the 8.02 version (win32). -- Tony Caduto AM Software Design Home of PG Lightning Admin for Postgresql 8.x http://www.amsoftwaredesign.com
Re: [GENERAL] PRIMARY KEY on a *group* of columns imply that each column is
Marco Colombo wrote: That is, given a variable A, you can always write a boolean expression that evaluates True or False to test if A is _equal_ to None/undef/NULL (in C): And of course you can do this in SQL as well. ('a', NULL) is neither the same _nor different_ from ('a', 2). Uh, I'm not sure what you mean by the same; but certainly ('a',NULL) is distinct from ('a',2). The result of comparison is NULL, no matter how you're testing it: Unless you're testing with the IS DISTINCT FROM operator.

fli=# select ('a',NULL) IS DISTINCT FROM ('a',2);
 ?column?
----------
 t
(1 row)

marco=# select ('a', 2) = ('a', NULL);
 ?column?
----------
 
(1 row)

marco=# select ('a', 2) <> ('a', NULL);
 ?column?
----------
 
(1 row)

fli=# select ('a',NULL) IS DISTINCT FROM ('a',NULL);
 ?column?
----------
 f
(1 row)

That's why NULLs are not allowed in primary keys... But in general I agree with most of the rest of what you said.
[GENERAL] ERROR: Could not convert UTF-8 to ISO8859-1
Hello, my database encodes data as utf8. The data I want to select contains characters like äöü. If I select some columns and change the encoding to latin1, then I get the following error message: select created, keywords from requests; ERROR: Could not convert UTF-8 to ISO8859-1 If I set the client encoding to utf8 by entering \encoding utf8, then the same select will print all the columns but display mangled two-byte sequences (e.g. Ã¼ in place of ü, Ã¶ in place of ö) and mess up other special characters. The version of my postgres installation is 7.2.1 on woody/debian. Why does postgresql issue the error message ERROR: Could not convert UTF-8 to ISO8859-1, when I execute a select statement after I changed the client encoding to latin1??? fritz
[GENERAL] temp tables ORACLE/PGSQL
Hi, I used to work with Oracle, and now, tryin' PostgreSQL, I'm a bit confused. I found that creating a temp table in one session does not make it available for other sessions for the same user? Is this intended?? I was trying to use temp tables because of the lack of session and package variables in PGSQL (that's what I know). But I'm not sure if I have to create a temp table at the beginning of each session? Or, just like in Oracle, create the temp table definition once, and all that's temporary is the data. If creating a temp table for each connection is a must, then maybe you can tell me if there is any trigger/event that is called during connecting to the db. I guess that would be the best place to create temp tables and feed them with session parameters. Regards fisher
[GENERAL] Linux Journal Users' Choice Award Nominations
Linux Journal has a new process for their Readers' Choice awards. Before there is a vote, they are soliciting nominations for each category. Say (for example) you wanted to see PostgreSQL included among the databases up for vote; you would send an email to: [EMAIL PROTECTED] and include the following line in the body of the message: database: PostgreSQL See the full announcement at http://www.linuxjournal.com/article/8192 Cheers, Steve
Re: [GENERAL] ERROR: Could not convert UTF-8 to ISO8859-1
Why does postgresql issue the error message ERROR: Could not convert UTF-8 to ISO8859-1, when I execute a select statement after I changed the client encoding to latin1??? Because not every UTF-8 character can be encoded as Latin1. Unicode has more than 64K characters; Latin1 has fewer than 256. Check your database for non-Latin1 characters. If I set the client encoding to utf8 by entering \encoding utf8, then the same select will print all the columns but display mangled sequences for ü, ö, and other special characters. This is because your display (xterm, whatever) is set to Latin1, presumably. There are at least three different encoding settings here: how your data is stored in the database, what the server sends to psql, and what your display thinks it's getting. These must all be consistent with one another. - John D. Burger MITRE
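The first two of those three settings can be inspected from SQL (a sketch; on an old 7.2 server use psql's \encoding command for the client side instead of SHOW):

```sql
-- How the data is stored, fixed when the database was created:
SELECT pg_encoding_to_char(encoding)
FROM pg_database
WHERE datname = current_database();

-- What the server converts to before sending results to the client;
-- only set this to an encoding that can represent everything stored:
SHOW client_encoding;
SET client_encoding TO 'UNICODE';
```

The third setting, what the terminal expects, is controlled by the terminal and locale, not by the database.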
Re: [GENERAL] temp tables ORACLE/PGSQL
Hi I used to work with Oracle and now tryin' PostgreSQL I'm a bit confused. I found that creating temp table in one session does not make it available for other sessions for the same user? Is this intended?? Yes, that's the natural behavior of temp tables in PostgreSQL. The life cycle of a temp table is tied to the session. When the session ends, all its temp tables are destroyed. When you want to use temp tables again, you have to create them again. I was tryin to use because of lack of session and package variables in PGSQL (thats what I know). But I'm not sure if I have to create a temp table at the beginning of each session? Or just like in Oracle create temp table definition and all whats temporar is data. You will lose the table definition too. If creating a temp table for each connection is a must then maybe You can tell me if there is any triger /event that is being called during connecting to db. I guess that would be the best place to create temp tables and feed it up with session parameters. No, there are no table or session triggers. You have to create temp tables from the application. Regards Pavel Stehule
Re: [GENERAL] temp tables ORACLE/PGSQL
NO-fisher-SPAM_PLEASE wrote: Hi I used to work with Oracle and now tryin' PostgreSQL I'm a bit confused. I found that creating temp table in one session does not make it available for other sessions for the same user? Is this intended?? PostgreSQL does not support global temporary tables. This is one of the most painful missing features as far as porting from Oracle goes, from my standpoint. Yes, you need to create the temporary table at the beginning of each session. Also, stored procedures that use temporary tables are more painful to write - you need to use EXECUTE for any SQL that references a temporary table - read the Porting From Oracle section of the PostgreSQL manual. I'd recommend rereading it several times. The other option with temporary tables is to emulate a global temporary table using a normal table and adding a column like this: session_id INTEGER DEFAULT pg_backend_pid() NOT NULL and then modifying your select/update/delete statements to include where session_id = pg_backend_pid() so that you only deal with the data from your current session. The pg_backend_pid() is guaranteed to be unique while connected. You'll just want to make sure you have a process for deleting rows from the table so if you get a pg_backend_pid() again you won't have problems. This has the advantage of not having to create a temporary table at the beginning of every session, plus your stored procedures don't need to use EXECUTE. The disadvantage is, you'll have to have some process for deleting old data from the table, as it will stay around and it will bite you when you get the same pg_backend_pid() again down the road. Dennis
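The emulation described above can be sketched like this (hypothetical table name; a sketch of the reply's idea, not production-hardened code):

```sql
-- A normal table standing in for a global temporary table.
CREATE TABLE session_scratch (
    session_id integer DEFAULT pg_backend_pid() NOT NULL,
    payload    text
);

-- At connection start, clear any stale rows left behind by an
-- earlier backend that happened to have the same PID:
DELETE FROM session_scratch WHERE session_id = pg_backend_pid();

-- Each session's inserts are tagged automatically by the DEFAULT...
INSERT INTO session_scratch (payload) VALUES ('some value');

-- ...and every statement filters on the tag:
SELECT payload FROM session_scratch
WHERE session_id = pg_backend_pid();
```

As the reply notes, a periodic cleanup job is still needed for rows left by sessions that disconnected without cleaning up.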
[GENERAL] OT: phpPgAdmin
Hi, I apologise for this off-topic question. I am setting up phpPgAdmin on a reporting server to access other, remote postgres servers. I want to use a restricted, password-protected user account and use pg_hba.conf to restrict access to each database. For the user account, what level of permissions will I need? I have tried it out with SELECT on all tables/views granted to the user, but when I try to look at processes, I get a list of insufficient privilege values when trying to look at the SQL. Any ideas? Thanks Simon -- Simon Windsor Email: [EMAIL PROTECTED] Tel: 01454 617689 Mob: 07960 321599
Re: [GENERAL] temp tables ORACLE/PGSQL
This is not entirely correct. We use temp tables all the time in PL/pgSQL functions and never have to use EXECUTE. We have found that you have to use EXECUTE only in certain circumstances. stored procedures that use temporary tables are more painful to write - you need to use EXECUTE for any SQL that references a temporary table - read the Porting From Oracle section of the PostgreSQL manual. I'd recommend rereading it several times. We use this in all our functions that use temp tables, and we use PG Lightning Admin, which refreshes the connection each time you run a query, which gets around a lot of issues with temp tables.

CREATE OR REPLACE FUNCTION public.iftableexists(varchar)
RETURNS pg_catalog.bool AS
$BODY$
DECLARE
BEGIN
    /* check that the table exists in the database and is visible */
    PERFORM n.nspname, c.relname
    FROM pg_catalog.pg_class c
    LEFT JOIN pg_catalog.pg_namespace n ON n.oid = c.relnamespace
    WHERE n.nspname LIKE 'pg_temp_%'
      AND pg_catalog.pg_table_is_visible(c.oid)
      AND upper(c.relname) = upper($1);
    IF FOUND THEN
        RETURN TRUE;
    ELSE
        RETURN FALSE;
    END IF;
END;
$BODY$
LANGUAGE 'plpgsql' VOLATILE;

-- Tony Caduto AM Software Design Home of PG Lightning Admin for Postgresql 8.x http://www.amsoftwaredesign.com
Re: [GENERAL] temp tables ORACLE/PGSQL
On Thu, Apr 28, 2005 at 01:14:58PM -0500, Tony Caduto wrote: This is not entirely correct. We use temp tables all the time in PLpgsql functions and never have to use execute. We have found that you have to use EXECUTE only in certain circumstances. we use this in all our functions that use temp tables, and we use PG Lightning Admin, which refreshes the connection each time you run a query which gets around a lot of issues with temp tables. I'm assuming that by refresh you mean that you close the connection to the database and create a new one for every query? If you do that each time you run a query, doesn't that make temporary tables pretty much worthless for anything other than PL scratch space? And it's obviously a ludicrous thing to do in almost all production cases, so if you're using lightning admin to prototype queries for production use aren't you going to get burned by the entirely different behaviour? Cheers, Steve
Re: [GENERAL] Returning a RECORD, not SETOF RECORD
On Fri, Apr 22, 2005 at 12:24:26AM +0200, Thomas Hallgren wrote: CREATE FUNCTION xyz(int, int) RETURNS RECORD AS '...' CREATE TABLE abc(a int, b int); Now I want to call my xyz function once for each row in abc and I want my RECORD to be (x int, y int, z timestamptz). How do I write that query? I.e. where do I specify my RECORD definition? Is it possible at all? Ideally I'd like to write something like this: SELECT xyz(a, b) AS (x int, y int, z timestamptz) FROM abc; but that yields a syntax error. What version of PostgreSQL are you using, and could the function return a specific composite type instead of RECORD? The following works in 8.0.2:

CREATE TYPE xyztype AS (
    x integer,
    y integer,
    z timestamp with time zone
);

CREATE FUNCTION xyz(arg1 integer, arg2 integer) RETURNS xyztype AS $$
DECLARE
    rec xyztype;
BEGIN
    rec.x := arg1 + 5;
    rec.y := arg2 + 5;
    rec.z := timeofday();
    RETURN rec;
END;
$$ LANGUAGE plpgsql VOLATILE;

CREATE TABLE abc (
    a integer,
    b integer
);

INSERT INTO abc (a, b) VALUES (10, 20);
INSERT INTO abc (a, b) VALUES (30, 40);

SELECT *, (xyz(a, b)).* FROM abc;
 a  | b  | x  | y  |               z
----+----+----+----+-------------------------------
 10 | 20 | 15 | 25 | 2005-04-28 12:47:03.762354-06
 30 | 40 | 35 | 45 | 2005-04-28 12:47:03.762812-06
(2 rows)

SELECT z, y, x, b, a FROM (SELECT *, (xyz(a, b)).* FROM abc) AS s;
               z               | y  | x  | b  | a
-------------------------------+----+----+----+----
 2005-04-28 12:47:17.953952-06 | 25 | 15 | 20 | 10
 2005-04-28 12:47:17.954543-06 | 45 | 35 | 40 | 30
(2 rows)

-- Michael Fuhr http://www.fuhr.org/~mfuhr/
Re: [GENERAL] Problem with GIST-index and timestamps
On Thu, 28 Apr 2005, Sebastian Böck wrote: Hello, i wanted to define a GIST-index on a table with a timestamp-column containing 'infinity' values, and it throws the following error: ERROR: cannot subtract infinite timestamps Is this a known limitation? I don't know. How can i avoid or circumvent this? A partial index should work. Thanks in advance Sebastian Regards, Oleg _ Oleg Bartunov, sci.researcher, hostmaster of AstroNet, Sternberg Astronomical Institute, Moscow University (Russia) Internet: oleg@sai.msu.su, http://www.sai.msu.su/~megera/ phone: +007(095)939-16-83, +007(095)939-23-83
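The partial-index workaround can be sketched like this (hypothetical table; it assumes a GiST operator class for timestamps is available, e.g. from contrib, since plain timestamps have no built-in GiST opclass):

```sql
-- Hypothetical table whose timestamp column may hold 'infinity'.
CREATE TABLE events (
    id serial PRIMARY KEY,
    ts timestamp
);

-- Index only the finite values, so the GiST support code never has
-- to subtract infinite timestamps:
CREATE INDEX events_ts_gist ON events USING gist (ts)
    WHERE ts <> 'infinity' AND ts <> '-infinity';
```

Queries that repeat the same WHERE condition can then use the index.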
[GENERAL] Clustering
I haven't found any information on clustering with PostgreSQL. One idea we've been tossing around: with PostgreSQL you can create a function that does something when something else happens. Pseudo code: when database xyz table companyname is updated, update the same table and rows in database abc. So each time an update is made, the server performs the function. So every update is essentially 1xN updates (N is the number of servers), but it is all done at the server level so it should be fast. Now the problem with that solution: if a server db/table gets updated and the server crashes while it is sending that same update over the net to the offsite cluster, the update technically went through, but the record of it is lost. What is a good clustering technique for PostgreSQL? ~PatHaugen
Re: [GENERAL] temp tables ORACLE/PGSQL
Tony Caduto wrote: This is not entirely correct. We use temp tables all the time in PLpgsql functions and never have to use execute. We have found that you have to use EXECUTE only in certain circumstances. we use this in all our functions that use temp tables, and we use PG Lightning Admin, which refreshes the connection each time you run a query which gets around a lot of issues with temp tables. If you refresh the connection each time you run a query, maybe you don't need to use EXECUTE with temporary tables in stored procedures, but who does that in a production database application? Most people want to re-use connections for performance reasons. Dennis
Re: [GENERAL] Clustering
On Thu, 2005-04-28 at 13:59, Patrick Haugen wrote:
> I haven't found any information on clustering with PostgreSQL. One idea we've been tossing around: through PostgreSQL you can create a function that does something when something else happens. Pseudo code: when database xyz table companyname is updated, update the same table and rows in database abc. So each time an update is made, the server performs the function. ... What is a good clustering technique for PostgreSQL?

Something that pretty much does that, with the help of a couple of external daemons, is the Slony project: http://slony.info
Re: [GENERAL] on insert rule primary key
Problem solved. Hacking away 'til the wee hours yielded a solution using an ON UPDATE rule, adding a row to a new table. Successful test sample follows, for anyone interested.

Scott

CREATE TABLE colors (
    clrs_pkey  serial PRIMARY KEY,
    first_name text UNIQUE DEFAULT NULL,
    fav_color  text DEFAULT NULL
);

CREATE TABLE mono (
    mono_pkey  serial PRIMARY KEY,
    clrs_pkey  integer REFERENCES colors,
    monochrome text DEFAULT NULL
);

CREATE RULE mono_rule AS ON UPDATE TO colors
    WHERE NEW.fav_color = 'blanco' OR NEW.fav_color = 'negro'
    DO INSERT INTO mono (clrs_pkey, monochrome)
       VALUES (NEW.clrs_pkey, 'mono');

INSERT INTO colors (first_name, fav_color) VALUES ('carmen', 'verde');
INSERT INTO colors (first_name, fav_color) VALUES ('carlos', 'amarillo');
INSERT INTO colors (first_name, fav_color) VALUES ('rocio', 'rojo');
INSERT INTO colors (first_name, fav_color) VALUES ('miranda', 'rosa');

UPDATE ONLY colors SET fav_color = 'blanco' WHERE clrs_pkey = 1;
UPDATE ONLY colors SET fav_color = 'negro' WHERE clrs_pkey = 3;

test=# SELECT * FROM mono;
 mono_pkey | clrs_pkey | monochrome
-----------+-----------+------------
         1 |         1 | mono
         2 |         3 | mono
(2 rows)

On Apr 27, 2005, at 1:20 PM, Scott Frankel wrote:
> I am trying to construct a rule that performs an UPDATE if specific conditions are met in an INSERT statement. Limiting UPDATE's SET action to just the new row by testing for the new primary key is failing for some reason. Yet if I eliminate the test, all rows in the table are updated. The actual rule I'm building must handle several OR clauses in its conditional test, so I've included that in the following sample. The output I would've expected would have both the Carlos and Miranda inserts yielding their favorite color, azul.
Re: [GENERAL] Clustering
On Thu, Apr 28, 2005 at 11:59:59AM -0700, Patrick Haugen wrote:
> Pseudo code: When database xyz table companyname is updated, update the same table and rows in database abc; ... What is a good clustering technique for PostgreSQL?

Slony-I.

Karsten
-- GPG key ID E4071346 @ wwwkeys.pgp.net E167 67FD A291 2BEA 73BD 4537 78B9 A9F9 E407 1346
Re: [GENERAL] Returning a RECORD, not SETOF RECORD
Michael, thanks for your reply on this.

CREATE FUNCTION xyz(int, int) RETURNS RECORD AS '...'
CREATE TABLE abc(a int, b int);

Now I want to call my xyz function once for each row in abc, and I want my RECORD to be (x int, y int, z timestamptz). How do I write that query? I.e., where do I specify my RECORD definition? Is it possible at all? Ideally I'd like to write something like this:

SELECT xyz(a, b) AS (x int, y int, z timestamptz) FROM abc;

but that yields a syntax error.

> What version of PostgreSQL are you using

The latest and greatest from CVS.

> and could the function return a specific composite type instead of RECORD?

This is not for a specific use-case. I want to provide rich functionality in PL/Java, but I don't understand how the actual RECORD type is determined in cases where you don't use the function in a FROM clause, where it only makes sense (to me at least) to use a function returning SETOF RECORD. Wouldn't it make sense to be able to define a record in the projection part of a query, similar to what I was attempting with my SELECT? Has this been discussed, or is it just considered not very useful?

Regards, Thomas Hallgren
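For comparison, the composite-type route Michael alludes to does work, because the result type is fixed by the declaration rather than supplied at the call site (a sketch; all names are invented):

```sql
-- Declare the result shape once, as a named composite type
CREATE TYPE xyz_result AS (x int, y int, z timestamptz);

CREATE FUNCTION xyz(int, int) RETURNS xyz_result AS '
    SELECT $1, $2, now();
' LANGUAGE sql;

-- No column definition list needed in the projection:
SELECT (xyz(a, b)).x, (xyz(a, b)).z FROM abc;

-- ...and it also works in a FROM clause:
SELECT * FROM xyz(1, 2);
```

The limitation Thomas is running into is that a bare RETURNS RECORD function has no such declared shape, so outside a FROM clause with a column definition list the parser has nothing to resolve the record against.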
Re: [GENERAL] Clustering
Karsten Hilbert wrote:
> On Thu, Apr 28, 2005 at 11:59:59AM -0700, Patrick Haugen wrote:
>> Pseudo code: When database xyz table companyname is updated, update the same table and rows in database abc; ... What is a good clustering technique for PostgreSQL?
> Slony-I.

Slony-I isn't clustering. They would have to look at pgCluster or something like that.

Sincerely, Joshua D. Drake

-- Your PostgreSQL solutions company - Command Prompt, Inc. 1.800.492.2240 PostgreSQL Replication, Consulting, Custom Programming, 24x7 support Managed Services, Shared and Dedicated Hosting Co-Authors: plPHP, plPerlNG - http://www.commandprompt.com/
Re: [GENERAL] temp tables ORACLE/PGSQL
We only do the connection refresh in the Lightning Admin query editor, for testing our SQL that uses temp tables; refreshing the connection eliminates the "OID does not exist" problems. We put everything into stored procs and use them from Delphi applications, and still never use EXECUTE in our procs.

I think the key is that we use ON COMMIT DELETE ROWS when we create our temp tables, and we don't ever drop them until the client disconnects. Using ON COMMIT DELETE ROWS just re-uses the same temp table over and over again. You only need to use EXECUTE if you drop the temp table and recreate it multiple times in the same session.

Here is an example of how we use temp tables. (NOTE: I ADDED the temp table to this function; you don't actually need it in this case because the cursor can just be referenced against the select statement.) This can be called over and over again from the same connection because the temp table is not dropped, it's just re-used. When the client does disconnect, it gets dropped. You could also truncate the temp table at the end of the function if you wanted, so you don't have data sitting in the table until the next function call.

CREATE or REPLACE FUNCTION admin.spadm_get_status_list() RETURNS pg_catalog.refcursor AS
$BODY$
DECLARE
    return_cursor refcursor;
BEGIN
    return_cursor = 'return_cursor';

    IF iftableexists('temp_get_status_list') THEN
        RAISE NOTICE 'temp table already exists';
    ELSE
        CREATE TEMP TABLE temp_get_status_list (
            STATUS_ID   SMALLINT,
            DESCRIPTION VARCHAR(50)
        ) WITHOUT OIDS ON COMMIT DELETE ROWS;
    END IF;

    INSERT INTO temp_get_status_list (STATUS_ID, DESCRIPTION)
    SELECT status_id, description
    FROM admin.admin_status
    ORDER BY 1;

    OPEN return_cursor FOR SELECT * FROM temp_get_status_list;
    RETURN return_cursor;
END;
$BODY$ LANGUAGE 'plpgsql' VOLATILE;

> If you refresh the connection each time you run a query, maybe you don't need to use EXECUTE with temporary tables in stored procedures, but who does that in a production database application?
> Most people want to re-use connections for performance reasons. Dennis

-- Tony Caduto, AM Software Design, Home of PG Lightning Admin for PostgreSQL 8.x, http://www.amsoftwaredesign.com
[GENERAL] info on strange error messages on postgresql
Hello, I have these messages in my 7.4.7 database log...

TopMemoryContext: 87494704 total in 10676 blocks; 179400 free (61 chunks); 87315304 used
TopTransactionContext: 57344 total in 3 blocks; 648 free (5 chunks); 56696 used
DeferredTriggerXact: 0 total in 0 blocks; 0 free (0 chunks); 0 used
SPI Plan: 3072 total in 2 blocks; 64 free (0 chunks); 3008 used
SPI Plan: 3072 total in 2 blocks; 64 free (0 chunks); 3008 used
SPI Plan: 3072 total in 2 blocks; 64 free (0 chunks); 3008 used
SPI Plan: 3072 total in 2 blocks; 64 free (0 chunks); 3008 used
SPI Plan: 3072 total in 2 blocks; 64 free (0 chunks); 3008 used
SPI Plan: 3072 total in 2 blocks; 64 free (0 chunks); 3008 used
SPI Plan: 3072 total in 2 blocks; 64 free (0 chunks); 3008 used
MessageContext: 8192 total in 1 blocks; 7248 free (1 chunks); 944 used
PortalMemory: 8192 total in 1 blocks; 8040 free (0 chunks); 152 used
PortalHeapMemory: 3072 total in 2 blocks; 1136 free (13 chunks); 1936 used
CacheMemoryContext: 421519360 total in 60 blocks; 1116168 free (769 chunks); 420403192 used

followed by about 16GB of the following type of entries...

pg_temp_1486707494: 2048 total in 1 blocks; 768 free (0 chunks); 1280 used

First, are these two sets of messages related, and second, what would cause either or both sets of messages to appear in the log? Appreciate any help :)

-Joe
Re: [GENERAL] Clustering
On Thu, 2005-04-28 at 14:54, Joshua D. Drake wrote:
> Slony-I isn't clustering. They would have to look at pgCluster or something like that.

True. But given the description of what the OP wanted to do, Slony sounded like what he was after.
[GENERAL] GUITools update
I am starting work on a comparison of all the PostgreSQL design/administration tools.

The problem: many PostgreSQL-compatible administration/design tools exist, but we lack a full comparison and review of each. This impedes migration and slows down the development process (time spent researching tools instead of doing DB work). Devs/hackers would have an easier time if they could read over a guide and know from the start which tool will meet their own requirements. So, I want to remedy that problem.

Before I analyze these tools, I would appreciate input from the community on what sort of information is important for PostgreSQL users to know about these design/administration tools. Sample list (just a few of my own):

- operating system
- OSS
- SQL syntax checking
- price
- user permissions management
- visual relationship modeling

I prepared a list of things myself, but was worried about incompleteness. If you send back an email with the criteria that you find important, it would be much appreciated.

regards, Brennan (age on #postgresql irc.freenode.net)
Re: [GENERAL] Clustering
> Slony-I isn't clustering. They would have to look at pgCluster or something like that.

Sure, but it seemed to fit their description of what they wanted to do.

Karsten
-- GPG key ID E4071346 @ wwwkeys.pgp.net E167 67FD A291 2BEA 73BD 4537 78B9 A9F9 E407 1346
[GENERAL] oid or schema name of current plpgsql function
Is there a way to know the OID of the current plpgsql function from inside the function? What I really need is to extract the name of the schema where the function is stored. -- matko
[GENERAL] Postgresql and VBA vs Python
Tony, thanks. Your posts are very helpful to me.

I never said anything about designing an application that depends upon bringing over a million rows. I just want to play around with PostgreSQL on Windows, at home, and see what happens if I load in a million rows. Supposedly, from my readings, there are PG databases of 4 terabytes. They must have a lot of rows in some of those tables. I just want to see if PostgreSQL will function OK when I load something of some size.

As for Rekall and Access, they offer an out-of-the-box table browse when you click on a table. But it will blow up if the table has 250,000 rows, in the case of Rekall. I am just curious to see what happens when I do certain things, and my curiosity paid off, because I learned something about a design limitation in Rekall.

If I were designing some kind of application for real life, of course I would want to limit the amount of info that a user brings across in a query, and not give them a million rows in a combobox. Give me some credit; I am not an utter moron. But there are in theory databases with tables that have millions of rows, and in theory one must occasionally issue an SQL command to be performed on the server side that will process all those rows.

I am certain that PostgreSQL is rock solid on Unix/Linux, but who knows what the Windows version is like, unless I play around with it. And I should be able to write a program in Liberty Basic, for example, to produce large scripts of test data, to load in with isql. And I should be able to export those large test tables too. And it should work as advertised, and not blow up. But I won't know unless I try it out.
Actually, what I am trying to do right now is learn enough of either VBA (if I am going to work with Access) or Python (if I am going to work with Rekall) so that I can create a form which would deal with several tables, where my code would control what gets fetched and displayed, limited to as small a number of rows as possible to accomplish whatever is necessary. I just want to learn a little SQL before I die, having come from an RPG/COBOL/dBase world a long time ago.

As much as I dislike the evil empire of Microsoft, there are tons of books and forums on VBA and Access, which is a distinct advantage. I had downloaded a version of Python for Windows, and all it seemed to do was run inside a small DOS-like window. Now I see that Rekall uses Python but runs in Windows, not in a DOS terminal window. This weekend, I want to examine the Help tutorial that comes with Rekall, to see if I can find out how to write my own code to perform SQL queries and process the rows returned. There does not seem to be a lot in the search engines as far as tutorials or code examples; a lot more seems to be available for VBA.

I do notice that Rekall will allow you to save the forms to a folder. And when I look at those forms in an editor, it seems to be Python code. So perhaps I can learn something from studying the code in those forms which Rekall produces. I guess I would like it better if I could gain some proficiency in Rekall and Python, since it would free me from the M$ world of Access and VBA. The two companies that produce Rekall seem a bit odd to me right now, and raise some doubts in my mind about the future of the product, but I like what I see in the demo so far. I would just like to find a free, or low-cost, BASIC-like language that would let me easily paint GUI forms in some IDE, and then easily add my own script to interact with the PostgreSQL database.
Such a development environment, and such tutorials, should be very common and easily accessible, since an SQL engine is not much use unless you can access it. Probably PowerBasic or RealBasic would do what I want, and I would feel comfortable with them, but they are rather expensive. Anyway, thanks everyone for your tips and advice.
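As an aside, generating a large test file for bulk loading really does take only a few lines. A sketch in Python (file name and column layout are invented) that emits a tab-separated file suitable for PostgreSQL's COPY command:

```python
import random

# Write 1,000,000 rows of tab-separated test data for a
# hypothetical table (id integer, name text, amount numeric).
rows = 1_000_000
with open("testdata.tsv", "w") as f:
    for i in range(rows):
        name = "name_%06d" % i
        amount = "%.2f" % (random.random() * 1000)
        f.write("%d\t%s\t%s\n" % (i, name, amount))
```

The result can then be loaded server-side with `COPY mytable FROM '/path/to/testdata.tsv';`, or client-side with psql's `\copy`, which is far faster than a script of a million INSERT statements.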
[GENERAL] Increasing statistics results in worse estimates
I'm having a problem with ANALYZE in 8.0.2 that is really bothering me. I have a table that contains address, city and state, with about 7.8M rows in it. On that table I have two non-unique indexes, for city and zipcode respectively. This table was loaded in a manner such that it is essentially sorted by zipcode. Sorting by zipcode implies that there should also be a pretty strong correlation on the city column.

With a statistics target of 50 on city, I'm getting good estimates of row counts for arbitrary cities (ie: explain select count(*) from addresses where city = 'DALLAS' estimates 474k rows out of 500k actual) but a poor estimate of the correlation (0.13 according to pg_stats). This seems to be causing the planner to pick a table scan for select count(*) from test_zipc where city = 'DALLAS' (est 474k rows) vs picking an index scan for select count(*) from test_zipc where zipcode like '75%' (est 2m rows).

Increasing the statistics target on the city column and re-analyzing the table seems to make the correlation estimate better (shows about 0.5), but the row count estimates are thrown off by 2 orders of magnitude in some cases. Repeating the above queries, I get a row estimate of 8k for select count(*) from test_zipc where city = 'DALLAS' and a row estimate of 6m for select count(*) from test_zipc where zipcode like '75%'. In this case the planner picked an index scan for the city = 'X' condition, but for what I feel are the wrong reasons, because it underestimated the row count. Re-analyzing the table multiple times always shows about an 8k estimate.

Is my data set that problematic? Has anyone seen similar behavior? Any suggestions on how to improve these stats?

Regards, Shelby Cain
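For anyone following along, the knob being adjusted here is per-column; the table and column names below echo the post:

```sql
-- Raise the sample size used for the city column, then re-gather stats
ALTER TABLE test_zipc ALTER COLUMN city SET STATISTICS 1000;
ANALYZE test_zipc;

-- Inspect what the planner now believes about those columns
SELECT attname, n_distinct, correlation
FROM pg_stats
WHERE tablename = 'test_zipc'
  AND attname IN ('city', 'zipcode');
```

Comparing n_distinct and correlation in pg_stats before and after each statistics-target change is the quickest way to see which estimate the larger sample is helping and which it is hurting.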
Re: [GENERAL] info on strange error messages on postgresql
Joe Maldonado [EMAIL PROTECTED] writes:
> I have these messages on my 7.4.7 database log... TopMemoryContext: 87494704 total in 10676 blocks; 179400 free (61 chunks); 87315304 used TopTransactionContext: 57344 total in 3 blocks; 648 free (5 chunks); 56696 used DeferredTriggerXact: 0 total in 0 blocks; 0 free (0 chunks); 0 used ...

What's at the top and bottom of that? PG prints out a memory stats dump like this when it runs out of memory. The dump itself isn't much use to anyone but a developer; what you want to look into is what triggered it. The error message appearing just after (or maybe just before, I forget) should be relevant.

> followed by about 16GB of the following type of entries... pg_temp_1486707494: 2048 total in 1 blocks; 768 free (0 chunks); 1280 used

Could you have been trying to vacuum a ridiculously large number of tables, or some such?

regards, tom lane
Re: [GENERAL] Postgresql and VBA vs Python
Like I said before, get the personal/standard version of Delphi 6, 7 or 2005; it's 99.99 and you can connect to Postgres with it using third-party component sets like Zeos. (2005 may not be available yet.) The licence varies by version; I think Delphi 6 personal allows commercial development if you buy the shrink-wrapped version, while the others may only allow non-commercial development.

There is another option you may not be aware of: http://www.icsharpcode.com/OpenSource/SD/Default.aspx Totally free, rivals Visual Studio .NET, and has a visual form designer. Check it out. You can use it with the Npgsql native .NET data provider for PostgreSQL. Not as easy for developing DB apps as Delphi, but a better choice than Access.

You can create a databound form in Delphi just as easily as in Access, and you don't need to learn VBA or Python, just Pascal. Delphi is Pascal (Object Pascal, to be exact) and is easy to learn and intuitive. It was taught at most high schools/universities before Java came around.

As far as the testing goes, I am just trying to get across the fact that the type of testing you are doing is kind of a waste of time on a client/server type system. Like I said before, it's fine for Access or dBase. Bringing back huge result sets (250,000 rows) can bring Oracle to its knees if the conditions are right. Access is probably doing some kind of internal paging; it's not really bringing back 250,000 rows. Rekall, on the other hand, probably attempts to bring them all back, and that's why it hangs. It can take a huge amount of time to bring back that many records across the wire, especially on a simple home network that is probably running at 10 Mbits per second in half-duplex mode. Using a massive number of rows to test your dev tools is flawed because one tool may be doing paging and the other is not.
> I would just like to find a free, or low-cost, BASIC-like language that would let me easily paint GUI forms in some IDE, and then easily add my own script to interact with the PostgreSQL database. Such a development environment, and such tutorials, should be very common and easily accessible, since an SQL engine is not much use unless you can access it. Probably PowerBasic or RealBasic would do what I want, and I would feel comfortable with them, but they are rather expensive.
Re: [GENERAL] oid or schema name of current plpgsql function
On Thu, Apr 28, 2005 at 10:37:47PM +0200, Matko Andjelinic wrote:
> Is there a way to know the OID of the current plpgsql function from inside the function?

I'm not aware of a way to get the current function's OID in PL/pgSQL, but you can do it in C.

> What I really need is to extract the name of the schema where the function is stored.

Why do you need to know the function's schema? What are you trying to do?

-- Michael Fuhr http://www.fuhr.org/~mfuhr/
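As a partial workaround, if the function's own name is known inside its body and is unambiguous (no overloads, no duplicates across schemas), the schema can be looked up from the system catalogs. A sketch (the function name is a hypothetical placeholder):

```sql
-- Find the schema holding a function, given its name
SELECT n.nspname
FROM pg_proc p
JOIN pg_namespace n ON n.oid = p.pronamespace
WHERE p.proname = 'my_function';
```

This breaks down as soon as the same name exists in more than one schema, which is presumably why Michael asks what the schema is actually needed for.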
Re: [GENERAL] Postgresql and VBA vs Python
On Apr 28, 2005, at 5:04 PM, [EMAIL PROTECTED] wrote:
> I would just like to find a free, or low-cost, BASIC-like language that would let me easily paint GUI forms in some IDE, and then easily add my own script to interact with the PostgreSQL database. Such a development environment, and such tutorials, should be very common and easily accessible, since an SQL engine is not much use unless you can access it. Probably PowerBasic or RealBasic would do what I want, and I would feel comfortable with them, but they are rather expensive.

Maybe consider doing web-based forms. PHP is free, and you can get the source of phpPgAdmin, which shows how to do just about anything you might want, including forms-based editing and query browsing.

John DeSoi, Ph.D. http://pgedit.com/ Power Tools for PostgreSQL