Re: [SQL] need to join successive log entries into one
George Young wrote:
> On Wed, 14 Mar 2001, you wrote:
> > On 3/14/01, 5:24:12 PM, George Young <[EMAIL PROTECTED]> wrote regarding [SQL]
> > I need to join successive log entries into one:
> > > I have a table like:
> > >
> > >  run | seq |        start        |        done
> > > 1415 | 261 | 2001-01-29 12:36:55 | 2001-02-07 13:02:38
> > > 1415 | 263 | 2001-02-14 07:40:04 | 2001-02-15 16:05:04
> > > 1415 | 264 | 2001-02-16 16:05:05 | 2001-03-08 16:34:03
> > > 1415 | 265 | 2001-03-08 16:34:04 |
> >
> > Try:
> >
> >   select run, min(start), max(done) from mytable group by run;
>
> Alas, this combines *all* entries for a given run, not just those that
> are immediately adjacent (in time, or by 'seq' number)...

I thought it was complicated, then I thought it was easy. Looks like I was
right the first time. I was thinking that some huge self-join might do it,
but I can't see how to go beyond a run of two adjacent entries.

The only thing I can think of is to add a "batch" column and build a trigger
to set it as data is inserted. I'm assuming the entries are put in one at a
time and in order. That way you just need to look at the last entry to
determine whether the new one is in the same batch.

Any use?

- Richard Huxton
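To make the trigger idea concrete, here is a rough sketch. It assumes an int4
"batch" column has been added to the table; the table name "mytable", the
"batch_seq" sequence, and the one-day adjacency rule are all assumptions to be
adjusted, and it presumes rows arrive one at a time in seq order:

CREATE SEQUENCE batch_seq;

CREATE FUNCTION set_batch() RETURNS opaque AS '
DECLARE
    prev record;
BEGIN
    -- Look at the latest existing entry for this run.
    SELECT INTO prev batch, done
        FROM mytable
        WHERE run = NEW.run
        ORDER BY seq DESC
        LIMIT 1;

    IF NOT FOUND THEN
        NEW.batch := nextval(''batch_seq'');
    ELSE
        -- Same batch only if the new entry starts within a day of the
        -- previous one finishing (adjust this rule to taste).
        IF prev.done IS NOT NULL
           AND NEW.start <= prev.done + ''1 day''::interval THEN
            NEW.batch := prev.batch;
        ELSE
            NEW.batch := nextval(''batch_seq'');
        END IF;
    END IF;
    RETURN NEW;
END;
' LANGUAGE 'plpgsql';

CREATE TRIGGER mytable_batch BEFORE INSERT ON mytable
    FOR EACH ROW EXECUTE PROCEDURE set_batch();

Collapsing adjacent entries then becomes:

select run, batch, min(start), max(done) from mytable group by run, batch;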
[SQL] Some questions about PLpgSql
Hi, all

By using PL/pgSQL, is it possible to

1) check if a specific table exists?
2) check if an INSERT/UPDATE/DELETE has completed successfully?

Are there any more documents or samples for PL/pgSQL besides the User's Guide
and "PostgreSQL Introduction & Concepts"?

THANK YOU

JACK
[SQL] How to cast text to cidr/inet
Hello,

I am battling to get a text field into a cidr or inet type field. The
documentation describes casts like CAST(val AS text), but casts to inet and
cidr do not seem to exist. The following does work:

dhcp=# insert into dhcp_subnet(ipaddress) values('139.122.172/18');
INSERT 46172 1
dhcp=# select * from dhcp_subnet;
 id |   ipaddress    | description
----+----------------+-------------
  2 | 139.122.128/18 |
(1 row)

But copying from table to table (text to cidr) does not work. Perhaps someone
can give me some clues. Additional information (tables, queries, results) is
below.

Best regards,

Roelof Sondaar

Some additional information:

dhcp=# \d dhcp_subnet
                             Table "dhcp_subnet"
  Attribute  |  Type   |                       Modifier
-------------+---------+------------------------------------------------------
 id          | integer | not null default nextval('dhcp_subnet_id_seq'::text)
 ipaddress   | cidr    |
 description | text    |
Indices: dhcp_subnet_id_key,
         dhcp_subnet_ipaddress_idx

dhcp=# \d dhcp_subnet_hp
     Table "dhcp_subnet_hp"
  Attribute  | Type | Modifier
-------------+------+----------
 ipaddress   | text |
 description | text |

dhcp=# INSERT INTO dhcp_subnet(
dhcp(#   ipaddress,
dhcp(#   description)
dhcp-# SELECT
dhcp-#   ipaddress,
dhcp-#   description
dhcp-# FROM dhcp_subnet_hp;
ERROR:  Attribute 'ipaddress' is of type 'cidr' but expression is of type 'text'
        You will need to rewrite or cast the expression

dhcp=# INSERT INTO dhcp_subnet(
dhcp(#   ipaddress,
dhcp(#   description)
dhcp-# SELECT
dhcp-#   CAST(ipaddress AS cidr),
dhcp-#   description
dhcp-# FROM dhcp_subnet_hp;
ERROR:  Cannot cast type 'text' to 'cidr'
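One possible workaround - an untested sketch that leans on PL/pgSQL converting
values through the destination type's input routine on assignment, so treat it
as an assumption to verify on your version - is a small conversion function:

CREATE FUNCTION text_to_cidr(text) RETURNS cidr AS '
DECLARE
    result cidr;
BEGIN
    -- The assignment pushes the text value through the cidr input routine;
    -- an invalid address should raise an error here.
    result := $1;
    RETURN result;
END;
' LANGUAGE 'plpgsql';

INSERT INTO dhcp_subnet (ipaddress, description)
    SELECT text_to_cidr(ipaddress), description
    FROM dhcp_subnet_hp;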
[SQL] Help with UPDATE syntax
Howdy,

I am porting a bit of code from Oracle to PostgreSQL 7.1 and am getting stuck
on an UPDATE statement. I am NOT an SQL expert, merely a beginner swimming
hard, so any help will be greatly appreciated.

The specific query looks like this:

begin transaction
update
  user_group_map map
set
  user_id = 4
where
  user_id = 9
  not exists (
    select * from
      user_group_map
    where
      user_id = 4 and
      group_id = map.group_id and
      role = map.role
  )
commit

There are other updates taking place during the transaction, but this is the
one for which I can't figure out the PostgreSQL equivalent.

I've tried this:

update user_group_map
set user_id = 4
from user_group_map map
where user_id = 9
  and not exists (
    select * from user_group_map ug2
    where user_id = 4
      and ug2.group_id = map.group_id
      and ug2.role = map.role);

for the update replacement, but get an error:

NOTICE:  current transaction is aborted, queries ignored until end of transaction block

As noted earlier, any guidance will be most appreciated.

Thanks,

Jeff.
RE: [SQL] Some questions about PLpgSql
Hello Jack,

To check whether a table exists you can query pg_tables:

select tablename from pg_tables;

For instance:

dhcp=# select count(*) from pg_tables where tablename='dhcp_subnet';
 count
-------
     1
(1 row)

dhcp=# select count(*) from pg_tables where tablename='dhcp_subnetaa';
 count
-------
     0
(1 row)

To see the system tables use \dS.

As for the second question, I don't really know what you would like to see.
Perhaps a trigger could do this after the insert/update/delete?

Best regards,

Roelof

> -----Original Message-----
> From: datactrl [SMTP:[EMAIL PROTECTED]]
> Sent: 15 March 2001 12:04
> To:   [EMAIL PROTECTED]
> Subject:      [SQL] Some questions about PLpgSql
>
> Hi, all
>
> By using 'PLpgsql', is that possible to
>
> 1) check if a specific table exists?
> 2) check if an INSERT/UPDATE/DELETE has done successfully?
>
> Is there some more documents or samples for PLpgsql except USER GUIDE and
> PostgreSQL Introduction & concept?
>
> THANK YOU
>
> JACK
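Wrapped up for use inside PL/pgSQL, the same pg_tables check might look like
this (a sketch; the function name is invented):

CREATE FUNCTION table_exists(name) RETURNS bool AS '
DECLARE
    n int4;
BEGIN
    SELECT INTO n count(*) FROM pg_tables WHERE tablename = $1;
    RETURN n > 0;
END;
' LANGUAGE 'plpgsql';

-- e.g.  select table_exists('dhcp_subnet');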
Re: [SQL] Help with UPDATE syntax
Jeff Putsch <[EMAIL PROTECTED]> writes:
> update
>   user_group_map map

Postgres doesn't allow UPDATE to use an alias for the target table (SQL92
doesn't either). Get rid of the alias name "map", and write the full table
name "user_group_map" in the places where "map" is used in the WHERE clause.

                        regards, tom lane
[SQL] RE: Help with UPDATE syntax
Try eliminating the clause " from user_group_map map". It does not belong in
the update. Here is the fully rewritten statement:

update
  user_group_map
set
  user_id = 4
where
  user_id = 9 and
  not exists (
    select * from
      user_group_map ug2
    where
      user_id = 4 and
      ug2.group_id = map.group_id and
      ug2.role = map.role);
Re: [SQL] PL/PgSQL and NULL
Jie Liang wrote:
>
> I think that is a bug in plpgsql: when passing a NULL into a
> plpgsql-defined function, it treats the other arguments as NULL also.
> You can use RAISE NOTICE in your function to watch this buggy thing
> (see following).

You're blaming the wrong code for it. It's an insufficiency in the pre-7.1
function manager, not a bug in PL/pgSQL's handler.


Jan

> On Sun, 11 Mar 2001, Andrew Perrin wrote:
>
> > Greetings-
> >
> > I'm trying to write what should be a simple function that returns the
> > minimum of two integers. The complication is that when one of the two
> > integers is NULL, it should return the other; and when both are NULL, it
> > should return NULL. Here's what I've written:
> >
> > CREATE FUNCTION min(int4, int4)
> > RETURNS int4
> > AS 'BEGIN
>     raise notice ''arg1 is % arg2 is %'', $1, $2;   -- debugging
> > IF $1 ISNULL
> > THEN
> >   RETURN $2;
> > ELSE
> >   IF $2 ISNULL
> >   THEN
> >     RETURN $1;
> >   ELSE
> >     IF $1 > $2
> >     THEN
> >       RETURN $2;
> >     ELSE
> >       RETURN $1;
> >     END IF;
> >   END IF;
> > END IF;
> > END;'
> > LANGUAGE 'plpgsql';
> >
> > and here's what I get:
> >
> > fgdata=# select min(10, NULL);
> >  min
> > -----
> >
> > (1 row)
> >
> > so it looks like, for whatever reason, it's returning NULL when it
> > should be returning 10. Can anyone offer advice?
> >
> > Thanks.
Re: [SQL] Some questions about PLpgSql
Jack,

> 2) check if an INSERT/UPDATE/DELETE has done successfully?

This happens automatically within a PL/pgSQL function. If the INSERT/UPDATE
errors out, the function automatically halts. Actually, this kind of
behaviour can be annoying the other way around (sometimes one doesn't care
about the error).

Now, testing how many rows were inserted/updated/deleted ... that I'm not
sure about. It would be nice to have a ROWS_AFFECTED value returned from a
data-manipulation query in PL/pgSQL, but I don't believe that has been
implemented.

> Is there some more documents or samples for PLpgsql except USER GUIDE and
> PostgreSQL Introduction & concept?

No. Some of us user-types are working on expanded documentation; until then,
you'll just have to muddle through.

-Josh Berkus
Re: [SQL] List Concatination
Richard,

I wanted to thank you for the concatenation suggestion ... on testing, a
custom aggregate *was* faster than procedural concatenation ... much faster.

> But - if you don't care about the order of contacts you can define an
> aggregate function:
>
> create aggregate catenate(sfunc1=textcat, basetype=text, stype1=text, initcond1='');
>
> Then group by client and catenate(firstname || ' ' || lastname)
>
> You'll want to read the CREATE AGGREGATE page in the reference manual,
> replace textcat with your own routine that adds a comma, and you'll need
> a finalisation routine to strip the final trailing comma.

Actually, if you use a sub-select as your data source, you can control both
the appearance and the order of the catenated values:

SELECT client, catenate(con_name)
FROM (SELECT client, (firstname || ' ' || lastname || ', ') AS con_name
      FROM contacts ORDER BY lastname ASC) AS con_list
GROUP BY client;

This seems to work pretty well.

> Note that this is probably not a good idea - the ordering of the
> contacts will not be well-defined. When I asked about this Tom Lane was
> quite surprised that it worked, so no guarantees about long-term
> suitability.

Hmmm ... this feature is very, very useful now that I know how to use it.
I'd love to see it hang around for future versions of PgSQL. Tom?

-Josh Berkus
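For the comma-separated variant Richard mentions, one way to avoid a
finalisation routine is to fold the separator into the state function itself.
A sketch with invented names, untested on 7.0:

CREATE FUNCTION comma_cat(text, text) RETURNS text AS '
    SELECT CASE WHEN $1 = '''' THEN $2
                ELSE $1 || '', '' || $2
           END;
' LANGUAGE 'sql';

CREATE AGGREGATE catenate_csv (
    basetype  = text,
    sfunc1    = comma_cat,
    stype1    = text,
    initcond1 = ''
);

Because the state starts as an empty string and comma_cat skips the separator
for the first value, no trailing comma is left to strip.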
Re: [SQL] Some questions about PLpgSql
Josh Berkus <[EMAIL PROTECTED]> writes:
> Now, testing how many rows were inserted/updated/deleted ... that I'm
> not sure about. It would be nice to have a ROWS_AFFECTED returned from
> a data manipulation query in PL/pgSQL, but I don't believe that that has
> been implemented.

It has as of 7.1 --- see GET DIAGNOSTICS foo = ROW_COUNT.

                        regards, tom lane
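For example, in 7.1 a PL/pgSQL function can pick the count up like this (the
table, column, and function names here are hypothetical):

CREATE FUNCTION purge_old_notes() RETURNS int4 AS '
DECLARE
    n int4;
BEGIN
    DELETE FROM notes WHERE note_date < now() - ''30 days''::interval;
    GET DIAGNOSTICS n = ROW_COUNT;   -- number of rows the DELETE affected
    RETURN n;
END;
' LANGUAGE 'plpgsql';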
Re: [SQL] List Concatination
Josh Berkus <[EMAIL PROTECTED]> writes:
>> Note that this is probably not a good idea - the ordering of the
>> contacts will not be well-defined. When I asked about this Tom Lane was
>> quite surprised that it worked, so no guarantees about long-term
>> suitability.

> Hmmm ... this feature is very, very, useful now that I know how to use
> it. I'd love to see it hang around for future versions of PgSQL. Tom?

As I said before, user-defined aggregates are certainly not going away.
I don't recall the conversation Richard was thinking of, so I'm not sure
exactly what was at issue there. Most likely it was some fine point,
not the basic existence of the feature.

                        regards, tom lane
[SQL] VACUUM kills Index Scans ?!
I'm confused over two questions involving PostgreSQL index scans. I'm using
Linux Mandrake 7.2 on Intel, and PostgreSQL 7.0.3. If someone can explain
what's going on, I'd greatly appreciate it.

-----

1) When I create an empty table and then immediately create an index on a
column, I get /index scans/ when searching on that column. But when I then
run VACUUM, the same search becomes a /sequential scan/. Shouldn't it still
be an index scan? What's going on here?

test1=> create table t1 (a varchar(64), b int);
CREATE
test1=> create index t1_a_ndx on t1 (a);
CREATE
test1=> explain select * from t1 where a='asd';
NOTICE:  QUERY PLAN:

Index Scan using t1_a_ndx on t1  (cost=0.00..8.14 rows=10 width=16)

EXPLAIN
test1=> vacuum;
NOTICE:  Skipping "pg_type" --- only table owner can VACUUM it
(a bunch of these)
VACUUM
test1=> explain select * from t1 where a='asd';
NOTICE:  QUERY PLAN:

Seq Scan on t1  (cost=0.00..0.00 rows=1 width=16)

EXPLAIN
test1=>

-----

2) If I already have some data in a table and I create an index on a column,
why don't subsequent searches then change from sequential scans to index
scans?

test1=> create table t2 (a varchar(64), b int);
CREATE
test1=> insert into t2 values ('a', 1);
INSERT 41255 1
test1=> insert into t2 values ('b', 2);
INSERT 41256 1
test1=> insert into t2 values ('c', 3);
INSERT 41257 1
test1=> explain select * from t2 where a='a';
NOTICE:  QUERY PLAN:

Seq Scan on t2  (cost=0.00..22.50 rows=10 width=16)

EXPLAIN
test1=> create index t2_a_ndx on t2 (a);
CREATE
test1=> explain select * from t2 where a='a';
NOTICE:  QUERY PLAN:

Seq Scan on t2  (cost=0.00..1.04 rows=1 width=16)

EXPLAIN
test1=>
Re: [SQL] VACUUM kills Index Scans ?!
As additional information that I neglected to include in the first message:
after both tests, the indices clearly still exist and can be seen with the
following commands:

\d t1
\d t2
\di
\d t1_a_ndx
\d t2_a_ndx

The output shows what's expected, e.g.:

test1=> \di
        List of relations
   Name   | Type  | Owner
----------+-------+-------
 t1_a_ndx | index | gutz
 t2_a_ndx | index | gutz
(2 rows)

test1=> \d t2_a_ndx
   Index "t2_a_ndx"
 Attribute |   Type
-----------+-----------
 a         | varchar()
btree
[SQL] Re: Normalization is always good?
Jamu,

> I have been working with SQL databases for about a year and a half, and
> in that time I've come to the conclusion that it is not always
> preferable to normalize to a high level (like 3rd level). In some
> cases, depending on the application, I've found that normalizing to a
> ridiculous degree has shot me in the foot in terms of the
> responsibility of my code to manage what happens in the database.

There I'd agree with you ... there is the question of *how rigorous* you want
your normalization to be. I tend to hover around the simplest normal form
most of the time. And I do a few things (such as polymorphic sub-tables) that
would give Fabian Pascal fits :-)

However, there is (in my mind) no question as to whether a database should be
normalized, just how much effort is spent on normalization as opposed to
other considerations (UI, performance, development time).

> I've found referential integrity makes normalization a less expensive
> notion as it absolves you of the task of keeping data current in all
> your tables. I've also found that working in an object oriented
> language makes normalization less expensive.

What do you use? I've given up on OODB interfaces myself, as I did not find
them helpful, but that's mostly because I was working in MS-land.

> What are the views of the people on this list re: Normalization
> Guidelines? Anybody know of any good web sites that talk about this
> in depth?

Yup. http://www.databasedebunking.com/ Dig through the archives.

-Josh Berkus
Re: [SQL] VACUUM kills Index Scans ?!
Gerald Gutierrez <[EMAIL PROTECTED]> writes:
> 1) When I create a empty table, and then immediate create an index on a
> column, I can get /index scans/ when searching on that column. But when I
> then run VACUUM, the same search becomes a /sequential scan/.

VACUUM updates the planner's statistics so that it knows the table is empty
(note the change in cost estimates). The default numbers for a
never-yet-vacuumed table (10 disk blocks and 1000 rows, IIRC) just happen to
be large enough to cause an indexscan. Put in a reasonable amount of data and
then repeat the VACUUM, and it'll go back to index scan.

> 2) If I already have some data in a table and I create an index on a
> column, why doesn't subsequent searches then change from sequential scans
> to index scans?

Again, you haven't got enough data to justify an indexscan. You need at least
several disk blocks worth of data before an indexscan can possibly save more
table I/O than it costs to read the index.

There is an undocumented little factoid here: CREATE INDEX will update (some
of) the planner stats, but only if it finds some data in the table. CREATE
INDEX on an empty table leaves the initial default numbers alone. This may be
contributing to your confusion, but it was deemed necessary so that the
common sequence

        CREATE TABLE
        CREATE INDEX
        load data

wouldn't leave the planner believing the table to be completely empty (and
hence generating abysmally bad plans if you had actually loaded quite a bit
of data). On the other hand, the preferred bulk-load method is

        CREATE TABLE
        load data
        CREATE INDEX

and this leaves the planner's stats set correctly.

                        regards, tom lane
Re: [SQL] VACUUM kills Index Scans ?!
On Thu, 15 Mar 2001, Gerald Gutierrez wrote:

> 1) When I create a empty table, and then immediate create an index on a
> column, I can get /index scans/ when searching on that column. But when I
> then run VACUUM, the same search becomes a /sequential scan/. Shouldn't it
> still be an index scan? What's going on here?
>
> 2) If I already have some data in a table and I create an index on a
> column, why doesn't subsequent searches then change from sequential scans
> to index scans?

With a small number of rows, a sequential scan requires fewer reads/seeks
from the filesystem. It's not always correct for the optimizer to choose an
index even if one is there. If you put in lots of rows with distinct values
and vacuum analyze (you want to do that rather than just vacuum) and do a
comparison, it should use the index; with only a few rows, the seq scan is
probably better.
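To make the recommended sequence concrete, a minimal sketch (the table name
and data file are hypothetical):

CREATE TABLE t3 (a varchar(64), b int);
COPY t3 FROM '/tmp/t3.dat';           -- bulk-load the data first
CREATE INDEX t3_a_ndx ON t3 (a);      -- picks up stats from the loaded data
VACUUM ANALYZE t3;                    -- refresh the planner statistics
EXPLAIN SELECT * FROM t3 WHERE a = 'asd';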
Re: [SQL] VACUUM kills Index Scans ?!
> There is an undocumented little factoid here: CREATE INDEX will update
> (some of) the planner stats, but only if it finds some data in the
> table. CREATE INDEX on an empty table leaves the initial default
> numbers alone. This may be contributing to your confusion, but it was
> deemed necessary ...

I understand now; it makes sense.

I'll be using a number of tables that are initially very small, perhaps 5 or
10 records, but I expect the tables will grow very quickly to several tens
(or hundreds) of thousands of records. It seems reasonable to me that such a
table should be set up to use an index scan right from the beginning, so that
the index scan becomes more useful as the table grows. Thus, the correct
sequence for me is probably:

>       CREATE TABLE
>       CREATE INDEX
>       load data

I also understand that VACUUM and VACUUM ANALYZE take a significant amount of
time and lock the tables they work on. Does VACUUM lock and unlock each table
as it goes (i.e. only lock a table while it's being processed), or does it do
something else? If the locks are held for long stretches I'll have to shut
down my application to avoid connections timing out and JDBC exceptions being
thrown.

Thanks for your help :)
Re: [SQL] PL/pgSQL "compilation error"
Josh Berkus wrote:
> This brings up an important point. We have a medium-large user base for
> PL/pgSQL out there, but it appears that Jan Wieck no longer has time to
> develop the language ... nor should he be the sole developer. How do
> we find more developers to expand & extend PL/pgSQL? I'd be willing to
> contribute financially, but can't afford to actually hire somebody on my
> own (and don't have the moxie to do the programming!).

Let me put it this way: PL/pgSQL never was something I wrote because I wanted
to have it. Funny as it is, after I created PL/Tcl there were just as many
questions about something independent of other packages as you can think of,
and I simply wanted to get rid of these questions.

This doesn't mean I'm uninterested in PL/pgSQL getting better. On the docs,
it's just that I've been its initial developer, not its best user, so this
kind of cookbook is better written by someone else - and I see it has found
its way.

I definitely have plans to improve it after 7.1. This is a brief list of
things I want to accomplish:

1. Support for tuple and setof-tuple returns for full stored procedures.

2. Enhancement of the SPI manager, and using it for real CURSOR support and
   for cursor handling from within FOR etc. loops.

3. Further enhancement of the SPI manager to detect usage of temporary
   objects (like temp tables) in a query, and suppress plan preparation on
   those statements for better support of dynamic SQL.

You're right that there could be more PL/pgSQL handler developers. There have
been contributions in the past, and that there have been only a few might be
my fault for not telling enough about the internals of the handler - it's an
ugly piece of code anyway.

Let me finish my move to Virginia and see y'all in the 7.2 cycle.


Jan
Re: [SQL] pl/Perl
Jie Liang wrote:
> Tom,
>
> 1. Where or how can I get pltcl.so? I have not found this file anywhere
>    in my source except a pltcl.c.
> 2. Does installation work the same as plpgsql? i.e.
>
>    CREATE FUNCTION pltcl_call_handler () RETURNS OPAQUE AS
>        '/usr/local/pgsql/lib/pltcl.so' LANGUAGE 'C';
>    CREATE TRUSTED PROCEDURAL LANGUAGE 'pltcl'
>        HANDLER pltcl_call_handler
>        LANCOMPILER 'PL/pgtcl';

No! Please switch to using the createlang shell script.

> 3. Where can I find more doc about pltcl?

Chapter 11 of the PostgreSQL user's documentation.


Jan
Re: [SQL] FETCH ... INTO in PL/pgSQL
Josh Berkus wrote:
> Tom, Bruce,
>
> Is there any way to make use of the contents of a cursor in a PL/pgSQL
> function? FETCH ... INTO isn't supported, according to the docs. Can I
> use the dot notation, or something similar?

PL/pgSQL doesn't support cursors at all. That's basically a fault of the SPI
manager, because every internal DB access from within PL/pgSQL is done via
SPI, and that beast doesn't support cursors.

As said, I want to work on that for 7.2.


Jan
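Until then, the usual substitute is to loop over the query directly with
FOR ... IN SELECT instead of FETCHing from a cursor. A minimal sketch with
invented table, column, and function names:

CREATE FUNCTION sum_amounts(int4) RETURNS numeric AS '
DECLARE
    rec   record;
    total numeric := 0;
BEGIN
    -- Iterate over the result set one row at a time.
    FOR rec IN SELECT amount FROM invoices WHERE client_id = $1 LOOP
        total := total + rec.amount;
    END LOOP;
    RETURN total;
END;
' LANGUAGE 'plpgsql';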
[SQL] Contribute to the PL/pgSQL CookBook !!
I have started the "PL/pgSQL CookBook" project. The goal is to create a
cookbook of PL/pgSQL functions that will be catalogued and made available for
others to use and learn from.

Come to http://www.brasileiro.net/postgres and contribute your own PL/pgSQL
(or PL/Tcl, PL/Perl) function or trigger! This will help many Postgres users,
both novice and experienced, to use its procedural languages.

The CookBook has several sections, and you can add your own. No login is
required, just come and contribute.

Once again: http://www.brasileiro.net/postgres

Oh, did I mention that you get your own "PostgreSQL Powered" button when you
contribute a function/trigger? :)

-Roberto
Re: [SQL] RE: Help with UPDATE syntax
From: "Michael Davis" <[EMAIL PROTECTED]>

> Try eliminating the statement " from user_group_map map". It does not
> belong in the update. Here is the fully rewritten statement:
>
> update
>   user_group_map
> set
>   user_id = 4
> where
>   user_id = 9 and
>   not exists (
>     select * from
>       user_group_map ug2
>     where
>       user_id = 4 and
>       ug2.group_id = map.group_id and
                       ^^^
>       ug2.role = map.role);
                   ^^^

I take it these are actually "user_group_map"?

- Richard Huxton
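For reference, a sketch of the statement with the alias gone and the
correlation done against the full outer table name, as Tom suggested earlier
in the thread:

update user_group_map
set user_id = 4
where user_id = 9
  and not exists (
      select * from user_group_map ug2
      where ug2.user_id = 4
        and ug2.group_id = user_group_map.group_id
        and ug2.role = user_group_map.role);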
[SQL] Re: Normalization is always good?
Hi Josh,

> normal form, most of the time. And I do a few things (such as
> polymorphic sub-tables) that would give Fabian Pascal fits :-)

Polymorphic sub-tables? =) Do you mean using, for example, one user table
that stores different types of users and has some fields specific to only
some kinds of users and other fields that are shared by all users?

> However, there is (in my mind) no question as to whether a database
> should be normalized, just how much effort is spent on normalization as
> opposed to other considerations (UI, performance, development time).

I agree. Is anyone here familiar with "Extreme Programming"? They suggest
that you should architect your software only for the functionality you need
NOW. While some aspects of extreme programming have proven themselves true in
the face of my skepticism, I see it as my moral obligation as a programmer to
write programs that are a) as portable as possible and b) extensible. With
that in mind, I like normalization because it increases the likelihood of
scalability being relatively painless.

> What do you use? I've given up on OODB interfaces myself, as I did not
> find them helpful, but that's mostly because I was working in MS-land.

I've been working in Java with JDBC... I've actually shot myself in the foot
a bit by not making my application object-oriented enough. In hindsight, and
if time permitted more refactoring, I would have written custom data objects
to manage all interactions with the database, so that any management of data
happened in one and only one place and anything that needed to access those
objects would do so through the common interface.

> Yup. http://www.databasedebunking.com/ Dig through the archives.

Couldn't find the server... I wonder if it's still there.

Regards,

Jamu.
Re: [SQL] Re: Normalization is always good?
On Friday, 16 March 2001 00:28, [EMAIL PROTECTED] wrote:
> > Yup. http://www.databasedebunking.com/ Dig through the archives.
>
> Couldn't find the server... I wonder if it's still there.

Try this one: http://www.firstsql.com/dbdebunk/

Regards,

Christof
Re: [SQL] Re: Normalization is always good?
Jamu,

> Polymorphic sub-tables? =) Do you mean using, for example, one user
> table that stores different types of users and has some fields
> specific to only some kinds of users and other fields that are shared
> by all users?

Nope. What you described is a "not normalized" table. ;-)

One project contains a Notes table. However, I want it to be configurable as
to which records notes apply to ... that is, my application has an admin
option that lets you configure the system so that there are or aren't Notes
for Client Addresses, for example.

To do this, I created a sequence that is generally available
('universal_sq'), and set 7-9 tables to automatically take a value from the
'universal_sq' (column 'usq') for each record. The Notes table thus
effectively has the 'usq' as a foreign key to 2 to 7 other tables, depending
on user configuration. The wonderful PostgreSQL sequence handler makes this
possible. Thanks, Tom & team!

I've done this with a couple of other tables. Not relationally correct, but I
can't find anything wrong with the idea.

> Couldn't find the server... I wonder if it's still there.

I believe that Christof posted the correct URL.

-Josh Berkus
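A rough sketch of that layout (all table and column names below are invented
for illustration):

CREATE SEQUENCE universal_sq;

CREATE TABLE clients (
    usq  int4 DEFAULT nextval('universal_sq') NOT NULL,
    name text
);

CREATE TABLE client_addresses (
    usq     int4 DEFAULT nextval('universal_sq') NOT NULL,
    client  int4,
    address text
);

-- A note can attach to a row of any participating table through its usq.
CREATE TABLE notes (
    usq  int4 NOT NULL,   -- matches clients.usq, client_addresses.usq, ...
    note text
);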
Re: [SQL] List Concatination
On 3/15/01, 5:02:37 PM, Tom Lane <[EMAIL PROTECTED]> wrote regarding
Re: [SQL] List Concatination:

> Josh Berkus <[EMAIL PROTECTED]> writes:
> >> Note that this is probably not a good idea - the ordering of the
> >> contacts will not be well-defined. When I asked about this Tom Lane was
> >> quite surprised that it worked, so no guarantees about long-term
> >> suitability.

Sorry - the issue was to do with the ordering of the concatenation, not the
user-defined aggregates (IIRC - it's getting late here). I do remember I got
different orders when selecting and updating. In my case it didn't matter,
and I'm guessing that if the order reverses in your case when 8.x is released
it's not the end of the world either. If you were joining words in a
sentence, obviously it would matter (unless you were on usenet ;-)

> > Hmmm ... this feature is very, very, useful now that I know how to use
> > it. I'd love to see it hang around for future versions of PgSQL. Tom?

> As I said before, user-defined aggregates are certainly not going away.
> I don't recall the conversation Richard was thinking of, so I'm not sure
> exactly what was at issue there. Most likely it was some fine point,
> not the basic existence of the feature.

> regards, tom lane

No, not at all - and sorry for any confusion.

- Richard Huxton