[GENERAL] Test, ignore ...
Just testing ... ignore ...

Marc G. Fournier
Hub.Org Networking Services (http://www.hub.org)
Email: [EMAIL PROTECTED]  Yahoo!: yscrappy  ICQ: 7615664

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to choose an index scan if your joining column's datatypes do not match
Re: [GENERAL] shorter way to get new value of serial?
Harald Armin Massa wrote:

> I have a table:
>
>   CREATE TABLE rechner (
>     id_r int4 NOT NULL DEFAULT nextval('rechner_id_r_seq'::regclass),
>     name text,
>     CONSTRAINT rechner_pkey PRIMARY KEY (id_r)
>   );
>   CREATE UNIQUE INDEX rechner_name ON rechner USING btree (name);
>
> and want to have the existing or new id of 'newobjekt':
>
>   CREATE OR REPLACE FUNCTION getrechnerid(text) RETURNS int4 AS '
>   DECLARE
>     result int4;
>   BEGIN
>     select id_r from rechner where name=upper($1) into result;
>     IF not FOUND THEN
>       select nextval(''swcheck_id_check_seq'') into result;
>       insert into rechner (id_r, name) values (result, upper($1));

Why don't you just use the default? Then you don't need the nextval() call in the function, since the sequence assigns id_r for you:

  CREATE OR REPLACE FUNCTION getrechnerid(text) RETURNS int4 AS '
  DECLARE
    result int4;
  BEGIN
    select id_r from rechner where name=upper($1) into result;
    IF not FOUND THEN
      insert into rechner (name) values (upper($1));
    END IF;
    ...

--
Alban Hertroys
[EMAIL PROTECTED]
magproductions b.v.
T: ++31(0)534346874  F: ++31(0)534346876
I: www.magproductions.nl
A: Postbus 416, 7500 AK Enschede
//Showing your Vision to the World//
Re: [GENERAL] shorter way to get new value of serial?
Alban Hertroys wrote:
> Why don't you just use the default? You could entirely do away with the 'result' variable that way.

Because I need to know the new id of that computer. The other way would be "insert if not exists" and then again "select ... where name = ..." -- one statement more, and without a stored procedure even one more client-server-client round trip, which is expensive in WAN situations :(

Harald
--
GHUM Harald Massa
persuasion python postgresql
Harald Armin Massa
Reinsburgstraße 202b
70197 Stuttgart
0173/9409607
[GENERAL] TSearch2 Questions
A few stupid questions:

Where to get the latest version? Is http://www.sai.msu.su/~megera/postgres/gist/tsearch/V2/ a dead site, and are the latest versions always silently distributed with PG inside the contrib dir?

How can I find out what version of TSearch2 I'm running? Is there active development?

Are the patches provided on the site above for backup still needed, or are they already included in the versions that ship with 8.0.x? If not, why not? =) Or the better question: are any of the patches listed under Development included in the version that ships with recent PG versions?

I'm playing a bit with it ATM. Indexing one gigabyte of plain text worked well; with 10 GB I still have some performance problems. I read the TSearch Tuning Guide and will start optimizing some things, but is it a realistic goal to index ~90 GB of plain text and get sub-second response times on hardware that ~4000 EUR can buy?

Thanks in advance
--
Regards, Hannes Dorbath
[GENERAL] Testing again, ignore ...
Sorry folks, just trying to debug a bug ..

Marc G. Fournier
Hub.Org Networking Services (http://www.hub.org)
Email: [EMAIL PROTECTED]  Yahoo!: yscrappy  ICQ: 7615664
Re: [GENERAL] Postgres Disconnection problems
On Fri, 2005-11-18 at 18:51, Otto Blomqvist wrote:
> Hi, We are using PostgresDAC 2.2.1 and PostgreSQL 8.0.2 on i386-redhat-linux-gnu, compiled by GCC i386-redhat-linux-gcc (GCC) 4.0.0 20050412 (Red Hat 4.0.0-0.42). I perform a simple test as follows.
> 1. I connect to the database, which is located on a LAN.
> 2. I simulate Internet problems by unplugging the Ethernet cable of the client. There is no PSQL activity going on.
> 3. Plug the ethernet cable back in.
> 4. Run some sql, which gives me a Postgres SQL error -1, Server closed connection unexpectedly.
> So far so good. Problem is that the postmaster does not detect this connection as dead and keeps it idle for an unknown amount of time. This is a real problem for us because we use persistent connections to authorize access to a custom built 68030 based system, which has a limited number of slots that we can use. By not releasing a dead connection we are also holding that 68030 slot busy.

The real issue here is that TCP keepalive keeps the connection alive for a long time. The default on linux boxen is 2+ hours. In our production environment, we dropped tcp_keepalive to 5 minutes. There are three settings in the linux kernel:

  net.ipv4.tcp_keepalive_intvl = 75
  net.ipv4.tcp_keepalive_probes = 9
  net.ipv4.tcp_keepalive_time = 500

The keepalive_time tells the kernel how long to wait to ping a connection after it's gone quiet. The probes and intvl tell it how many times to try to re-awaken it and how long to wait between each try. So, with the settings shown above, a dead connection will wait 8.3 minutes, then execute a ping (not really a ping, it's on a lower level than a real ping would be), and then will wait 1.25 minutes between probes and do it 9 times. So, in this scenario, an idle connection left from a network failure will take just under 20 minutes to clear.
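The timeout arithmetic above is easy to check; a minimal sketch (the sysctl values are the ones quoted in the message, not recommendations):

```python
# Linux keepalive model: after `time_s` seconds of silence the kernel
# starts probing; it sends up to `probes` probes spaced `intvl_s`
# seconds apart before declaring the connection dead.
def keepalive_timeout(time_s: int, intvl_s: int, probes: int) -> int:
    """Seconds from last activity until a dead connection is dropped."""
    return time_s + probes * intvl_s

total = keepalive_timeout(500, 75, 9)   # settings shown above
print(total, total / 60)  # 1175 seconds, ~19.6 minutes: "just under 20"
```

With the Linux defaults (time=7200, intvl=75, probes=9) the same formula gives 7875 seconds, a bit over two hours, matching the "2+ hours" figure.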
[GENERAL] SQL Help: Multiple LEFT OUTER JOINs
I need to generate a table of teachers, and the count of classes they taught in the past and are scheduled to teach in the future:

 id | last_name | total | future_class_count | past_class_count
----+-----------+-------+--------------------+------------------
  3 | Smith     |    12 |                  3 |                9
  8 | Jones     |     0 |                  0 |                0

table person: id, last_name
table class: id, class_time
table role: id, role_name  -- for limiting to a type of teacher

-- link tables
table person_role: person references person, role references role
-- This table ties a person to a class, thus making them an instructor
table instructors: person references person, class references class

I can easily get instructors and the total count of their classes:

SELECT person.id AS id, last_name, count(instructors.class)
  FROM person LEFT OUTER JOIN instructors ON (person.id = instructors.person),
       person_role
 WHERE person_role.person = person.id
   AND person_role.role = 3  -- limit to this type of teacher
 GROUP BY id, last_name;

Here's where I'm missing something. Trying to do an outer join to bring in the class row with its class_time column:

SELECT person.id AS id, last_name,
       count(instructors.class) as total,
       sum (CASE WHEN class_time > now() THEN 1 ELSE 0 END) as future_class_count,
       sum (CASE WHEN class_time <= now() THEN 1 ELSE 0 END) as past_class_count
  FROM (person LEFT OUTER JOIN instructors ON (person.id = instructors.person)) t
       LEFT OUTER JOIN class on ( t.class = class.id ),
       person_role
 WHERE person_role.person = person.id
   AND person_role.role = 3
 GROUP BY person.id, last_name;

-- Bill Moseley [EMAIL PROTECTED]
Re: [GENERAL] TSearch2 Questions
On Mon, 21 Nov 2005, Hannes Dorbath wrote:
> A few stupid questions: Where to get the latest version? Is http://www.sai.msu.su/~megera/postgres/gist/tsearch/V2/ a dead site and the latest versions are always silently distributed with PG inside the contrib dir?

You should always use the tsearch2 distributed with postgresql. We keep our version for testing purposes. Sometimes we publish backpatches (from CVS HEAD) for stable releases.

> How can I find out what version of TSearch2 I'm running? Is there active development?

It's actively developed, see CVS HEAD commits. The main problem being attacked is full UTF-8 support. Also, we plan some other improvements. See http://www.sai.msu.su/~megera/oddmuse/index.cgi/todo

> Are the patches provided on the site above for backup still needed, or are they already included in the versions that ship with 8.0.x? If not, why not? =) Or the better question, are any of those patches listed under Development included in the version that ships with recent PG versions?

All patches are already applied. Right now, there are no patches you should be aware of. We plan to release a UTF-8 support patch for the 8.1 release.

> I'm playing a bit with it ATM. Indexing one Gigabyte of plain text worked well, with 10 GB I yet have some performance problems. I read the TSearch Tuning Guide and will start optimizing some things, but is it a realistic goal to index ~90GB plain text and get sub-second response times on hardware that ~4000 EUR can buy?

What's ATM? As for the sub-second response times, it'd depend very much on your data and queries. It'd certainly be possible with our tsearch daemon, which we postponed because we're inclined to implement inverted indices first and then build the fts index on top of the inverted index. But this is a long-term plan.
Regards, Oleg
_____
Oleg Bartunov, sci.researcher, hostmaster of AstroNet,
Sternberg Astronomical Institute, Moscow University (Russia)
Internet: oleg@sai.msu.su, http://www.sai.msu.su/~megera/
phone: +007(095)939-16-83, +007(095)939-23-83
[GENERAL] Multi-parameter aggregates.
I'm interested in defining a covariance aggregate function. (As a refresher, remember that covariance is a little bit like variance, but is between two variables: cov(X,Y) = <XY> - <X><Y>, where the angle brackets in this case denote taking the average. Variance is a special case when X and Y are the same.) But the whole user-defined aggregate thing is tough to get a handle on. I'm not even sure if the direction I'm heading in below will actually work, but as far as I got, I'm stuck on not knowing how to define an aggregate that takes more than one variable as its argument, so its use in SQL would look like, e.g.,

  SELECT company, COVAR(year, sales) FROM annual_sales GROUP BY company;

Here is what I tried, and I'm wondering if the team here can help me make this work (or tell me that the whole approach makes no sense, etc.). All the DDL executes without syntax errors until the last function definition, and the problem is with the BASETYPE=numeric line, i.e.,

  ERROR: AggregateCreate: function covariance_accum(numeric[], numeric) does not exist

CREATE TYPE public._covariance AS (n integer, x numeric, y numeric, xy numeric);

CREATE OR REPLACE FUNCTION public.covariance_accum(_covariance, numeric, numeric)
RETURNS _covariance AS '
BEGIN
  _covariance.n := _covariance.n+1;
  _covariance.x := _covariance.x+$2;
  _covariance.y := _covariance.y+$3;
  _covariance.xy := _covariance.xy+($2*$3);
END;
' LANGUAGE 'plpgsql' IMMUTABLE STRICT;

GRANT EXECUTE ON FUNCTION public.covariance_accum(_covariance, numeric, numeric) TO public;
COMMENT ON FUNCTION public.covariance_accum(_covariance, numeric, numeric) IS 'covariance aggregate transition function';

-- Need to include a check for N equal zero data points
CREATE OR REPLACE FUNCTION public.numeric_covariance(_covariance)
RETURNS numeric AS '
BEGIN
  RETURN (_covariance.xy/_covariance.n) - (_covariance.x/_covariance.n)*(_covariance.y/_covariance.n);
END;
' LANGUAGE 'plpgsql' IMMUTABLE STRICT;

GRANT EXECUTE ON FUNCTION public.numeric_covariance(_covariance) TO public;
COMMENT ON FUNCTION public.numeric_covariance(_covariance) IS 'covariance aggregate final function';

CREATE AGGREGATE public.covariance(
  BASETYPE=numeric,
  SFUNC=covariance_accum,
  STYPE=numeric[],
  FINALFUNC=numeric_covariance,
  INITCOND='{0,0,0,0}'
);

/* --I also tried this:
CREATE AGGREGATE covariance(
  BASETYPE='numeric, numeric',
  SFUNC=covariance_accum,
  STYPE=numeric[],
  FINALFUNC=numeric_covariance,
  INITCOND='{0,0,0,0}'
);
-- to no avail. */

Regards, Berend
Re: [GENERAL] SQL Help: Multiple LEFT OUTER JOINs
On Mon, Nov 21, 2005 at 05:40:10 -0800, Bill Moseley [EMAIL PROTECTED] wrote:
> Here's where I'm missing something. Trying to do an outer join to bring in the class row with its class_time column:

You don't say exactly why you are having a problem with this, but I think you would be better off doing an inner join between instructors and class and then do an outer join of that result to person.

> SELECT person.id AS id, last_name,
>        count(instructors.class) as total,
>        sum (CASE WHEN class_time > now() THEN 1 ELSE 0 END) as future_class_count,
>        sum (CASE WHEN class_time <= now() THEN 1 ELSE 0 END) as past_class_count
>   FROM (person LEFT OUTER JOIN instructors ON (person.id = instructors.person)) t
>        LEFT OUTER JOIN class on ( t.class = class.id ),
>        person_role
>  WHERE person_role.person = person.id
>    AND person_role.role = 3
>  GROUP BY person.id, last_name;
> -- Bill Moseley [EMAIL PROTECTED]
Re: [GENERAL] TSearch2 Questions
On Mon, Nov 21, 2005 at 16:50:00 +0300, Oleg Bartunov oleg@sai.msu.su wrote:
> On Mon, 21 Nov 2005, Hannes Dorbath wrote:
>> I'm playing a bit with it ATM.
> What's ATM ?

I believe in this context 'ATM' is an acronym for 'at the moment', which has little impact on the meaning of the paragraph.
Re: [GENERAL] Multi-parameter aggregates.
Berend Tober wrote:
> I'm stuck on not knowing how to define an aggregate that takes more than one variable as its argument

This is currently unsupported.

Joe
Re: [GENERAL] Multi-parameter aggregates.
Hello Berend,

have you considered using pl/r? http://www.joeconway.com/plr/ I think R has a covariance function. http://www.r-project.org/

Best regards

Hakan Kocaman
Software-Developer
digame.de GmbH
Richard-Byrd-Str. 4-8
50829 Köln
Tel.: +49 (0) 221 59 68 88 31
Fax: +49 (0) 221 59 68 88 98
Email: [EMAIL PROTECTED]

-----Original Message-----
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of Berend Tober
Sent: Monday, November 21, 2005 5:03 PM
To: pgsql-general@postgresql.org
Subject: [GENERAL] Multi-parameter aggregates.

> I'm interested in defining a covariance aggregate function. (As a refresher, remember that covariance is a little bit like variance, but is between two variables: cov(X,Y) = <XY> - <X><Y>, where the angle brackets in this case denote taking the average. Variance is a special case when X and Y are the same.) But the whole user-defined aggregate thing is tough to get a handle on. I'm not even sure if the direction I'm heading in below will actually work, but as far as I got, I'm stuck on not knowing how to define an aggregate that takes more than one variable as its argument, so its use in SQL would look like, e.g.,
>
>   SELECT company, COVAR(year, sales) FROM annual_sales GROUP BY company;
>
> Here is what I tried, and I'm wondering if the team here can help me make this work (or tell me that the whole approach makes no sense, etc.).
Re: [GENERAL] to create a database...
Can you describe the versatility you're looking for? Why does your application need to run on multiple DBs? Is the user going to be selecting a backend database? I'd consider how compelling your reasons for supporting multiple databases are, because it will be quite a bit of work even with compatibility layers, and you could get bogged down in supporting different trees of SQL before you even finish the app.

If you want a full database abstraction layer, you might want to consider NHibernate (http://www.nhibernate.org/). Whether or not you use something that extensive, you will certainly need to take care to abstract your database completely away from your program's operation. NHibernate has some rudimentary support for creating schemas from classes, but AFAIK it's not that complete. Functions and stored procedures tend to be very specific to one RDBMS, so they work against that kind of abstraction, and I'd avoid using them unless you absolutely need to.

Another alternative is PHP and ADODB, which works like the ADO recordset interface in VB6. This really works best for SELECTs, though it works OK for INSERTs. It has a number of nice optimization features as well.

Yet another option for multiple database support that is a lot more work, but totally portable, is to expose your data via Web Services rather than making direct database calls. That's useful when you want to be able to hook different things up to your app, and frees you from reliance on a single language or interface for the UI. The work to do it is generally only justified if you've got a big application with lots of people interacting with it.

Not to be repetitive, but I'd really think carefully about how much you need support for multiple RDBMS in your application. Hope some of the above helps.

Cheers,
Eric

P.M wrote:
> Hi, I'm creating a Visual Basic.net application and I would like to make my application versatile.
> For that, it should be able to create the same DB on different RDBMS like PostgreSQL, MySQL or MSSQL. I would like to know what is the best way to create a DB from scratch... (DB, tables, users, functions, ...)
> - to use a SQL text file
> - to create a new function inside the template1 db, and to execute it after,
> - ...
> please, help me. thanks a lot, Maileen
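Eric's "abstract the database completely away" advice can be sketched in a few lines; this is an illustrative Python sketch (the module layout, table, and function names are hypothetical, not from the thread): one module owns all SQL, so switching RDBMS means touching only that module.

```python
# data_access.py: the only module that contains SQL. The application
# calls these functions and never builds SQL itself. sqlite3 stands in
# here for any DB-API 2.0 driver; a PostgreSQL or MySQL driver would
# slot in behind the same function signatures.
import sqlite3

def connect(path: str = ":memory:"):
    """Open a connection and make sure the schema exists."""
    conn = sqlite3.connect(path)
    conn.execute(
        "CREATE TABLE IF NOT EXISTS customer ("
        "id INTEGER PRIMARY KEY, name TEXT UNIQUE NOT NULL)"
    )
    return conn

def get_or_create_customer(conn, name: str) -> int:
    """Return the existing or new id for `name`."""
    row = conn.execute(
        "SELECT id FROM customer WHERE name = ?", (name,)
    ).fetchone()
    if row:
        return row[0]
    cur = conn.execute("INSERT INTO customer (name) VALUES (?)", (name,))
    conn.commit()
    return cur.lastrowid
```

Even the parameter placeholder syntax differs per driver ('?' vs '%s'), which is exactly the kind of detail worth confining to one module.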
Re: [GENERAL] Multi-parameter aggregates.
Joe Conway wrote:
> Berend Tober wrote:
>> I'm stuck on not knowing how to define an aggregate that takes more than one variable as its argument

But I guess it _could_ take an array as argument, maybe even a record (PostgreSQL's name for what's called a structure in C). You'd use it with the following syntax (array):

  select covariance(array[x, y]) from t;

or, for a record parameter (I didn't test it - I just guess that it should work...):

  select covariance((x, y)) from t;

greetings, Florian Pflug
Re: [GENERAL] Multi-parameter aggregates.
Berend Tober [EMAIL PROTECTED] writes:
> I'm stuck on not knowing how to define an aggregate that takes more than one variable as its argument,

That's because there isn't any way to do that. It's on the TODO list I believe. In the meantime, you could possibly kluge it up by defining a composite type to be the aggregate's argument, and calling it like

  SELECT covariance(row(...)) FROM ...

regards, tom lane
Re: [GENERAL] Multi-parameter aggregates.
On Mon, Nov 21, 2005 at 11:03:22AM -0500, Berend Tober wrote:
> I'm interested in defining a covariance aggregate function. I'm stuck on not knowing how to define an aggregate that takes more than one variable as its argument, so its use in SQL would look like, e.g.,
>
>   SELECT company, COVAR(year, sales) FROM annual_sales GROUP BY company;

I think aggregates must take a single value, so the above won't work as written. However, in PostgreSQL 8.0 or later you could define the aggregate's base type to be a composite type and do something like

  SELECT company, COVAR(ROW(year, sales)) FROM annual_sales GROUP BY company;

You'd create the aggregate like this:

  CREATE TYPE covar_state AS (...);
  CREATE TYPE xypair AS (x numeric, y numeric);

  CREATE FUNCTION covar_accum(covar_state, xypair) RETURNS covar_state AS ...
  CREATE FUNCTION covar_final(covar_state) RETURNS numeric AS ...

  CREATE AGGREGATE covar (
    BASETYPE = xypair,
    SFUNC = covar_accum,
    FINALFUNC = covar_final,
    STYPE = covar_state,
    INITCOND = '(...)'
  );

-- Michael Fuhr
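The SFUNC/FINALFUNC split Michael sketches can be modelled outside the database. A minimal Python model of the same state arithmetic (function names are illustrative), using the cov(X,Y) = <XY> - <X><Y> formula from the original post:

```python
# State carried between rows: (count, sum(x), sum(y), sum(x*y)),
# mirroring the SFUNC/FINALFUNC split of a PostgreSQL aggregate.
def covar_accum(state, x, y):
    """Transition function: fold one (x, y) row into the state."""
    n, sx, sy, sxy = state
    return (n + 1, sx + x, sy + y, sxy + x * y)

def covar_final(state):
    """Final function: cov(X,Y) = <XY> - <X><Y> (population covariance)."""
    n, sx, sy, sxy = state
    if n == 0:            # the "N equal zero" check the original post notes
        return None
    return sxy / n - (sx / n) * (sy / n)

state = (0, 0.0, 0.0, 0.0)        # INITCOND '{0,0,0,0}'
for x, y in [(1, 2), (2, 4), (3, 6)]:
    state = covar_accum(state, x, y)
print(covar_final(state))  # 4/3, since Y = 2X and population var(X) = 2/3
```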
[GENERAL] Rule appears not to fire on insert w/ except
First version of this I sent this morning did not appear to go through. Please disregard if you received the first one.

-- Original Message --

Hello All,

We have finally tracked down a bug in our application to a rewrite rule on a table. In essence, the rewrite rule in question logs any inserts to another table. This works correctly in all cases except where an EXCEPT clause is used in the insert statement. In that case, the rows are inserted into the primary table as expected, but the rule either does not fire, or fires in such a way that nothing is placed in the changes table.

We have deduced that this is either a PG bug, or some side effect of the SQL rewrite which is causing behavior we didn't expect. I'm sure it's probably the latter, but we are scratching our heads as to why that might be. Can one of the gurus help us understand what is going on in this case?

As a side note, is there a way to see the final SQL after all rewrite rules have been processed? It might help us understand what is going on.

This is in pg 8.0.4 (8.0.3 as well). I would be interested to know if the same behavior happens in 8.1.
Thanks, -Chris

Test Cases

-- Not working case, insert w/ except clause
begin;
create table test1(id serial, data text);
create table test2(id serial, data text);
insert into test2(data) values('abc');
create table test_que(row_id integer);
CREATE OR REPLACE RULE debug_rule AS ON INSERT TO test1
  do INSERT INTO test_que (row_id) VALUES (new.id);
insert into test1 select id,data from test2 except select id,data from test1;
-- We will have 1 row inserted
select * from test1;
-- But no rows here even though a row was placed in test1
select * from test_que;
rollback;

-- Working case, insert is identical w/o the except clause
begin;
create table test1(id serial, data text);
create table test2(id serial, data text);
insert into test2(data) values('abc');
create table test_que(row_id integer);
CREATE OR REPLACE RULE debug_rule AS ON INSERT TO test1
  do INSERT INTO test_que (row_id) VALUES (new.id);
-- Insert w/o except clause
insert into test1 select id,data from test2;
-- Now we have 1 row in test1
select * from test1;
-- And this time the rewrite rule triggered and
-- we have 1 row in test_que
select * from test_que;
rollback;
Re: [GENERAL] Rule appears not to fire on insert w/ except
Chris Kratz [EMAIL PROTECTED] writes:
> CREATE OR REPLACE RULE debug_rule AS ON INSERT TO test1
>   do INSERT INTO test_que (row_id) VALUES (new.id);

You would be a whole lot better off doing this with a trigger.

> insert into test1 select id,data from test2 except select id,data from test1;

I believe the problem with this is that the rule re-evaluates the command to generate the new rows, and by that point the rows have already been inserted into test1, and thus disappear from the result of the EXCEPT ...

regards, tom lane
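Tom's explanation can be illustrated with plain sets (a toy model of the rewrite, not PostgreSQL internals): the rule's copy of the query runs after the primary insert has already changed test1, so the EXCEPT comes up empty.

```python
# Toy model of the rule rewrite: the original INSERT ... EXCEPT query
# is duplicated for the rule action, and the duplicate runs AFTER the
# primary insert has modified test1.
test1 = set()
test2 = {(1, "abc")}

def except_query():
    """Model of: select id,data from test2 EXCEPT select id,data from test1."""
    return test2 - test1

# Primary insert: evaluate the query, add the resulting rows to test1.
inserted = except_query()
test1 |= inserted
print(sorted(inserted))   # one row lands in test1

# Rule action: re-evaluate the SAME query, but test1 has changed.
logged = except_query()
print(sorted(logged))     # empty -- nothing reaches test_que
```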
[GENERAL] About not to see insertion result INSERT 0 1
Greetings,

I tried to run insert commands from a .sql file. For example, in file a.sql there are 100,000 lines like

  insert into t1 values(... ...);
  insert into t1 values(... ...);
  insert into t1 values(... ...);
  ... ...

I do not want to see INSERT 0 1 displayed 100,000 times by PostgreSQL. Is there a way to hide the INSERT 0 1 output generated by PostgreSQL?

Thanks a lot!

Emi
Re: [GENERAL] About not to see insertion result INSERT 0 1
Emi Lu [EMAIL PROTECTED] writes:
> I do not want to see the 100,000 times INSERT 0 1 displayed by postgreSQL. Is there a way to hide the output INSERT 0 1 generated by postgresql ?

psql -q I think ... read the man page.

regards, tom lane
Re: [GENERAL] About not to see insertion result INSERT 0 1
Hi,

On Mon, 21 Nov 2005, Emi Lu wrote:
> Is there a way to hide the output INSERT 0 1 generated by postgresql ?

Use psql with -q.

Regards,
--
Devrim GUNDUZ
Kivi Bilişim Teknolojileri - http://www.kivi.com.tr
devrim~gunduz.org, devrim~PostgreSQL.org, devrim.gunduz~linux.org.tr
http://www.gunduz.org
Re: [GENERAL] About not to see insertion result INSERT 0 1
Thanks a lot! That is exactly what I want.

- Emi
Re: [GENERAL] Multi-parameter aggregates.
Hakan Kocaman wrote:
> have you considered using pl/r? http://www.joeconway.com/plr/ I think R has a covariance function. http://www.r-project.org/

That would be, like, the easy way. Thanks!

Berend
Re: [GENERAL] Multi-parameter aggregates.
Michael Fuhr wrote:
> On Mon, Nov 21, 2005 at 11:03:22AM -0500, Berend Tober wrote:
>> I'm interested in defining a covariance aggregate function.
> I think aggregates must take a single value, so the above won't work as written. However, in PostgreSQL 8.0 or later you could define the aggregate's base type to be a composite type

Thanks. I briefly explored something like that, using the existing POINT data type, but didn't press it too far, pending mailing-list advice.

Regards, Berend
Re: [GENERAL] SQL Help: Multiple LEFT OUTER JOINs
On Mon, Nov 21, 2005 at 11:45:34AM -0600, Bruno Wolff III wrote:
> On Mon, Nov 21, 2005 at 05:40:10 -0800, Bill Moseley [EMAIL PROTECTED] wrote:
>> Here's where I'm missing something. Trying to do an outer join to bring in the class row with its class_time column:
> You don't say exactly why you are having a problem with this, but I think you would be better off doing an inner join between instructors and class and then do an outer join of that result to person.

Sorry, I thought I was so far off it might be obvious. I suspect I'm making the query harder than it really is. This query just eats CPU and doesn't seem to finish, but I didn't let it run more than a minute (which is forever as far as I'm concerned). The tables are not that big (10,000 people, 1,500 classes).

SELECT person.id AS id, last_name,
       count(instructors.class) as total,
       sum (CASE WHEN class_time > now() THEN 1 ELSE 0 END) as future_class_count,
       sum (CASE WHEN class_time <= now() THEN 1 ELSE 0 END) as past_class_count
  FROM (person LEFT OUTER JOIN instructors ON (person.id = instructors.person)) t
       LEFT OUTER JOIN class on ( t.class = class.id ),
       person_role
 WHERE person_role.person = person.id
   AND person_role.role = 3
 GROUP BY person.id, last_name;

Well, I'm stabbing in the dark now. You mean like:

SELECT person.id AS id, first_name, last_name,
       count(instructors.class) as total_classes,
       sum (CASE WHEN class.id IS NULL THEN 0 ELSE 1 END) as total_class_count, -- which is better?
       sum (CASE WHEN class_time > now() THEN 1 ELSE 0 END) as future_class_count,
       sum (CASE WHEN class_time <= now() THEN 1 ELSE 0 END) as past_class_count
  FROM (class INNER JOIN instructors ON ( class.id = instructors.class )) t
       LEFT OUTER JOIN person ON ( person.id = t.person ),
       person_role
 WHERE person_role.person = person.id
   AND person_role.role = 3
 GROUP BY person.id, first_name, last_name;

Still eats CPU.
GroupAggregate  (cost=1750458.67..1890662.91 rows=10212 width=39)
  ->  Sort  (cost=1750458.67..1767958.67 rows=700 width=39)
        Sort Key: person.id, person.first_name, person.last_name
        ->  Nested Loop  (cost=111.27..140276.35 rows=700 width=39)
              ->  Nested Loop  (cost=91.27..256.35 rows=7000 width=35)
                    ->  Hash Join  (cost=71.27..96.35 rows=7 width=31)
                          Hash Cond: ("outer".id = "inner".class)
                          ->  Seq Scan on class  (cost=0.00..20.00 rows=1000 width=12)
                          ->  Hash  (cost=71.25..71.25 rows=7 width=27)
                                ->  Nested Loop  (cost=3.20..71.25 rows=7 width=27)
                                      ->  Hash Join  (cost=3.20..30.77 rows=7 width=12)
                                            Hash Cond: ("outer".person = "inner".person)
                                            ->  Seq Scan on instructors  (cost=0.00..20.00 rows=1000 width=8)
                                            ->  Hash  (cost=3.01..3.01 rows=75 width=4)
                                                  ->  Index Scan using person_role_role_index on person_role  (cost=0.00..3.01 rows=75 width=4)
                                                        Index Cond: (role = 3)
                                      ->  Index Scan using person_pkey on person  (cost=0.00..5.77 rows=1 width=23)
                                            Index Cond: ("outer".person = person.id)
                    ->  Materialize  (cost=20.00..30.00 rows=1000 width=4)
                          ->  Seq Scan on instructors  (cost=0.00..20.00 rows=1000 width=4)
              ->  Materialize  (cost=20.00..30.00 rows=1000 width=4)
                    ->  Seq Scan on class  (cost=0.00..20.00 rows=1000 width=4)
(22 rows)

-- Bill Moseley [EMAIL PROTECTED]
[GENERAL] Trouble downloading Postgres
I'm trying to download the source for 8.1 but am having a problem from both IE and Mozilla browsers. From either browser, clicking on a download object link leads me here:

Choose a download mirror Downloading: /binary/v8.1.0/win32/postgresql-8.1.0-2.zip We could not query the database or no mirrors could be found! Download PostgreSQL from the primary site

...and clicking on that link to the primary site results in "ftp.postgresql.org" not found in Mozilla - but that's probably a red herring b/c in IE I get taken to the FTP directory, but here I click on e.g. the pub directory (uh, is that where I should be going?) and I get this:

The requested item could not be loaded by the proxy. The maximum number of concurrent connections has been reached. Please try again later, or try one of our many mirror sites listed at: http://wwwmaster.postgresql.org/download/mirrors-ftp Login incorrect.

This same message happens regardless of which folder I choose. Should I just try again later, or is there a more serious problem? Thanks for any help - and if you would please, reply to me directly as I don't receive postings from this list.

- Gary Horton 303-272-9140
Re: [GENERAL] SQL Help: Multiple LEFT OUTER JOINs
It looks to me like your problem is that weird area where you alias your inner join as t and then join based on this alias. You're getting a Cartesian product somewhere, as evidenced by the rows=700 in your explain. I already deleted the old mail with your table structure, but try changing that FROM section to:

class INNER JOIN instructors ON class.id = instructors.class
LEFT OUTER JOIN person ON person.id = (whatevertable).person

Bill Moseley wrote:
> Well, I'm stabbing in the dark now. You mean like:
>
> SELECT person.id AS id, first_name, last_name,
>        count(instructors.class) AS total_classes,
>        sum(CASE WHEN class.id IS NULL THEN 0 ELSE 1 END) AS total_class_count, -- which is better?
>        sum(CASE WHEN class_time >  now() THEN 1 ELSE 0 END) AS future_class_count,
>        sum(CASE WHEN class_time <= now() THEN 1 ELSE 0 END) AS past_class_count
>   FROM (class INNER JOIN instructors ON ( class.id = instructors.class )) t
>        LEFT OUTER JOIN person ON ( person.id = t.person ),
>        person_role
>  WHERE person_role.person = person.id
>    AND person_role.role = 3
>  GROUP BY person.id, first_name, last_name;
>
> Still eats CPU.
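Putting the suggestion above together with the rest of the query from the thread, the rewritten statement might look like this (untested sketch against the tables named in the thread):

```sql
SELECT person.id AS id, first_name, last_name,
       count(instructors.class)                             AS total_classes,
       sum(CASE WHEN class_time >  now() THEN 1 ELSE 0 END) AS future_class_count,
       sum(CASE WHEN class_time <= now() THEN 1 ELSE 0 END) AS past_class_count
  FROM class
       INNER JOIN instructors ON class.id = instructors.class
       LEFT OUTER JOIN person ON person.id = instructors.person,
       person_role
 WHERE person_role.person = person.id
   AND person_role.role = 3
 GROUP BY person.id, first_name, last_name;
```

The key change is that the inner join between class and instructors is done first, and only that result is outer-joined to person, so no aliased join tree needs to be re-referenced.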
> [EXPLAIN output snipped -- identical to the plan in the previous message]
[GENERAL] Any good HOWTOs on LDAP with PostgreSQL 8.1?
Can anyone point me to a good HOWTO on LDAP with PostgreSQL 8.0.x or 8.1? Also, while searching online, I came across dblink-ldap. Is this another LDAP utility or is it something I could use to connect/link to an LDAP application? thanks, vish
Re: [GENERAL] SQL Help: Multiple LEFT OUTER JOINs
I just noticed: also goofy is your ", person_role" in your FROM with no criteria. I would generally put the person_role.person = person.id as an INNER JOIN, and then only have the person_role.role = 3 in the WHERE. It doesn't look like that's the specific problem, but I generally find that kind of mixed syntax muddles a query.

John McCawley wrote:
> [quoted query and EXPLAIN output snipped -- see the previous messages in this thread]
Re: [GENERAL] Anomalies with the now() function
[Please copy the mailing list on replies and please don't change the Subject header. I've restored the original subject and am copying the list on this reply. I'll look at this when I get a chance, but maybe somebody else will answer before then.]

On Mon, Nov 21, 2005 at 05:28:30PM -, Byrne Kevin-kbyrne01 wrote:
> Thank you for your response and I would be interested in your opinion on a further explanation of my problem: The first table (call it Table A) gets the timestamp using timeofday() - and assuming what you say below is true - Table B will take now() as start-of-transaction time - so I can understand that part where they could be different. But considering that for moServer every operation happens in a single txn, the following will be the case:
>
> Moserver receives the event - timestamps it as 't1' -- time lapse before moserver computes the transaction and gives it to ODBC.
> Txn_begin - now() gets frozen to 't2'
> Insert - now() should put it as 't2'
> Txn_end - done.
>
> So firstly t2 should always be > t1 and the difference could be a few seconds, but we found sometimes t2 < t1!.. The query is done through ODBC (I think that might be additionally causing some strange behaviour?).. What do you think?

-- Michael Fuhr
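The distinction Michael refers to is easy to see in psql: inside a transaction, now() stays frozen at the value it had when the transaction took its snapshot, while timeofday() returns the wall-clock time of each call. A minimal demonstration (no table or data assumed):

```sql
BEGIN;
SELECT now(), timeofday();
-- ... do some work; wall-clock time passes ...
SELECT now(), timeofday();  -- now() is identical to the first call;
                            -- timeofday() has advanced
COMMIT;
```

So if t1 is taken with timeofday() before the transaction and t2 is now() inside it, the two come from different clocks with different freezing rules, which is one way the reported anomaly can appear.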
Re: [GENERAL] Anomalies with the now() function
On 11/21/05, Michael Fuhr wrote:
> [Kevin's description of the t1/t2 timestamps snipped -- see the previous message]
> Moserver receives the event - timestamps it as 't1' -- time lapse before moserver computes the transaction and gives it to ODBC.
> Txn_begin - now() gets frozen to 't2'
> Insert - now() should put it as 't2'
> Txn_end - done.

Where is the server? In moserver, or in another machine you reach with the ODBC connection? If there are two machines involved, maybe the first time you get the time ;) is from one machine and the second time from the other... just an idea...

-- Sincerely, Jaime Casanova (DBA: DataBase Aniquilator ;)
Re: [GENERAL] Rule appears not to fire on insert w/ except
On Monday 21 November 2005 03:35 pm, you wrote:
> Chris Kratz [EMAIL PROTECTED] writes:
>> CREATE OR REPLACE RULE debug_rule AS ON INSERT TO test1 DO INSERT INTO test_que (row_id) VALUES (new.id);
> You would be a whole lot better off doing this with a trigger.
>> insert into test1 select id, data from test2 except select id, data from test1;
> I believe the problem with this is that the rule re-evaluates the command to generate the new results, and by that point the rows have already been inserted into test1, thus disappear from the result of the EXCEPT ...
> regards, tom lane

We were afraid of that. Thanks for the info, Tom. I'm not entirely sure I grasp exactly why it's going on, but we will switch over to a trigger.

Thanks, -Chris
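A trigger-based version of the logging, as Tom suggests -- a minimal sketch assuming the test1/test_que tables from the thread and an invented function name, in PostgreSQL 8.x syntax (plpgsql must be installed in the database):

```sql
-- fires once per inserted row, after the insert, so NEW.id is final
CREATE OR REPLACE FUNCTION log_test1_insert() RETURNS trigger AS $$
BEGIN
    INSERT INTO test_que (row_id) VALUES (NEW.id);
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER debug_trigger
    AFTER INSERT ON test1
    FOR EACH ROW EXECUTE PROCEDURE log_test1_insert();
```

Unlike the rule, the trigger sees each actually-inserted row rather than re-evaluating the original query, so the EXCEPT in the INSERT statement no longer matters.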
Re: [GENERAL] Trouble downloading Postgres
> I'm trying to download the source for 8.1 but am having a problem from both IE and Mozilla browsers. From either browser, clicking on a download object link leads me here:

Hi. This should be fixed by now, thanks for reporting it. (There were serious problems, but they were solved a short while back)

//Magnus
Re: [GENERAL] Any good HOWTOs on LDAP with PostgreSQL 8.1?
> Can anyone point me to a good HOWTO on LDAP with PostgreSQL 8.0.x or 8.1? Also, while searching online, I came across dblink-ldap. Is this another LDAP utility or is it something I could use to connect/link to an LDAP application?

If what you're trying to do is access LDAP data from inside the database, like a view or such, yes, dblink-ldap is what you should use.

//Magnus
Re: [GENERAL] Trouble downloading Postgres
Yes, thanks very much, I was just a few minutes ago able to get back in and get this done. Thanks! -gh

Magnus Hagander wrote:
>> I'm trying to download the source for 8.1 but am having a problem from both IE and Mozilla browsers. From either browser, clicking on a download object link leads me here:
> Hi. This should be fixed by now, thanks for reporting it. (There were serious problems, but they were solved a short while back)
> //Magnus
Re: [GENERAL] SQL Help: Multiple LEFT OUTER JOINs
On Mon, Nov 21, 2005 at 03:25:56PM -0600, John McCawley wrote:
> I just noticed: also goofy is your ", person_role" in your FROM with no criteria. I would generally put the person_role.person = person.id as an INNER JOIN, and then only have the person_role.role = 3 in the WHERE. It doesn't look like that's the specific problem, but I generally find that kind of mixed syntax muddles a query.

I need to read more about the FROM clause, as I can't seem to get what you are suggesting. The now-working query (thanks to you!) is:

SELECT person.id AS id, first_name, last_name,
       count(instructors.class) AS total_classes,
       -- sum(CASE WHEN class.id IS NULL THEN 0 ELSE 1 END) AS total_class_count,
       sum(CASE WHEN class_time >  now() THEN 1 ELSE 0 END) AS future_class_count,
       sum(CASE WHEN class_time <= now() THEN 1 ELSE 0 END) AS past_class_count
  FROM class
       INNER JOIN instructors ON class.id = instructors.class
       LEFT OUTER JOIN person ON person.id = instructors.person,
       person_role
 WHERE person_role.person = person.id
   AND person_role.role = 2
 GROUP BY person.id, first_name, last_name
 ORDER BY future_class_count;

Not sure how to construct that. Not this, as it returns odd counts:

SELECT person.id AS id,
       count(instructors.class) AS total_classes,
       sum(CASE WHEN class.id IS NULL THEN 0 ELSE 1 END) AS total_class_count, -- which is better?
       sum(CASE WHEN class_time >  now() THEN 1 ELSE 0 END) AS future_class_count,
       sum(CASE WHEN class_time <= now() THEN 1 ELSE 0 END) AS past_class_count
  FROM class
       INNER JOIN instructors ON class.id = instructors.class
       LEFT OUTER JOIN person ON person.id = instructors.person
       INNER JOIN person_role ON person_role.person = person.id
 WHERE person_role.role = 2
 GROUP BY person.id, first_name, last_name
 ORDER BY total_classes;

 id  | total_classes | total_class_count | future_class_count | past_class_count
-----+---------------+-------------------+--------------------+------------------
  90 |             1 |                 1 |                  0 |                1
  98 |             1 |                 1 |                  0 |                1
  92 |             1 |                 1 |                  0 |                1
 123 |             1 |                 1 |                  0 |                1
 122 |             1 |                 1 |                  0 |                1
 121 |             2 |                 2 |                  0 |                2
  66 |             2 |                 2 |                  0 |                2
  74 |             2 |                 2 |                  0 |                2
  56 |             2 |                 2 |                  0 |                2
  85 |             2 |                 2 |                  0 |                2
 119 |             2 |                 2 |                  0 |                2
  41 |             2 |                 2 |                  0 |                2
  33 |             2 |                 2 |                  0 |                2
  65 |             2 |                 2 |                  0 |                2
 105 |             3 |                 3 |                  0 |                3
  83 |             3 |                 3 |                  0 |                3
 102 |             3 |                 3 |                  0 |                3
  32 |             4 |                 4 |                  0 |                4
  71 |             4 |                 4 |                  0 |                4
  70 |             4 |                 4 |                  0 |                4
  14 |             4 |                 4 |                  0 |                4
  29 |             4 |                 4 |                  0 |                4
  77 |             4 |                 4 |                  0 |                4
  86 |             4 |                 4 |                  0 |                4
  50 |             4 |                 4 |                  0 |                4
 107 |             4 |                 4 |                  0 |                4
   8 |             4 |                 4 |                  0 |                4
 114 |             4 |                 4 |                  0 |                4
  42 |             4 |                 4 |                  0 |                4
  82 |             4 |                 4 |                  0 |                4
  28 |             4 |                 4 |                  0 |                4
  17 |             4 |                 4 |                  0 |                4
  52 |             4 |                 4 |                  0 |                4
   9 |             4 |                 4 |
[GENERAL] Timestamp with Timezone
reading the docs . . . let's see if I've got it.

1. Timestamp with time zone accepts a timestamp with the additional time zone, converts it, and stores it as GMT.
2. It returns the value as the timestamp converted to the timezone of the local machine?
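That is essentially correct, and it is quick to verify in a session; a sketch (the table and values are invented, and this assumes the session's TimeZone setting can be changed, which it can by default):

```sql
CREATE TEMP TABLE ts_demo (t timestamptz);

SET timezone = 'UTC';
INSERT INTO ts_demo VALUES ('2005-11-21 12:00:00-05');  -- stored internally as UTC
SELECT t FROM ts_demo;   -- displayed as 2005-11-21 17:00:00+00

SET timezone = 'America/New_York';
SELECT t FROM ts_demo;   -- same stored instant, now displayed as 12:00:00-05
```

One refinement to point 2: the display timezone comes from the session's TimeZone setting (which defaults to the server environment or client configuration), not literally from the local machine's clock.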
Re: [GENERAL] Weird results when using schemas
Michael Fuhr wrote:
> You refer to shs.city.column in the select list, but in the from clause you've aliased shs.city to city. As the SELECT documentation says, "When an alias is provided, it completely hides the actual name of the table or function; for example given FROM foo AS f, the remainder of the SELECT must refer to this FROM item as f not foo."

Michael, Thanks a lot for your very helpful answer (thanks to Stephan Szabo as well). I've now changed my query as per your indications and indeed it works ok.

Best regards, Geert
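For readers skimming the archive, the gist of the fix in two lines (shs.city is the schema-qualified table from the original question, and the column name is invented):

```sql
-- fails: the alias "city" completely hides the qualified name shs.city
SELECT shs.city.name FROM shs.city AS city;

-- works: once aliased, refer to the FROM item only by its alias
SELECT city.name FROM shs.city AS city;
```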
[GENERAL] Difference in indexes
Dear all,

I'm using a PostgreSQL 8.1.0 database on a Fedora Core 3 machine here. In this database there is a table hosts:

CREATE TABLE hosts (
    hostid   int4 NOT NULL DEFAULT nextval('hosts_hostid_seq'::regclass),
    hostip   cidr NOT NULL,
    hostname varchar(50),
    lastseen timestamp DEFAULT '1970-01-01 01:00:00'::timestamp without time zone,
    total    int4 DEFAULT 0,
    image    varchar(20) DEFAULT 'hosts/unknown.png'::character varying,
    CONSTRAINT hosts_pkey PRIMARY KEY (hostid)
) WITHOUT OIDS;
ALTER TABLE hosts OWNER TO root;
CREATE INDEX hosts_hostip ON hosts USING btree (hostip);
CREATE INDEX hosts_hostname ON hosts USING btree (hostname);

When I run 2 queries on this table:

select * from hosts where hostname='Fabian'
select * from hosts where hostname='Foo'

I get 2 different explain plans:

Seq Scan on hosts (cost=0.00..10.25 rows=21 width=59) (actual time=0.048..0.600 rows=21 loops=1)
  Filter: ((hostname)::text = 'Fabian'::text)
Total runtime: 0.794 ms

Index Scan using hosts_hostname on hosts (cost=0.00..9.04 rows=2 width=59) (actual time=0.057..0.057 rows=0 loops=1)
  Index Cond: ((hostname)::text = 'Foo'::text)
Total runtime: 0.185 ms

What is happening here? What am I overlooking? The length does not seem to be the problem: 'FooFooFoo' also uses the index. Also the fact of whether there are results or not does not seem to influence the planner.

Yours, Aarjan Langereis
Re: [GENERAL] Difference in indexes
A.j. Langereis [EMAIL PROTECTED] wrote:
> What is happening here? What am I overlooking? The length does not seem to be the problem: 'FooFooFoo' also uses the index.. Also the fact of whether there are results or not does not seem to influence the planner..

Check out this thread: http://archives.postgresql.org/pgsql-bugs/2005-11/msg00032.php

Regards, Qingqing
Re: [GENERAL] Conditional delete
On Mon, 21 Nov 2005, Bartosz Jakubiak wrote:
> Hi. I'm new with PostgreSQL, but this thing doesn't allow me to sleep: I wonder if it is possible to execute an SQL query which: 1. checks whether a table exists, and if it does: 2. deletes it. All of it at the SQL query level, preferably in one transaction. Something like (MSSQL): IF EXISTS (SELECT * FROM sysobjects WHERE id = object_id(n'properties_branches') AND objectproperty(id, n'isusertable') = 1) DROP TABLE properties_branches or (MySQL): DROP TABLE IF EXISTS properties_branches;

This looks quite like rewriting a script auto-generated from SQL Server :-) Yes, you can do that by querying the system catalog to see if a table exists (pg_class). However, notice there is possibly a race condition here: if you find the table exists, by the time you delete it, it may already have been deleted by others.

Regards, Qingqing
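One way to script Qingqing's suggestion: DROP TABLE IF EXISTS only arrives in PostgreSQL 8.2, so on 8.0/8.1 a small plpgsql helper can do the catalog check. A sketch (the function name is invented, plpgsql must be installed, and the race condition Qingqing mentions still applies between the check and the DROP):

```sql
CREATE OR REPLACE FUNCTION drop_if_exists(tbl text) RETURNS void AS $$
BEGIN
    -- look for an ordinary table ('r') with this name in the current schema
    IF EXISTS (SELECT 1
                 FROM pg_class c
                 JOIN pg_namespace n ON n.oid = c.relnamespace
                WHERE c.relname = tbl
                  AND c.relkind = 'r'
                  AND n.nspname = current_schema()) THEN
        EXECUTE 'DROP TABLE ' || quote_ident(tbl);
    END IF;
END;
$$ LANGUAGE plpgsql;

SELECT drop_if_exists('properties_branches');
```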
[GENERAL] Group By?
I want to take the serial ID of several values in different rows in one table and insert them into a single row of another table. Would the 'group by' command be the best way to do this? Bob
Re: [GENERAL] Difference in indexes
Thanks Qingqing, I've read them! Nevertheless I think there are some differences here:

* The difference in the amount of rows is much smaller. Nevertheless the table is small as well: only 100 rows.
* There is a faster query plan to solve the query:

Bitmap Heap Scan on hosts (cost=2.07..11.34 rows=21 width=59) (actual time=0.175..0.287 rows=21 loops=1)
  Recheck Cond: ((hostname)::text = 'Fabian'::text)
  ->  Bitmap Index Scan on hosts_hostname (cost=0.00..2.07 rows=21 width=0) (actual time=0.145..0.145 rows=21 loops=1)
        Index Cond: ((hostname)::text = 'Fabian'::text)
Total runtime: 0.510 ms

This result was achieved by setting enable_seqscan to off (postgresql.conf). Turning off enable_bitmapscan as well resulted in an index scan which was even faster:

Index Scan using hosts_hostname on hosts (cost=0.00..37.28 rows=21 width=59) (actual time=0.068..0.281 rows=21 loops=1)
  Index Cond: ((hostname)::text = 'Fabian'::text)
Total runtime: 0.492 ms

Yours, Aarjan

----- Original Message ----- From: Qingqing Zhou [EMAIL PROTECTED] To: pgsql-general@postgresql.org Sent: Tuesday, November 22, 2005 12:38 AM Subject: Re: [GENERAL] Difference in indexes
> [quoted message snipped -- see above]
Re: [GENERAL] Group By?
Converted your message to plain text as preferred on most mailing lists.

Bob Pawley wrote:
> I want to take the serial ID of several values in different rows in one table and insert them into a single row of another table. Would the 'group by' command be the best way to do this?

Could you provide an actual example? The wording of your question is a little vague and an example might help solicit an answer to the actual problem. For example, are these serial ID values all in the same column in the source table? Or is each one in a different column? And what is the selection criteria that brings these results together?

-- Guy Rouillier
Re: [GENERAL] SQL Help: Multiple LEFT OUTER JOINs
On Mon, Nov 21, 2005 at 02:06:35PM -0800, Bill Moseley wrote:
> The now-working query (thanks to you!) is:

No, that doesn't work. It's dropping the people that have never been assigned a class to teach (i.e. don't have a row in the instructors link table).

> FROM class INNER JOIN instructors ON class.id = instructors.class LEFT OUTER JOIN person ON person.id = instructors.person, person_role

I really seem to need the multiple left outer join. This works:

SELECT person.id AS id, last_name, person_role.role AS role,
       count(instructors.class),
       sum(CASE WHEN class_time >  now() THEN 1 ELSE 0 END) AS future_class_count,
       sum(CASE WHEN class_time <= now() THEN 1 ELSE 0 END) AS past_class_count
  FROM person
       LEFT OUTER JOIN instructors ON (person.id = instructors.person)
       LEFT OUTER JOIN class ON (instructors.class = class.id),
       person_role
 WHERE person_role.person = person.id
   -- AND person_role.role = 2
 GROUP BY person.id, last_name, person_role.role;

I'm not clear how to move that person_role.person = person.id into the FROM statement. Does it matter?

-- Bill Moseley
[GENERAL] Best way to represent values.
I have several fields that need to be within a table, but I'm not really sure of the best way to represent this data. The scenario I'm trying to incorporate is a form that has to be filled out when a well is drilled and then sent to the state. One of the things the form wants is the depths at which various types of underburden were encountered. So you start with topsoil that goes from 0 feet to depth X, then at depth X clay is encountered down to depth Y, and then at depth Y gravel is encountered and water is found.

I had thought of just adding some fields called topsoil_start/topsoil_end, gravel_start/gravel_end, etc. But then I'm left with how to take those values and get the total depth for each layer and the total depth of the well. I'm not sure that is the best way to handle this. Does anyone have some other suggestions?

BTW, thanks to all that submitted some recommended PostgreSQL books. I have started with Beginning Databases with PostgreSQL. It's been very helpful. I plan to pursue the other recommendations.

-- You can tuna piano but you can't tune a fish.
Re: [GENERAL] SQL Help: Multiple LEFT OUTER JOINs
> I'm not clear how to move that person_role.person = person.id into the FROM statement. Does it matter?

This should work:

FROM person
     INNER JOIN person_role ON person.id = person_role.person
     LEFT OUTER JOIN instructors ON (person.id = instructors.person)
     LEFT OUTER JOIN class ON (instructors.class = class.id)
GROUP BY person.id, last_name, person_role.role;

The reason *I* think it matters is that I like to keep my constant join clauses in the FROM, and my variable criteria in the WHERE. I.e. in your query, you always want the role that joins to the person... there are no cases (in this query) where you want other roles, therefore I wouldn't put it in the WHERE. I leave my WHERE clause free for criteria such as WHERE person.id = $variable etc. I will occasionally put variables higher in my joins, but only if EXPLAIN ANALYZE tells me it's faster (which rarely happens for me), but I NEVER wait until my WHERE to clarify the fundamental criteria whereby a table in the FROM is joined. It's just my preference for the sake of clarity.

In other words I would:

SELECT person.last_name, person_role.role
  FROM person_role INNER JOIN person ON person_role.person = person.id
 WHERE person.id = $variable

rather than:

SELECT person.last_name, person_role.role
  FROM person_role, person
 WHERE person_role.person = person.id AND person.id = $variable
Re: [GENERAL] Best way to represent values.
On Nov 22, 2005, at 3:19 , Dennis Veatch wrote:
> I had thought of just adding some fields called topsoil_start/topsoil_end, gravel_start/gravel_end, etc. But then I'm left with how to take those values and get the total depth for each layer and the total depth of the well. I'm not sure that is the best way to handle this. Does anyone have some other suggestions?

This is similar in concept to temporal intervals. You might want to look at Temporal Data and the Relational Model by Date, Darwen, and Lorentzos for general theory, and Developing Time-Oriented Database Applications by Richard Snodgrass for implementations in SQL. The latter is available as a PDF download (the book itself is out of print): http://www.cs.arizona.edu/people/rts/tdbbook.pdf

Hope this helps!

Michael Glaesemann grzm myrealbox com
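A normalized sketch of the interval idea for the well data, rather than one column pair per material -- all table and column names here are invented:

```sql
CREATE TABLE well (
    well_id serial PRIMARY KEY,
    name    text NOT NULL
);

CREATE TABLE stratum (
    well_id    int     NOT NULL REFERENCES well,
    material   text    NOT NULL,       -- 'topsoil', 'clay', 'gravel', ...
    depth_from numeric NOT NULL,       -- feet
    depth_to   numeric NOT NULL,
    CHECK (depth_to > depth_from)
);

-- thickness of each layer, and total depth of a well, fall out of the data:
SELECT material, depth_to - depth_from AS thickness
  FROM stratum WHERE well_id = 1 ORDER BY depth_from;

SELECT max(depth_to) AS total_depth FROM stratum WHERE well_id = 1;
```

Adding a new material then means adding a row, not altering the table, and the per-layer and total depths are simple queries instead of hand-maintained columns.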
Re: [GENERAL] Rule appears not to fire on insert w/ except
Chris Kratz [EMAIL PROTECTED] writes:
> Hello All, We have finally tracked down a bug in our application to a rewrite rule on a table. In essence, the rewrite rule in question logs any inserts to another table. This works correctly in all cases except where an EXCEPT clause is used in the insert statement. In this case, the rows are inserted into the primary table as expected, but the rule either does not fire, or fires in such a way that nothing is placed in the changes table.

You must be referring to something like:

insert into foo select * from sometable except select * from someothertable;

If there's an EXCEPT clause on INSERT, I've never seen it. Perhaps you should post your insert query and your rule declaration.

> As a side note, is there a way to see the final sql after all rewrite rules have been processed? It might help us understand what is going on.

Not SQL, but see the config setting debug_print_rewritten.

-- Jerry Sievers 305 854-3001 (home) 305 321-1144 (mobile) WWW ECommerce Consultant http://www.JerrySievers.com/
Re: [GENERAL] Rule appears not to fire on insert w/ except
On Mon, Nov 21, 2005 at 08:05:19PM -0500, Jerry Sievers wrote:
> You must be referring to something like: insert into foo select * from sometable except select * from someothertable; If there's an EXCEPT clause on INSERT, I've never seen it.

I suppose you could wrap the SELECT...EXCEPT in parens. WARNING Untested Code:

INSERT INTO foo (SELECT a,b,c FROM bar EXCEPT SELECT a,b,c FROM baz);

HTH :)

Cheers, D

-- David Fetter [EMAIL PROTECTED] http://fetter.org/ phone: +1 510 893 6100 mobile: +1 415 235 3778 Remember to vote!
Re: [GENERAL] About not to see insertion result INSERT 0 1
Emi Lu presumably uttered the following on 11/21/05 15:40:
> Greetings, I tried to run an insert command from a .sql file. For example, in the a.sql file there are 100,000 lines like: insert into t1 values(... ...); insert into t1 values(... ...); ... I do not want to see the 100,000 times "INSERT 0 1" displayed by PostgreSQL. Is there a way to hide the output "INSERT 0 1" generated by PostgreSQL? Thanks a lot! Emi

Depending on how your application works, you would probably see a marked speed increase by using the COPY command rather than 100k INSERT lines. The file could just be your VALUES(...) section, which could then be sucked in via COPY.

Sven
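Sven's COPY suggestion in file form -- a sketch where the column layout is invented and must be matched to t1's actual columns, with fields separated by tab characters:

```sql
-- a.sql: one COPY instead of 100,000 INSERT statements
COPY t1 FROM stdin;
1	alpha
2	beta
\.
```

If the statements must stay as INSERTs, running the file with psql's quiet mode (psql -q -f a.sql) should also suppress the per-statement "INSERT 0 1" command tags.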
Re: [GENERAL] Difference in indexes
A.j. Langereis [EMAIL PROTECTED] wrote:
> [EXPLAIN output snipped -- see the previous message]

If you compare the difference among the *estimated* costs (cost=0.00..):

seqscan:   cost=0.00..10.25
bitmap:    cost=2.07..11.34
indexscan: cost=0.00..37.28

then you will know why the optimizer prefers the sequential scan. Yes, in your case, the *real* cost (actual time=...) is quite different from the estimated cost. That's because the optimizer can't collect enough information about the environment at execution. For example, the optimizer does not know if a data page is in the buffer or not (which would incur IO cost) and it always assumes not. There is a long story about why the optimizer does this. In short, since PG uses a small buffer pool and the optimizer is mainly useful for big tables, this assumption is reasonable -- but for small tables it may not be that good.

Regards, Qingqing
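To reproduce the three estimates Qingqing is comparing, the planner switches can be toggled per session (diagnostic use only -- not something to leave off in production; enable_bitmapscan exists from 8.1):

```sql
EXPLAIN ANALYZE SELECT * FROM hosts WHERE hostname = 'Fabian';  -- planner picks seq scan

SET enable_seqscan = off;
EXPLAIN ANALYZE SELECT * FROM hosts WHERE hostname = 'Fabian';  -- now a bitmap scan

SET enable_bitmapscan = off;
EXPLAIN ANALYZE SELECT * FROM hosts WHERE hostname = 'Fabian';  -- plain index scan

RESET enable_seqscan;
RESET enable_bitmapscan;
```

Comparing the estimated costs against the actual times in the three outputs shows exactly where the planner's cached-page assumption diverges from reality on a small, fully-cached table.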
Re: [GENERAL] Best way to represent values.
On Monday 21 November 2005 20:04, Michael Glaesemann wrote:

> On Nov 22, 2005, at 3:19, Dennis Veatch wrote:
>
>> I had thought of just adding some fields called topsoil_start/topsoil_end, gravel_start/gravel_end, etc. But then I'm left with how to take those values and get the total depth of each layer and the total depth of the well. But I'm not sure that is the best way to handle this. Does anyone have some other suggestions?
>
> This is similar in concept to temporal intervals. You might want to look at Temporal Data and the Relational Model by Date, Darwen, and Lorentzos for general theory, and Developing Time-Oriented Database Applications in SQL by Richard Snodgrass for implementations in SQL. The latter is available as a PDF download (the book itself is out of print): http://www.cs.arizona.edu/people/rts/tdbbook.pdf
>
> Hope this helps!

Hee, well that's um, kinda over my head. Hee, and I'm not all the way through the PostgreSQL book I just bought. There's probably a gap there. :) Though I will try to glean something out of the link.

--
You can tuna piano but you can't tune a fish.
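Storing one row per layer, rather than one start/end column pair per material, sidesteps the problem the original poster describes: the totals then fall out of ordinary aggregates. Table and column names in this sketch are hypothetical:

```sql
-- Each soil layer is stored as a depth interval belonging to a well.
CREATE TABLE well_layer (
    well_id     integer NOT NULL,
    material    text    NOT NULL,          -- 'topsoil', 'gravel', ...
    depth_start numeric NOT NULL,
    depth_end   numeric NOT NULL,
    CHECK (depth_end > depth_start)
);

-- Thickness of every layer:
SELECT well_id, material, depth_end - depth_start AS thickness
  FROM well_layer;

-- Total depth of each well (deepest layer boundary):
SELECT well_id, max(depth_end) AS total_depth
  FROM well_layer
 GROUP BY well_id;
```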
[GENERAL] not null error in trigger on unrelated column
I have a members table and an items table. Every time items.member_id or items.active gets changed, members.items_submitted and members.items_approved get updated by a trigger on items. I added an admin column to members, and now this happens:

  => update items set active = false where member_id=38;
  ERROR:  null value in column "admin" violates not-null constraint
  CONTEXT:  SQL statement "update members set items_approved=items_approved-1 where id= $1"
            PL/pgSQL function "update_member_item_counts" line 54 at SQL statement

The relevant part of the (AFTER) trigger function on items is:

  ELSIF OLD.active is true and NEW.active is false then
      update members set items_approved=items_approved-1 where id=NEW.member_id;
  END IF;

Is it necessary to drop and recreate triggers and/or corresponding functions after changing a table's schema? I don't know how something is trying to set members.admin to null (the members table has no triggers).

thanks
CSN

PostgreSQL 8.0.x
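A trigger matching the fragment above might be reconstructed as follows; everything outside the quoted ELSIF branch (the opposite branch, the RETURN, the CREATE TRIGGER statement) is an assumption, since only one branch appears in the post:

```sql
-- Assumed shape of the AFTER UPDATE trigger keeping members' counters
-- in sync with items.active; only the decrement branch is from the thread.
CREATE OR REPLACE FUNCTION update_member_item_counts() RETURNS trigger AS $$
BEGIN
    IF OLD.active IS FALSE AND NEW.active IS TRUE THEN
        UPDATE members SET items_approved = items_approved + 1
         WHERE id = NEW.member_id;
    ELSIF OLD.active IS TRUE AND NEW.active IS FALSE THEN
        UPDATE members SET items_approved = items_approved - 1
         WHERE id = NEW.member_id;
    END IF;
    RETURN NULL;  -- the return value of an AFTER trigger is ignored
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER items_member_counts
    AFTER UPDATE ON items
    FOR EACH ROW EXECUTE PROCEDURE update_member_item_counts();
```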
Re: [GENERAL] Createlang plpgsql
I see the problem now. On a shared host, createuser and createdb are handled by cPanel scripts, but (as yet on my host) createlang is not. The createlang utility only installs procedural languages included in the default PostgreSQL distribution, which fortunately includes the plpgsql language I need. If the language is added to template1, then all future databases (including those of other shared users) would have the language installed by default, and this might not be desirable, probably for the same reasons that all the languages are not installed by default. So what I need to solve the problem is a cPanel script to install the required procedural languages; alternatively, I have to ask the support people to add any required languages when I create a new database. Unless I have not fully understood the problem, this issue will clearly have to be solved in order for all of the advanced features of PostgreSQL to be available on shared servers.

Can the createlang utility be made available to an ordinary user, just to apply the language to his database?

On Nov 19, 2005, at 3:18 AM, Tino Wildenhain wrote:

> On Friday, 18.11.2005, at 15:29 -0800, Jeremy Sellors wrote:
>> Hi,
>> How can I 'createlang plpgsql database' on a shared server? On my own machine I just used 'su postgres' and 'createlang plpgsql template1', but I need to have plpgsql available on a shared server.
>
> Shared or not, the dba has to create it. If you are dba (createuser, createdb) you can create the language. If not, ask your dba to do it for you. (Maybe not in template1 but in your actual database.)
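For reference, the per-database alternative to touching template1 looks like this; on the 8.x series it normally has to be run by a superuser, which is exactly the sticking point on a shared host:

```sql
-- Run while connected to the user's own database (not template1).
-- On PostgreSQL 8.1 and later this short form works because plpgsql
-- is listed in pg_pltemplate; older releases need the explicit
-- handler-function form that the createlang utility generates.
CREATE LANGUAGE plpgsql;
```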
Re: [GENERAL] Group By?
On Mon, Nov 21, 2005 at 15:53:15 -0800, Bob Pawley [EMAIL PROTECTED] wrote:

> I want to take the serial ID of several values in different rows in one table and insert them into a single row of another table. Would the 'group by' command be the best way to do this?

From your description I think it is more likely that you want to use subselects.
Re: [GENERAL] Group By?
Here's what I want to do.

The table Control contains values (mon and valves) that are associated by numbers inserted into the association column. I want to transfer the serial device_ID numbers of the items associated by the value '1' into the appropriate columns of the first row of the table Auto_control, all items associated with the value '2' into the second row, etc.

Is this best accomplished by a 'group by' command or a subselect???

Bob

  Control
  device_ID (serial) | type (varchar) | association (int4)
  -------------------+----------------+-------------------
  1                  | mon            | 1
  2                  | valve          | 2
  3                  | valve          | 1
  4                  | mon            | 2
  5                  | valve          | 1

  Auto_control
  loop_id (serial) | mon (int4) | valve (int4) | valve (int4)
  -----------------+------------+--------------+-------------
  1                | 1          | 3            | 5
  2                | 2          | 4            | -

----- Original Message ----- From: "Guy Rouillier" [EMAIL PROTECTED]
To: "Postgre General" pgsql-general@postgresql.org
Sent: Monday, November 21, 2005 4:25 PM
Subject: Re: [GENERAL] Group By?

> Converted your message to plain text, as preferred on most mailing lists.
>
> Bob Pawley wrote:
>> I want to take the serial ID of several values in different rows in one table and insert them into a single row of another table. Would the 'group by' command be the best way to do this?
>
> Could you provide an actual example? The wording of your question is a little vague, and an example might help solicit an answer to the actual problem. For example, are these serial ID values all in the same column in the source table? Or is each one in a different column? And what is the selection criterion that brings these results together?
>
> --
> Guy Rouillier
Re: [GENERAL] Group By?
On Mon, Nov 21, 2005 at 21:53:10 -0800, Bob Pawley [EMAIL PROTECTED] wrote:

> Here's what I want to do. The table Control contains values (mon and valves) that are associated by numbers inserted into the association column. I want to transfer the serial device_ID number of the items associated by the value '1' into the appropriate columns of the first row of the table Auto_control, all items associated with the value '2' into the second row, etc.

You don't really want to do that. Tables have fixed numbers of columns, and what you want to do doesn't result in a fixed number of columns. If you want to generate a report with that format, then I think there is a contrib module (crosstab?) that will do this kind of thing. You could also have a report application do it for you. With the report-app method, you would do best to return rows ordered by association and then device_ID, and have the app check for when the association value changes.
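The crosstab hint above refers to the contrib/tablefunc module. Assuming the Control table from the thread and that tablefunc has been installed in the database, a sketch of the report query could be:

```sql
-- crosstab(text) pivots (row_name, category, value) rows into one output
-- row per row_name, filling the value columns left to right. Ordering by
-- type puts the 'mon' device first; the column names are illustrative.
SELECT *
  FROM crosstab(
       'SELECT association, type, device_id
          FROM control
         ORDER BY association, type, device_id'
  ) AS auto_control(loop_id int4, mon int4, valve1 int4, valve2 int4);
```

Note that this only works because each association here has at most three devices; a varying number of devices per association still cannot map onto a fixed column list, which is the point made above.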
Re: [GENERAL] Createlang plpgsql
On Monday, 21.11.2005, at 20:12 -0800, Jeremy Sellors wrote:

> [...] Can the createlang utility be made available to an ordinary user, just to apply the language to his database?

Shouldn't you rather ask your hoster?

++Tino