[SQL] Inserting binary data (BLOBs) in v7.1
Has anybody inserted binary data (BLOBs) into a row in PostgreSQL v7.1? Is "VARCHAR(65535)" the correct data type to use? ---(end of broadcast)--- TIP 3: if posting/reading through Usenet, please send an appropriate subscribe-nomail command to [EMAIL PROTECTED] so that your message can get through to the mailing list cleanly
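For the archives, a hedged sketch of what 7.1 makes possible: TOAST storage removes the old 8 kB row limit, so a bytea column can hold binary data directly, with no VARCHAR workaround. The table and column names here are made up for illustration:

```sql
-- Hypothetical table; in 7.1 a bytea value can grow well beyond
-- the old 8 kB row limit thanks to TOAST.
CREATE TABLE images (
    id   serial PRIMARY KEY,
    data bytea
);

-- Non-printable bytes must be written as octal escapes on input:
INSERT INTO images (data) VALUES ('\\000\\001\\377');
```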
[SQL] Re: pg_dumpall and password access
Tom Lane <[EMAIL PROTECTED]> wrote:
>pg_dumpall doesn't work very well with password authentication (and
>even if it did, storing the password in a cron script doesn't seem
>like a good idea to me).
>
>As long as the dumper will run on the same machine as the database
>server, consider using IDENT authorization instead.

Another solution is to use the patched pg_dumpall at http://bugs.debian.org/87035 .

Ray
--
The Internet interprets attempts at proprietary control as damage and routes around it. -- Eric S. Raymond
Re: [SQL] pg_dumpall and password access
At 19.29 3/4/01 -0400, you wrote:
>Christophe Labouisse <[EMAIL PROTECTED]> writes:
> > I'm trying to run pg_dumpall to backup all my users' bases but since I
> > have configured pg_hba.conf to "passwd" pg_dumpall always fails:
>
>pg_dumpall doesn't work very well with password authentication (and
>even if it did, storing the password in a cron script doesn't seem
>like a good idea to me).

From the cron script you can execute something like this:

su -l postgres -c pg_dumpall

and then "pg_dumpall" will be executed by the postgres user.
[SQL] performance of functions - or rather lack of it
We are building a postgresql based backend database for our 'hosting provisioning' system. In an attempt at some, what I thought, simple performance tweaks, I tried putting some of the larger and more straightforward queries into functions. All else being equal, the functions are on the whole slower. Should they be?

The whole thing is being driven through perl DBI. This may be contributory. Anyhow, the original query:

SELECT COUNT(mb.instance) FROM domain dm, mail ms, mailbox mb
WHERE dm.enabled = true and dm.existent = true and dm.available = true
AND ms.enabled = true and ms.existent = true and ms.available = true
AND mb.enabled = true and mb.existent = true and mb.available = true
AND dm.parent = ms.parent
AND mb.parent = ms.serviceid
AND dm.instance = $q_domain
AND mb.instance = $q_local_part;

where $q_XXX are quoted perl scalars. The function is then:

CREATE FUNCTION mail_is_mailbox(text, text) RETURNS int4 AS '
SELECT COUNT(mb.instance) FROM domain dm, mail ms, mailbox mb
WHERE dm.enabled = true and dm.existent = true and dm.available = true
AND ms.enabled = true and ms.existent = true and ms.available = true
AND mb.enabled = true and mb.existent = true and mb.available = true
AND dm.parent = ms.parent
AND mb.parent = ms.serviceid
AND dm.instance = $2
AND mb.instance = $1;' LANGUAGE 'sql';

SELECT mail_is_mailbox($q_local_part, $q_domain);

Running both of these 1000 times from a remote (same subnet, 100BaseTX) client with the same query, the time for the function is typically 20-25% more than for the bare query: 22 vs 16 seconds, for example. I would have thought that not sending the long SQL across the wire 1000 times would have saved some time, even without any potential query optimisations from pre-parsing the SQL?

rgds,
--
Peter Galbavy
Knowledge Matters Ltd. http://www.knowledge.com/
Re: [SQL] performance of functions - or rather lack of it
BTW The service is 7.0.2 and the client 7.1RC1 and the OSes are OpenBSD/i386 2.8-stable.

On Wed, Apr 04, 2001 at 11:12:34AM +0100, Peter Galbavy wrote:
> We are building a postgresql based backend database for our 'hosting
> provisioning' system. In a vain attempt to add some, what I thought, simple
> performance tweaks, I thought I would try putting some of the larger and
> more straighforward queries into functions. For everything else the same,
> the functions are on the whole slower. Should they be ?

--
Peter Galbavy
Knowledge Matters Ltd http://www.knowledge.com/
[SQL] restoring indices
Hi Postgres people ;-)

This is probably a simple question, still I need to know: when restoring data using dumps,

1. Will your indices be restored using copy dumps?
2. Does vacuumdb restore them?
3. If vacuumdb does not, is there something which does?

Kind regards, Koen Antonissen
[SQL] outer joins
Hi all,

I'm new to postgres; I've changed my work and consequently I'm now moving from the MS platform. In MS SQL there are constructs such as left or right outer join; in postgres there is no such thing.

Can you offer me a strategy to make a query that selects from table (a) and joins another table (b) to it on e.g. a.id=b.aid, but joins so that in case there is no matching value in table b, NULL is left in the result? In MS SQL I've used the query:

select a.id, b.name from a left outer join b on a.id=b.aid

table a    table b

id         aid | name
--         ----------
1          1   | Tom
2          3   | Sam

result:

a.id | b.name
-------------
1    | Tom
2    | NULL

thank you in advance

Algirdas Šakmanas
IT manager
+370 99 90369
[EMAIL PROTECTED]
Grafton Entertainment
http://www.tvnet.lt
[SQL] max( bool )?
Question and a problem: I have this query:

select distinct not exists (
    select sc1caption from tblstsc1
    where (tblstsc1options.surid like surid || '.%' or surid=tblstsc1options.surid)
      and surid!=tblstsc1options.surid
      and sc1caption is not null
) from tblstsc1options
where '164' like surid || '.%' or surid like '164' || '.%';

Which with current data returns two records, one true and one false. What I want to know is if any of the records (fields) are true. It doesn't appear max() will do this:

ERROR: Unable to select an aggregate function max(bool)

What can I do? FYI, this query will be run from within a trigger function (plpgsql).

Now for the problem.. In attempting to get what I want, I wrote this:

select not exists (
    select distinct max((
        select sc1caption from tblstsc1
        where (tblstsc1options.surid like surid || '.%' or surid=tblstsc1options.surid)
          and surid!=tblstsc1options.surid
          and sc1caption is not null))
    from tblstsc1options
    where '164' like surid || '.%' or surid like '164' || '.%');

And I get this notice:

NOTICE: PortalHeapMemoryFree: 0x0x824a6e8 not in alloc set!
NOTICE: Buffer Leak: [059] (freeNext=54, freePrev=58, relname=tblstsc1, blockNum=0, flags=0x14, refcount=-4 -1)

Thanks, -Cedar
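For the archives, a hedged workaround sketch: 7.0 has no max(bool), but "is any row true" can be had by aggregating over an int cast, or by testing existence directly. The table and column names below are made up for illustration, not taken from Cedar's schema:

```sql
-- any_true is true when at least one row of t has flag = true:
SELECT max(CASE WHEN flag THEN 1 ELSE 0 END) = 1 AS any_true FROM t;

-- Equivalently, and usually cheaper, let EXISTS do the work:
SELECT EXISTS (SELECT 1 FROM t WHERE flag) AS any_true;
```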
[SQL] [7.0.3] optimizing a LIKE query ...
Can someone suggest how to improve the following query, so as to make the LIKE section operate on the results found by the = one?

SELECT ndict.url_id,ndict.intag FROM ndict,url
WHERE ndict.word_id=-720551816
AND url.rec_id=ndict.url_id
AND ((url.url || '') LIKE '%http://www.postgresql.org/%%')

ndict.word_id=-720551816 returns 5895 records
((url.url || '') LIKE '%http://www.postgresql.org/%%') returns 138k records

explain shows:

NOTICE: QUERY PLAN:

Hash Join (cost=10163.01..26647.09 rows=42 width=12)
  -> Index Scan using n_word on ndict (cost=0.00..16299.52 rows=4180 width=8)
  -> Hash (cost=10159.53..10159.53 rows=1390 width=4)
      -> Seq Scan on url (cost=0.00..10159.53 rows=1390 width=4)

EXPLAIN

I'm figuring that if I can somehow get the query (using subselects, maybe?) to have the LIKE part of the query work only on the 6k records returned by the "=" part of it, the overall results should be faster ... Possible?

Marc G. Fournier ICQ#7615664 IRC Nick: Scrappy
Systems Administrator @ hub.org
primary: [EMAIL PROTECTED] secondary: scrappy@{freebsd|postgresql}.org
Re: [SQL] outer joins
Hello, I believe these are supported in 7.1.

On Wed, 4 Apr 2001, Algirdas Šakmanas wrote:

>Hi all,
>
>I'm new to postgre, I've changed my work and consequently now i'm moving
>from MS plaform.
>In MS SQL there are such constructs left or right outer join, in postgres
>there are no such thing
>
>Can You offer me strategy to make query that selects from table (a) and
>joins to it another (b)
>on e.g. a.id=b.aid but joins so that in case there is no value in table b
>NULL is left in result:
>in MS SQL i've used query:
>
>select a.id, b.name from a left outer join b on a.id=b.aid
>
>thank you in advance
>
>Algirdas Šakmanas
>IT manager
>+370 99 90369
>[EMAIL PROTECTED]
>Grafton Entertainment
>http://www.tvnet.lt

--
CommandPrompt - http://www.commandprompt.com
OpenDocs, LLC. - http://www.opendocs.org
LinuxPorts - http://www.linuxports.com
LDP - http://www.linuxdoc.org
--
Instead of asking why a piece of software is using "1970s technology," start asking why software is ignoring 30 years of accumulated wisdom.
--
RE: [SQL] outer joins
I've read that version 7.1 provides an outer join feature. Since I have not tried it yet, I have no more information about it.

> -----Original Message-----
> From: Algirdas Sakmanas [SMTP:[EMAIL PROTECTED]]
> Date: Wednesday, 4 April 2001 13:03
> To: [EMAIL PROTECTED]
> Subject: [SQL] outer joins
>
> Hi all,
>
> I'm new to postgre, I've changed my work and consequently now i'm moving
> from MS plaform.
> In MS SQL there are such constructs left or right outer join, in postgres
> there are no such thing
>
> Can You offer me strategy to make query that selects from table (a) and
> joins to it another (b)
> on e.g. a.id=b.aid but joins so that in case there is no value in table b
> NULL is left in result:
> in MS SQL i've used query:
>
> select a.id, b.name from a left outer join b on a.id=b.aid
>
> thank you in advance
>
> Algirdas Sakmanas
> IT manager
> +370 99 90369
> [EMAIL PROTECTED]
> Grafton Entertainment
> http://www.tvnet.lt
[SQL] Query broken under 7.1RC2
This query worked under 7.0.3 but yields an error under 7.1RC1 or RC2. The error message is:

psql:outbug.sql:43: ERROR: Sub-SELECT uses un-GROUPed attribute h.ordnum from outer query

Tom, you patched util/clauses.c (near line 540) a while back to prevent the same error message on a different query. This may be related.

The SQL is a little strange because the subquery tries to reference individual records from the outer query and then sum them. The more I look at it, I wonder if it is not just bad SQL. But the last one I found like this turned out to reveal a bug, so here it is:

drop table ord_hdr;
drop table cinv_hdr;
drop table cinv_items;
drop table inc_link;

create table ord_hdr (
    ordnum int4,
    proj int4
);

create table cinv_hdr (
    ordnum int4,
    hinum int4,
    status varchar
);

create table cinv_items (
    ordnum int4,
    hinum int4,
    quant int4,
    unit_price numeric(12,2)
);

create table inc_link (
    ordnum int4,
    hinum int4,
    amount numeric(12,2)
);

select
    sum(i.quant*i.unit_price::float8),
    (select coalesce(sum(amount),0) from inc_link
      where ordnum = h.ordnum and hinum = h.hinum)
from cinv_hdr h, cinv_items i, ord_hdr o
where o.ordnum = h.ordnum
  and h.ordnum = i.ordnum
  and h.hinum = i.hinum
  and o.proj = 1051
  and h.status = 'open';

Kyle Bateman
President, Action Target Inc.
[EMAIL PROTECTED]
Re: [SQL] max( bool )?
Cedar Cox <[EMAIL PROTECTED]> writes:
> Now for the problem.. In attempting to get what I want, I wrote this:
> select not exists (select distinct max((select sc1caption from tblstsc1
> where (tblstsc1options.surid like surid || '.%' or
> surid=tblstsc1options.surid) and surid!=tblstsc1options.surid and
> sc1caption is not null)) from tblstsc1options where '164' like surid ||
> '.%' or surid like '164' || '.%');
> And I get this notice:
> NOTICE: PortalHeapMemoryFree: 0x0x824a6e8 not in alloc set!
> NOTICE: Buffer Leak: [059] (freeNext=54, freePrev=58, relname=tblstsc1,
> blockNum=0, flags=0x14, refcount=-4 -1)

(a) What Postgres version is this?

(b) Could we see the schemas for the tables? (pg_dump -s output is the best way)

regards, tom lane
Re: [SQL] outer joins
Algirdas,

This should do the trick:

SELECT a.id, b.name FROM a, b WHERE a.id = b.aid
UNION
SELECT id, null FROM a WHERE id NOT IN (SELECT aid FROM b);

Troy

> Hi all,
>
> I'm new to postgre, I've changed my work and consequently now i'm moving
> from MS plaform.
> In MS SQL there are such constructs left or right outer join, in postgres
> there are no such thing
>
> Can You offer me strategy to make query that selects from table (a) and
> joins to it another (b)
> on e.g. a.id=b.aid but joins so that in case there is no value in table b
> NULL is left in result:
> in MS SQL i've used query:
>
> select a.id, b.name from a left outer join b on a.id=b.aid
>
> thank you in advance
>
> Algirdas Šakmanas
> IT manager
> +370 99 90369
> [EMAIL PROTECTED]
> Grafton Entertainment
> http://www.tvnet.lt
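For readers on 7.1, the SQL92 spelling from the original question works as-is, so the UNION workaround is only needed on 7.0 and earlier. Using the tables from Algirdas's example:

```sql
-- Rows of a with no match in b come back with b.name as NULL:
SELECT a.id, b.name
FROM a LEFT OUTER JOIN b ON a.id = b.aid;
```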
[SQL] Strategy for unlocking query
I have just done a rather large transaction via a telnet/psql session which executed OK. The problem occurred when the telnet session timed out before I could commit the rows. This must have locked the rows in question, because when I tried to vacuum the table it just hung. What is the best way of dealing with this problem, as I ended up stopping and restarting the postmaster?

Also, are functions within functions dealt with in a 'transactional' sense?

Cheers, Graham
[SQL] Re: Query broken under 7.1RC2
Kyle <[EMAIL PROTECTED]> writes:
> The SQL is a little strange because the subquery tries to reference
> individual records from the outer query and then sum them.
> The more I look at it, I wonder if it is not just bad SQL.

It is bad SQL, but pre-7.1 Postgres failed to catch it. Here's the CVS log entry:

2000-09-25 14:09 tgl

* src/backend/optimizer/plan/planner.c: System neglected to complain about ungrouped variables passed to sublinks when outer query contained aggregates but no GROUP clause.

Since you have a SUM(), the outer query is a grouped query, and so references to the ungrouped h.* columns in the SELECT targetlist are not well defined. Lord knows what result you were getting from 7.0 ...

regards, tom lane
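A hedged sketch of one way to make the query legal under 7.1's stricter check: grouping by the columns the sub-select references makes them well-defined. Note this changes the result to one row per invoice, which may or may not be what the original intended:

```sql
-- Same tables as in Kyle's test case, now with an explicit GROUP BY:
SELECT sum(i.quant * i.unit_price::float8),
       (SELECT coalesce(sum(amount), 0) FROM inc_link
         WHERE ordnum = h.ordnum AND hinum = h.hinum)
FROM cinv_hdr h, cinv_items i, ord_hdr o
WHERE o.ordnum = h.ordnum
  AND h.ordnum = i.ordnum
  AND h.hinum = i.hinum
  AND o.proj = 1051
  AND h.status = 'open'
GROUP BY h.ordnum, h.hinum;
```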
[SQL] Historical dates in Timestamp
Saluton,

I have a database with dates, some of which are historical dates. When I wanted to enter May 28th, 812 I got an error message, and had to use 0812 for the year instead:

albert=> CREATE DATABASE test;
CREATE DATABASE
albert=> \c test
You are now connected to database test.
test=> CREATE TABLE dates (d timestamp);
CREATE
test=> insert into dates (d) values ('812-5-28');
ERROR: Bad timestamp external representation '812-5-28'
test=> insert into dates (d) values ('0812-5-28');
INSERT 81801 1

The same happens for BC dates:

test=> insert into dates (d) values ('812-5-28 BC');
ERROR: Bad timestamp external representation '812-5-28 BC'
test=> insert into dates (d) values ('0812-5-28 BC');
INSERT 81802 1

Is it really reasonable to enforce that the year is at least four digits? I'm running:

test=> SELECT version();
version
------------------------------------------------------------------
PostgreSQL 7.0.2 on i586-pc-linux-gnulibc1, compiled by gcc 2.95.1
(1 row)

Bye, Albert.
Re: [SQL] Strategy for unlocking query
"Graham Vickrage" <[EMAIL PROTECTED]> writes: > I have just done a rather large transaction via a telnet/psql session which > executed OK. The problem occured when the telnet session timed out before I > could commit the rows. > This must have locked the rows in question because when I tried to vacuum > the table it just hung. Probably the backend process was still there and didn't yet realize that the client connection had died. It would notice eventually, but I think the timeout involved is typically an hour or so (this is determined by the TCP protocols and isn't under our control). > What is the best way of dealing with this problem as I ended up stopping and > restarting the postmaster? It would've been sufficient to find and SIGTERM the individual backend from the lost session. regards, tom lane ---(end of broadcast)--- TIP 3: if posting/reading through Usenet, please send an appropriate subscribe-nomail command to [EMAIL PROTECTED] so that your message can get through to the mailing list cleanly
Re: [SQL] Historical dates in Timestamp
"Albert REINER" <[EMAIL PROTECTED]> writes: > Is it really reasonable to enforce that the number of years is four > digits at least? I believe so. Without that cue it's pretty difficult for the timestamp parser even to figure out which field is intended to be the year, let alone whether you'd like 1900 or 2000 added to a two-digit year value. For example, with the default datestyle: regression=# select '12-5-28'::timestamp; ?column? 2028-12-05 00:00:00-05 (1 row) regression=# select '0012-5-28'::timestamp; ?column? - 0012-05-28 00:00:00 (1 row) regards, tom lane ---(end of broadcast)--- TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]
Re: [SQL] performance of functions - or rather lack of it
"Peter Galbavy" <[EMAIL PROTECTED]> writes: > We are building a postgresql based backend database for our 'hosting > provisioning' system. In a vain attempt to add some, what I thought, simple > performance tweaks, I thought I would try putting some of the larger and > more straighforward queries into functions. For everything else the same, > the functions are on the whole slower. Should they be ? Possibly. In your example, the planner sees dm.instance and mb.instance being compared to known literal values when you execute the statement directly, but to unknown values (function parameters) when you use a function. This might shift the selectivity estimates enough to result in choice of a different query plan, which could result in speedup or slowdown depending on how close to reality the estimates are. Without knowing which PG version you're using, what plans you're getting, or even whether you've VACUUM ANALYZEd lately, it's difficult to say more than that. > I would have thought that not sending the long SQL across the wire 1000 > times would have saved some time even without any potential query > optimisations by pre-parsing the SQL ? Unless your TCP connection is running across tin cans and string, the transfer time for the query text is negligible ... regards, tom lane ---(end of broadcast)--- TIP 2: you can get off all lists at once with the unregister command (send "unregister YourEmailAddressHere" to [EMAIL PROTECTED])
[SQL] Memory and performance
Hi all,

I have noted that Postgresql doesn't handle memory well. I made the tables/procedure (in the attached file) and ran it as "select bench(10, 5000)". This will give 5 records of inserts (5 x 1). (Well, I ran it on a P200 + 64MB of RAM, under Linux, with Postgres 7.0.2. On a more powerful machine you can try other values.) I get the following times as a result:

id | objname | benchtime
---+---------+----------
 1 | group 1 | 00:00:32
 2 | group 2 | 00:00:47
 3 | group 3 | 00:01:13
 4 | group 4 | 00:01:41
 5 | group 5 | 00:02:08
(5 rows)

Note that as memory use increases, the system becomes slow, even though the system has free memory to allocate (yes, 64MB is enough for this test). I haven't looked at the source code (yet), but I think the data structure used to keep the changed records is a kind of linked list, and to insert a new item you have to walk to the end of this list. Can it be optimized?

The system that I'm developing has about 25000 (persons) x 8 (exams) x 15 (answers per exam) = 3,000,000 records to process, and it is VERY SLOW.

thanks, Edipo Elder [[EMAIL PROTECTED]]
[SQL] Need to do an ALTER TABLE.
Hi,

I've got a live database running PSQL 7.0.3. I need to make a couple of changes to some of the table schemas, but need to preserve the data that currently exists in the tables. I've used pg_dump to make backups and have verified that I can indeed restore into an empty database from those backups; I'm glad I did this as I found out I have to use 'pg_dump -d' to get a useful backup. =)

I need to perform three slightly different ALTER TABLE type jobs:

1. I need to change a column from numeric(8,2) to text.
2. I need to add a column or two to a couple of tables.
3. I need to drop a column or two from a couple of tables.

Initially my thought was to create a new temporary table, SELECT INTO it from my original table, drop the original table, re-create it as I need it, and do a SELECT INTO from the temporary table back to the new table. The thing I'm unsure of is what will happen to referential integrity. The tables I need to modify are referenced by other tables; will those other tables realise that they should re-establish foreign key references? If not automatically, will VACUUM ANALYZE do this for me?

I'm going to experiment in my test database but figured this might be an interesting topic to discuss anyway. Also, if any good advice/answers exist perhaps they should go in the FAQ? Any suggestions would be appreciated.

Cheers, Jamu.
--
Jamu Kakar (Developer)
Expressus Design Studio, Inc.
[EMAIL PROTECTED]
708-1641 Lonsdale Avenue, North Vancouver, BC, V7M 2J5
V: (604) 903-6994
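The copy-and-recreate dance described above can be sketched roughly as follows. All table and column names here are hypothetical, and on 7.0 the foreign-key triggers pointing at the old table are lost and must be recreated by hand:

```sql
-- Stash the data aside:
SELECT * INTO TEMP TABLE orders_tmp FROM orders;

DROP TABLE orders;

-- Re-create with the new column definition:
CREATE TABLE orders (
    id     int4 PRIMARY KEY,
    amount text  -- was numeric(8,2)
);

-- Copy the data back; the exact cast spelling may need adjusting on 7.0:
INSERT INTO orders SELECT id, amount::text FROM orders_tmp;
```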
[SQL] Memory exhaustion
marketingbyoffice is a view. I start the postmaster, then issue the following query in the psql console. It goes to sleep forever :-) Why? The log is as follows:

--log--
010404.13:49:14.612 [1352] StartTransactionCommand
010404.13:49:14.612 [1352] query: SELECT * INTO TEMP TABLE marketingbylocation FROM marketingbyoffice;
010404.13:49:14.624 [1352] ProcessQuery
010404.13:52:28.691 [1352] FATAL 1: Memory exhausted in AllocSetAlloc()
010404.13:52:32.568 [1352] AbortCurrentTransaction
Re: [SQL] Need to do an ALTER TABLE.
[EMAIL PROTECTED] writes:
> I'm glad I did this as I found out I have
> to use 'pg_dump -d' to get a useful backup. =)

Why?

> The tables I need to modify are referenced by other
> tables- will those other tables realise that they should re-establish
> foreign key references? If not automatically, will VACUUM ANALYZE do
> this for me?

No, and no :-(.

regards, tom lane
[SQL] searching for dates
I am trying to select from a table all rows that have a date before 1/1/2001, or after some date; I am not sure how to do it. I try:

select date_part('year', start_date) from sometable;

and that works, but how do I get it to only show me the years between 1990 and 2001, or some other set of dates? I would really like to have a function to tell me if a date in my database is before or after a given date. Thanks for any help.
Re: [SQL] searching for dates
Birgit Jansen wrote:
>I am trying to select from a table all rows that have a date befor
>1/1/2001 or after some date. I am not sure how to do it. I try
>select date_part('year', start_date) from sometable;
>and that works but how do I get it to only show me the years between
>1990 and 2001 or some othere set of dates.
>
>I would realy like to be able to just have a function to tell me if a
>date in my datebase is befor or after a date?

SELECT * FROM table WHERE start_date BETWEEN '1990-01-01' AND '2001-12-31';

--
Oliver Elphick [EMAIL PROTECTED]
Isle of Wight http://www.lfix.co.uk/oliver
PGP: 1024R/32B8FAA1: 97 EA 1D 47 72 3F 28 47 6B 7E 39 CC 56 E4 C1 47
GPG: 1024D/3E1D0C1C: CA12 09E0 E8D5 8870 5839 932A 614D 4C34 3E1D 0C1C
"Thou will keep him in perfect peace, whose mind is stayed on thee, because he trusts in thee." Isaiah 26:3
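The "before one date or after another" form of the original question is just ordinary comparison operators; the cutoff dates here are illustrative:

```sql
-- Everything outside the 1990-2001 window:
SELECT * FROM sometable
WHERE start_date < '1990-01-01' OR start_date > '2001-12-31';
```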
[SQL] Re: Memory exhaustion
Srikanth Rao <[EMAIL PROTECTED]> wrote:
>marketingbyoffice is a view.

How is it defined? [*]

>010404.13:52:28.691 [1352] FATAL 1: Memory exhausted in AllocSetAlloc()

Have you followed the suggestion at http://www.postgresql.org/docs/faq-english.html#4.19 ?

HTH, Ray

[*] Not just out of idle curiosity; I'm aware of at least one operator/function that can trigger this behaviour in 7.0.3; see http://bugs.debian.org/92705

--
What is this talk of software 'releases'? Klingons do not 'release' software; our software ESCAPES, leaving a bloody trail of designers and quality assurance people in its wake!
[SQL] UNION in a VIEW?
I have a number of views that I'm bringing over from DB2 which have UNIONs in them. Some of the UNIONs have joins. The views are not working as expected (I'm running 7.1RC2). It appears that the where clause being applied to the view by the user is not being distributed properly to the selects.

It's probably easiest to give an example. There are two tables, itinerary and conjunctive. A conjunctive is for an itinerary which has > 4 stops (this is imposed by the source data, so we have to preserve the relationships). I've trimmed them a bit for clarity:

create table itinerary (
    ticket_nbr,
    origin,
    dest,
    flight_nbr
);

create table conjunctive (
    ticket_nbr,       -- the original ticket number
    conj_ticket_nbr   -- the ticket which extends it
);

What the view does is this:

create view ticket_conj as (
    select ticket_nbr, origin, dest, flight_nbr
    from itinerary
    union
    select c.ticket_nbr, i.origin, i.dest, i.flight_nbr
    from itinerary i
    inner join conjunctive c on i.ticket_nbr = c.conj_ticket_nbr
);

Then we issue queries of this form:

select * from ticket_conj where ticket_nbr = '9483';

Sadly, PostgreSQL goes off and munches for a *long* time, whereas DB2 and SQL Server return the desired results promptly. If I write a query like the view, but giving each select a where clause, it works (and faster than the other DBs). It really looks to me (and I am *not* a backend guru) that the where clause is not being bound to the desired value (c.ticket_nbr) in the second select. Does anyone have any ideas on this?

Thanks, Gordon.
--
It doesn't get any easier, you just go faster. -- Greg LeMond
Re: [SQL] max( bool )?
On Wed, 4 Apr 2001, Tom Lane wrote:

> Cedar Cox <[EMAIL PROTECTED]> writes:
> > And I get this notice:
> > NOTICE: PortalHeapMemoryFree: 0x0x824a6e8 not in alloc set!
> > NOTICE: Buffer Leak: [059] (freeNext=54, freePrev=58, relname=tblstsc1,
> > blockNum=0, flags=0x14, refcount=-4 -1)
>
> (a) What Postgres version is this?
>
> (b) Could we see the schemas for the tables? (pg_dump -s output is the
> best way)

Ver 7.0.2 (sorry, I meant to include this... I think)

Attached is (b). I have all kinds of horrid custom RI triggers. I can send the whole DB schema if necessary.

-Cedar

CREATE TABLE "tblstsc1" (
    "surid" character varying(50) NOT NULL,
    "sc1strictnessid" int4 NOT NULL,
    "sc1caption" text,
    "defaultoptionid" int4 NOT NULL,
    CONSTRAINT "tblstsc1_sc1caption" CHECK ((sc1caption <> ''::text)),
    PRIMARY KEY ("surid")
);

CREATE TRIGGER "tblstsc1_before" BEFORE INSERT OR DELETE OR UPDATE ON "tblstsc1" FOR EACH ROW EXECUTE PROCEDURE "tblstsc1_before" ();

CREATE CONSTRAINT TRIGGER "" AFTER INSERT OR UPDATE ON "tblstsc1" NOT DEFERRABLE INITIALLY IMMEDIATE FOR EACH ROW EXECUTE PROCEDURE "RI_FKey_check_ins" ('', 'tblstsc1', 'tblidstrictness', 'UNSPECIFIED', 'sc1strictnessid', 'strictnessid');

CREATE TABLE "tblstsc1options" (
    "sc1optionid" int4 NOT NULL,
    "sc1optionname" character varying(50) NOT NULL,
    "sc1optionvalue" float4,
    "surid" character varying(50) NOT NULL,
    CONSTRAINT "tblstsc1options_sc1optionvalue" CHECK ((sc1optionvalue > 0)),
    CONSTRAINT "tblstsc1options_sc1optionname" CHECK ((sc1optionname <> ''::"varchar")),
    PRIMARY KEY ("sc1optionid")
);

CREATE INDEX "ix_stsc1options_surid" on "tblstsc1options" using btree ( "surid" "varchar_ops" );

CREATE TRIGGER "tblstsc1options_before" BEFORE INSERT OR DELETE OR UPDATE ON "tblstsc1options" FOR EACH ROW EXECUTE PROCEDURE "tblstsc1options_before" ();
Re: [SQL] max( bool )?
Cedar Cox <[EMAIL PROTECTED]> writes:
>> And I get this notice:
>> NOTICE: PortalHeapMemoryFree: 0x0x824a6e8 not in alloc set!
>> NOTICE: Buffer Leak: [059] (freeNext=54, freePrev=58, relname=tblstsc1,
>> blockNum=0, flags=0x14, refcount=-4 -1)

> Ver 7.0.2 (sorry, I meant to include this... I think)

Okay, this is a known bug; it's fixed in 7.0.3. I'd advise staying away from that sort of construct (sub-select in the output list of a SELECT DISTINCT) until you update --- in some cases the consequences might be worse than just an annoying NOTICE :-(.

regards, tom lane
Re: [SQL] UNION in a VIEW?
"Gordon A. Runkle" <[EMAIL PROTECTED]> writes: > I have a number of views that I'm bringing over from DB2 which > have UNIONs in them. Some of the UNIONs have joins. > The views are not working as expected (I'm running 7.1RC2). > It appears that the where clause being applied to the view > by the user is not being distributed properly to the selects. You're correct, an outer WHERE clause will not be pushed down into the member selects of a UNION. (This hasn't really got anything to do with whether a VIEW is involved.) I haven't gotten round to convincing myself about whether that transformation is always valid, or what conditions it needs to be valid. A TODO item for some future release... regards, tom lane ---(end of broadcast)--- TIP 6: Have you searched our list archives? http://www.postgresql.org/search.mpl
Re: [SQL] Memory and performance
On 05 Apr 2001, Cedar Cox wrote:
>To this I say, remember that you are using a database!  I would split this
>into 3 tables (people, exams, answers).  Then only the 'answers' table
>would contain 3M records.  Should be a bit faster.  You don't want to have
>to store the person and exam with each answer.
>
>(If this is what you are doing then ignore me.  I don't take you for an
>idiot :)

Yeah... I'm doing this... (I'm not an idiot! :)  I'm just asking if there is any way to optimize PostgreSQL to handle a great quantity of tuples.

(I will try 7.1.  By the way... at the URI http://www.postgresql.org/ftpsite/dev/ we found an RC2 version, but at the URI ftp://ftp.postgresql.org/pub/dev/ we don't!)

Hugs,
Edipo Elder [[EMAIL PROTECTED]]
Re: [SQL] Need to do an ALTER TABLE.
Tom,

That does bring up a related question: when are we gonna get DROP COLUMN capability?  Currently my tables are littered with unused columns because I can't remove them without blowing my referential integrity and views to heck.

-Josh Berkus

______AGLIO DATABASE SOLUTIONS______
Josh Berkus -- complete information technology and data management solutions for law firms, small businesses, and non-profit organizations.
[EMAIL PROTECTED] | (415) 565-7293 | fax 621-2533 | San Francisco
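Until DROP COLUMN exists, the usual workaround is to rebuild the table without the unwanted column -- which, as the message notes, forces you to re-create every index, constraint, trigger, and dependent view by hand. A sketch with hypothetical table and column names:

```sql
BEGIN;
CREATE TABLE clients_new AS
  SELECT id, name, phone          -- everything except the dead column
    FROM clients;
DROP TABLE clients;
ALTER TABLE clients_new RENAME TO clients;
-- re-create indexes, constraints, triggers, and views here by hand
COMMIT;
```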
Re: [SQL] [7.0.3] optimizing a LIKE query ...
The Hermit Hacker <[EMAIL PROTECTED]> writes: > I'm figuring that if I can somehow get the query (using subselects, > maybe?), to have the LIKE part of the query work only on the 6k records > returned by the "=" part of it, the overall results should be faster ... In 7.0.* I think the only way to do that is to select into a temp table and then apply the LIKE while selecting from the temp table. In 7.1 you could possibly force the order by using a subselect (although offhand I think the planner might be smart enough to see through that, and do what it thinks is right anyway). The real problem is the planner thinks that LIKE '%http://www.postgresql.org/%%' is really selective; it has no idea that most of your table mentions pgsql.org URLs :-(. We need better statistics to fix this properly. (On my list for 7.2.) regards, tom lane ---(end of broadcast)--- TIP 4: Don't 'kill -9' the postmaster
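The temp-table approach Tom describes for 7.0.* can be sketched like this (table and column names are hypothetical):

```sql
-- Materialize the selective "=" condition first, then run the
-- expensive LIKE over only those ~6k rows.
SELECT url, title
  INTO TEMP tmp_subset
  FROM pages
 WHERE site_id = 42;               -- the selective "=" part

SELECT url, title
  FROM tmp_subset
 WHERE title LIKE '%postgresql%';  -- LIKE now scans only the subset
```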
Re: [SQL] UNION in a VIEW?
Tom, Gordon,

> You're correct, an outer WHERE clause will not be pushed down into
> the member selects of a UNION.  (This hasn't really got anything
> to do with whether a VIEW is involved.)  I haven't gotten round to
> convincing myself about whether that transformation is always valid,
> or what conditions it needs to be valid.  A TODO item for some
> future release...

Hmmm ... I'm pretty used to an external WHERE clause being applied to the output of the view, rather than pushed down into the member selects of the UNION, in the same way as if the UNION query were a subselect.  Coming from a SQL Server background, I'd actually find the suggested behavior rather confusing (as well as tough for you guys to implement).

-Josh

______AGLIO DATABASE SOLUTIONS______
Josh Berkus -- complete information technology and data management solutions for law firms, small businesses, and non-profit organizations.
[EMAIL PROTECTED] | (415) 565-7293 | fax 621-2533 | San Francisco
[SQL] Re: UNION in a VIEW?
In article <[EMAIL PROTECTED]>, "Tom Lane" <[EMAIL PROTECTED]> wrote:

> You're correct, an outer WHERE clause will not be pushed down into the
> member selects of a UNION.  (This hasn't really got anything to do with
> whether a VIEW is involved.)  I haven't gotten round to convincing
> myself about whether that transformation is always valid, or what
> conditions it needs to be valid.  A TODO item for some future release...

Thanks, Tom.  Not the answer I was hoping for...  This seems to really reduce the utility of UNIONs, especially within VIEWs.

What DB2 and SQL Server appear to be doing is binding the WHERE clause to the columns in the positions corresponding to the column labels in the first select.  So long as that is communicated to the user, I would think that it would be valid.  At the very least, it would be consistent with the other major RDBMSs.

I suppose it's too late to put this in for 7.1... ;-)

Gordon.
--
It doesn't get any easier, you just go faster.
   -- Greg LeMond
RE: [SQL] RE: serial type; race conditions
It seems to just feel like conflicting requirements, so it's a tug-of-war. I've always done it by doing all the processing I can and then, from inside a transaction, do update seed from seed_table set seed=seed+1 where id='abc'; insert into some_table values ((select seed from seed_table where id='abc'), other_stuff); The processing would be concurrent and only the update & insert would be "serialized". It would be portable and shouldn't contain holes, but is slower than sequences. Gerald. -Original Message- From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]]On Behalf Of Jeff Eckermann Sent: Thursday, March 29, 2001 10:48 AM To: 'Andrew Perrin' Cc: PgSQL-SQL Subject: [SQL] RE: serial type; race conditions Probably just me: but I don't see the point. Consider: * User 1 commences insert transaction: grabs nextval(sequence), max(foo) * User 2 commences insert transaction: grabs nextval(sequence), max(foo) * User 1 commits * User 2 commits (insert has sequence value one higher than for User 1, but same value for max(foo) + 1), or * If foo has a unique constraint, transaction 2 will roll back. Either way, I don't see what has been gained. All of the messages I have read on this subject conclude with the same point: choice is to: * accept unique sequence with holes * accept loss of concurrency (as in the example above). Or am I just missing the point? > -Original Message- > From: Andrew Perrin [SMTP:[EMAIL PROTECTED]] > Sent: Thursday, March 29, 2001 8:46 AM > To: [EMAIL PROTECTED] > Cc: PgSQL-SQL > Subject: Re: serial type; race conditions > > I ditto what Bruce said - trying to get a true sequence without gaps is a > losing battle. Why don't you, instead, use a serial column as the real > sequence, and then a trigger that simply inserts max(foo) + 1 in a > different column? Then when you need to know the column, do something > like: > > SELECT number_i_care_about FROM table WHERE serial_number = > currval('serial_number_seq'); > > ap > > -- > Andrew J Perrin - Ph.D. 
Candidate, UC Berkeley, Dept. of Sociology > (Soon: Asst Professor of Sociology, U of North Carolina, Chapel Hill) > [EMAIL PROTECTED] - http://www.unc.edu/~aperrin > > On Thu, 29 Mar 2001, Bruce Momjian wrote: > > > > How does currval work if you are not inside a transaction. I have > > > been experimenting with inserting into a table that has a sequence. > > > If the insert fails (not using a transaction) because of bad client > input > > > then the next insert gets the proper next number in the sequence. > > > > If you are in a transaction, and the INSERT succeeds but the transaction > > rolls back, the sequence does not get reused. Each backend has a local > > variable that holds the most recent sequence assigned. That is how > > currval works. > > > > > > > > given sequence 1,2,3,4,5 exists > > > insert into table date 1/111/01 (obviously wrong) insert fails... > > > try again with good data, insert succeeds and gets number 6 in the > > > sequence. > > > > > > i'm getting what I want. A sequence number that does not increment > > > on a failed insert. However, how do I get the assigned sequence > > > number with currval when I am not using a transaction? What > > > happens when multiple users are inserting at the same time? > > > > > > I am trying to create a sequence with out any "missing" numbers. If > > > there is a failure to insert, and a sequence number is "taken". I want > > > > the empty row. > > > > > > Thanks, it is getting clearer > > > > You really can't use sequences with no gaps. Sequence numbers are not > > _held_ until commit because it would block other backends trying to get > > sequence numbers. 
> >
> > --
> >   Bruce Momjian                      |  http://candle.pha.pa.us
> >   [EMAIL PROTECTED]                  |  (610) 853-3000
> >   +  If your life is a hard drive,   |  830 Blythe Avenue
> >   +  Christ can be your backup.      |  Drexel Hill, Pennsylvania 19026
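Gerald's seed-table pattern, written out as a sketch: the UPDATE takes a row-level lock that is held until COMMIT, so concurrent inserters queue behind it -- a gapless sequence bought at the price of serializing the insert path.

```sql
BEGIN;
UPDATE seed_table SET seed = seed + 1 WHERE id = 'abc';
INSERT INTO some_table
  SELECT seed, 'other stuff' FROM seed_table WHERE id = 'abc';
COMMIT;
-- If the transaction aborts, the UPDATE rolls back too, so no
-- number is ever skipped.
```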
Re: [SQL] Memory and performance
[EMAIL PROTECTED] writes:
> I have noted that PostgreSQL doesn't handle memory well.  I have
> made the tables/procedure (in attached file) and run it as "select
> bench(10, 5000)".  This will give 50000 record inserts.  (Well, I ran it
> on a P200 + 64MB of RAM, under Linux, and Postgres 7.0.2.  On a more
> powerful machine, you can try other values.)
> I get as result, the following times:
>  id | objname | benchtime
> ----+---------+-----------
>   1 | group 1 | 00:00:32
>   2 | group 2 | 00:00:47
>   3 | group 3 | 00:01:13
>   4 | group 4 | 00:01:41
>   5 | group 5 | 00:02:08
> (5 rows)

This is an inefficiency in handling of foreign-key triggers.  It's fixed for 7.1 --- in current sources I get

 id | objname  | benchtime
----+----------+-----------
  1 | group 1  | 00:00:03
  2 | group 2  | 00:00:03
  3 | group 3  | 00:00:03
  4 | group 4  | 00:00:03
  5 | group 5  | 00:00:03
  6 | group 6  | 00:00:03
  7 | group 7  | 00:00:03
  8 | group 8  | 00:00:03
  9 | group 9  | 00:00:03
 10 | group 10 | 00:00:03
(10 rows)

			regards, tom lane
[SQL] [Tip] Using list-aggregates for faster batching
I've written a HOWTO on how to create new aggregate functions for building lists (comma lists, HTML lists, etc.).  It explains the purpose of these, and gives an example of how to create one in plpgsql.  The HOWTO is written for the Zope site, but it's not really Zope- or Python-specific.

http://www.zope.org/Members/pupq/pg_in_aggregates

Hoping someone finds it useful.

Cheers,
--
Joel Burton <[EMAIL PROTECTED]>
Director of Information Systems, Support Center of Washington
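For flavor, a minimal comma-list aggregate of the kind the HOWTO covers might look like this in 7.x-era syntax (the function and aggregate names here are my own, not taken from the HOWTO):

```sql
-- State-transition function: append the next value to the running list.
CREATE FUNCTION comma_cat(text, text) RETURNS text AS '
  SELECT CASE WHEN $1 = ''''
              THEN $2
              ELSE $1 || '', '' || $2
         END;
' LANGUAGE 'sql';

CREATE AGGREGATE comma_list (
  BASETYPE = text,
  SFUNC    = comma_cat,
  STYPE    = text,
  INITCOND = ''
);

-- e.g.  SELECT dept, comma_list(name) FROM staff GROUP BY dept;
```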
[SQL] Index on View ?
Is it possible (feasible) to create an index on a view?  We have a large table and a defined sub-set (view) of this table; would it be possible to keep an index of the sub-set?

Keith
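A view itself cannot be indexed -- it is just a stored query. But if the sub-set is defined by a WHERE clause, a partial index on the base table (re-enabled in releases after 7.1) maintains an index over exactly those rows. A hedged sketch with hypothetical names:

```sql
CREATE VIEW active_accounts AS
  SELECT * FROM accounts WHERE active = true;

-- Indexes only the rows the view exposes; queries whose WHERE clause
-- implies "active = true" can use it.
CREATE INDEX ix_accounts_active_name
    ON accounts (name)
 WHERE active = true;
```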
Re: [SQL] Memory and performance
[EMAIL PROTECTED] wrote:
>
> Hi all,
>
> I have noted that PostgreSQL doesn't handle memory well.  I have
> made the tables/procedure (in attached file) and run it as "select
> bench(10, 5000)".  This will give 50000 record inserts.  (Well, I ran it
> on a P200 + 64MB of RAM, under Linux, and Postgres 7.0.2.  On a more
> powerful machine, you can try other values.)

That's 50,000 inserts in one transaction - have you tried 50 transactions of 1000 inserts?

> I get as result, the following times:
>   5 | group 5 | 00:02:08
>
> Note that, with the memory increase, the system becomes slow, even if the
> system has free memory to allocate (yes, 64MB is enough for this test).  I
> haven't seen the source code (yet), but I think that the data structure used
> to keep the changed records is a kind of linked list; and to insert a new
> item, you have to walk to the end of this list.  Can it be optimized?

I don't fancy your chances before 7.1 ;-)

> The system that I'm developing has about 25000 (persons) x 8 (exams)
> x 15 (answers per exam) = 3000000 records to process and it is VERY SLOW.

If you need to import large quantities of data, look at the COPY command; that tends to be faster.

- Richard Huxton
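Both suggestions in this reply can be sketched concretely (the file path and table name are hypothetical):

```sql
-- Batch the inserts: 50 transactions of 1000 rows each instead of one
-- 50,000-row transaction.
BEGIN;
INSERT INTO answers VALUES (...);  -- repeat for 1000 rows
COMMIT;

-- Or bulk-load with COPY, which avoids per-statement parse/plan
-- overhead entirely (7.x syntax):
COPY answers FROM '/tmp/answers.dat' USING DELIMITERS '\t';
```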
[SQL] performance inconsistency
Good day,

We're running a week-old CVS snapshot of PostgreSQL 7.1, and I'm not sure if this performance inconsistency is specific to it, or if this is just something in PostgreSQL in general, but it seems kind of odd, and I could use some help here. ;)

I have run two queries in a table full of invoices where the only difference between the queries is the WHERE clause:

SELECT COUNT(DISTINCT l."ORDER_NUM") AS line_orders,
       SUM(l."GROSS_PROFIT" + "REPLACEMENT_COST") AS grand_total,
       SUM(l."LIST_PRICE" * l."SHIPPED_QUANTITY") AS grand_total_alternative,
       SUM(l."GROSS_PROFIT") AS grand_total_profit
  FROM cu_invoice_li1_tbl l
       INNER JOIN cu_invoice_03_tbl h ON l."ORDER_NUM" = h."ORDER_NUM"
 WHERE substr("ORDER_NUM", 1, 1) != 'W';  -- (Orders NOT prefixed with W.)

OR...

 WHERE substr("ORDER_NUM", 1, 1) = 'I';   -- (Orders prefixed with I.)

The first query returns more rows than the second (because it includes orders prefixed with I, C, F, etc), whereas the second only returns orders prefixed with I.  It seems that the logic of identifying a single character as *not* being a single value would be identical, or at least similar, in efficiency to the logic of a single character *being* a single value.  However, the first query runs in about 10-15 seconds, and the second query ran for over 40 minutes before I cancelled it.

Now, I can easily use a series of "!=" statements to get it down so that it's only the 'I' orders (which runs even faster, at about 5-6 seconds!), but can anyone explain this to me? :)  Are "!=" substring evaluations inherently faster by an obscene order of magnitude, or is something really wrong here?

J.
Re: [SQL] performance inconsistency
Phuong Ma <[EMAIL PROTECTED]> writes: > WHERE substr("ORDER_NUM", 1, 1) != 'W'; # (Orders NOT prefixed with W.) > OR... > WHERE substr("ORDER_NUM", 1, 1) = 'I'; # (Orders prefixed with I.) > However, the first query runs in about 10-15 seconds, and the second > query > ran for over 40 minutes before I cancelled it. What query plans does EXPLAIN show for each case? (And have you done VACUUM ANALYZE lately?) The change in the WHERE clause may be altering the planner's selectivity guesstimates enough to cause choice of a less appropriate plan. Hard to tell without seeing what EXPLAIN has to say, though. regards, tom lane ---(end of broadcast)--- TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]
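What Tom is asking for, spelled out against the queries upthread:

```sql
-- Refresh the planner's statistics first...
VACUUM ANALYZE cu_invoice_li1_tbl;
VACUUM ANALYZE cu_invoice_03_tbl;

-- ...then compare the plans the two WHERE clauses produce.
EXPLAIN
SELECT COUNT(DISTINCT l."ORDER_NUM")
  FROM cu_invoice_li1_tbl l
       INNER JOIN cu_invoice_03_tbl h ON l."ORDER_NUM" = h."ORDER_NUM"
 WHERE substr(l."ORDER_NUM", 1, 1) = 'I';
```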