Re: [SQL] Sorry, too many clients already
> > When I'm trying to connect I have this error message:
> > Something unusual has occurred to cause the driver to fail. Please report
> > this exception: java.sql.SQLException: Sorry, too many clients already.

I also hit this error yesterday. The default PostgreSQL limit for incoming
connections is 32 (actually 30 for ordinary users, because 2 are reserved
for superusers). You can change this in postgresql.conf. In our case the
problem was pgAdmin, which opens a new connection for every database you
click on. On Linux you can use `ps ax | grep postgres` to see the active
connections.

Tambet

---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster
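For reference, a sketch of the two things mentioned above: the limit lives in postgresql.conf, and (from 7.2 on) the pg_stat_activity view shows connections from inside the database. Exact column names vary by version, so treat this as an outline rather than exact syntax:

```sql
-- postgresql.conf (these versions need a postmaster restart to apply):
--   max_connections = 64

-- From inside the database (PostgreSQL 7.2+), list the active backends:
SELECT * FROM pg_stat_activity;

-- Or just count them:
SELECT count(*) FROM pg_stat_activity;
```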
[SQL] SQL list table names
Is it possible to execute an SQL query to list the tables' names? You can do
it in psql using \dt, but I'm asking about the underlying SQL statement,
because I want to run the query from a script.

Thanks.

alviN

---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]
Re: [SQL] SQL list table names
If you start psql like so:

    psql -E

then all the SQL behind the \d-type commands is displayed for you.

adam

> Is it possible to execute an SQL query to list the tables' names? You can
> do it in psql using \dt, but I'm asking about the underlying SQL
> statement, because I want to run the query from a script.
>
> alviN

--
This message has been scanned for viruses and dangerous content by
MailScanner, and is believed to be clean.
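For the archives, a hedged sketch of the kind of catalog query that psql -E reveals. The first form assumes PostgreSQL 7.3+, where the pg_tables view carries a schemaname column; the second works on older releases too:

```sql
-- On 7.3+, the pg_tables view is the easy route:
SELECT tablename FROM pg_tables WHERE schemaname = 'public';

-- Lower-level form: ordinary tables (relkind 'r') from pg_class,
-- skipping the system tables.
SELECT relname FROM pg_class
WHERE relkind = 'r' AND relname NOT LIKE 'pg_%';
```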
[SQL] 7.3.1 index use / performance
Hi,

I am just at the stage of having migrated my test system to 7.3.1, and I am
experiencing some performance problems.

I have a table "noon":

          Table "public.noon"
    Column    |           Type           | Modifiers
--------------+--------------------------+-----------
 v_code       | character varying(4)     |
 log_no       | bigint                   |
 report_date  | date                     |
 report_time  | time without time zone   |
 voyage_no    | integer                  |
 charterer    | character varying(12)    |
 port         | character varying(24)    |
 duration     | character varying(4)     |
 rotation     | character varying(9)     |
 ...

with a total of 278 columns. It has indexes:

Indexes: noonf_date btree (report_date),
         noonf_logno btree (log_no),
         noonf_rotation btree (rotation text_ops),
         noonf_vcode btree (v_code),
         noonf_voyageno btree (voyage_no)

On the test 7.3.1 system (FreeBSD 4.7-RELEASE-p2, Celeron 1.2GHz, 400Mb,
with 168Mb for pgsql), I get:

dynacom=# EXPLAIN ANALYZE select
FUELCONSUMPTION,rpm,Steam_Hours,voyage_activity,ldin from noon where
v_code='4500' and rotation='NOON ' and report_date between
'2002-01-07' and '2003-01-07';

 Index Scan using noonf_date on noon  (cost=0.00..4.46 rows=1 width=39)
   (actual time=0.27..52.89 rows=259 loops=1)
   Index Cond: ((report_date >= '2002-01-07'::date) AND (report_date <=
   '2003-01-07'::date))
   Filter: ((v_code = '4500'::character varying) AND (rotation =
   'NOON '::character varying))
 Total runtime: 53.98 msec
(4 rows)

After I drop the noonf_date index I actually get better performance, because
the backend now uses the more appropriate index, noonf_vcode:

dynacom=# EXPLAIN ANALYZE select
FUELCONSUMPTION,rpm,Steam_Hours,voyage_activity,ldin from noon where
v_code='4500' and rotation='NOON ' and report_date between
'2002-01-07' and '2003-01-07';

 Index Scan using noonf_vcode on noon  (cost=0.00..3122.88 rows=1 width=39)
   (actual time=0.16..13.92 rows=259 loops=1)
   Index Cond: (v_code = '4500'::character varying)
   Filter: ((rotation = 'NOON '::character varying) AND (report_date >=
   '2002-01-07'::date) AND (report_date <= '2003-01-07'::date))
 Total runtime: 14.98 msec
(4 rows)

On the pgsql 7.2.3 development system (RH Linux 2.4.7, PIII 1 GHz, 1Mb, with
168M for pgsql), I always get the right index use:

dynacom=# EXPLAIN ANALYZE select
FUELCONSUMPTION,rpm,Steam_Hours,voyage_activity,ldin from noon where
v_code='4500' and rotation='NOON ' and report_date between
'2002-01-07' and '2003-01-07';
NOTICE:  QUERY PLAN:

Index Scan using noonf_vcode on noon  (cost=0.00..3046.38 rows=39 width=39)
(actual time=0.09..8.55 rows=259 loops=1)
Total runtime: 8.86 msec

EXPLAIN

Am I missing something? Is this reasonable behaviour?

P.S. Yes, I have vacuumed/analyzed both systems before the queries were
issued.

==
Achilleus Mantzios
S/W Engineer
IT dept
Dynacom Tankers Mngmt
Nikis 4, Glyfada
Athens 16610
Greece
tel: +30-10-8981112
fax: +30-10-8981877
email: [EMAIL PROTECTED]
       [EMAIL PROTECTED]

---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
subscribe-nomail command to [EMAIL PROTECTED] so that your message can get
through to the mailing list cleanly
[SQL] Inheritance
Hi people,

Supposing I have the tables:

create table cities (id int, name varchar, primary key (id));
create table centers (state varchar(2)) inherits (cities);

and the tuple:

insert into cities values (1, 'Lajeado');

What do I do if I want to make this city a center?

Thanks,

Nasair Junior da Silva
Linux User: 246054
CPD - Desenvolvimento
Univates - Centro Universitário
[EMAIL PROTECTED]

---(end of broadcast)---
TIP 6: Have you searched our list archives?
http://archives.postgresql.org
Re: [SQL] Inheritance
On 7/1/03 11:42 am, "Nasair Junior da Silva" <[EMAIL PROTECTED]> wrote:

> Supposing I have the tables:
>
> create table cities (id int, name varchar, primary key (id));
> create table centers (state varchar(2)) inherits (cities);
>
> and the tuple:
>
> insert into cities values (1, 'Lajeado');
>
> What do I do if I want to make this city a center?

If I understand you correctly, you probably want to do this instead:

insert into centers values (1, 'Lajeado', 'AZ');

where AZ is your state.

HTH

adam

---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?
http://www.postgresql.org/users-lounge/docs/faq.html
Re: [SQL] Inheritance
In this case, when I do

select * from cities;

I'll have two tuples with the name 'Lajeado'. Is that the only way?

Thanks again,

Nasair Júnior da Silva
Lajeado - RS - Brasil.

Em Tue, 07 Jan 2003 12:02:13 +, Adam Witney <[EMAIL PROTECTED]> escreveu:

> If I understand you correctly, you probably want to do this instead:
>
> insert into centers values (1, 'Lajeado', 'AZ');
>
> where AZ is your state.

---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
(send "unregister YourEmailAddressHere" to [EMAIL PROTECTED])
Re: [SQL] 7.3.1 index use / performance
On Tue, 7 Jan 2003, Tomasz Myrta wrote:

> Maybe it is not an answer to your question, but why don't you help
> Postgres by yourself?

Thanks, but I don't think the issue here is to help PostgreSQL by myself. I
can always stick to 7.2.3, or use indexes that 7.3.1 will acknowledge, such
as noonf_vcode_date on noon (v_code, report_date). (Unfortunately, when I
create that noonf_vcode_date index it is only used until the next VACUUM
ANALYZE; hackers, is this an issue too?) But these options are not
interesting from a postgresql perspective :)

> For this kind of query it's better to drop the index on report_date: your
> report period is one year, and the answer to this condition is 10% of the
> records (I suppose).

I cannot drop the index on report_date, since a lot of other queries need
it.

> It would be better to change the 2 indexes on v_code and rotation into
> one index based on both fields. What kind of queries do you have? How
> many records does each "where" condition return? Use indexes on the
> fields whose conditions result in the smallest number of rows.
>
> Regards,
> Tomasz Myrta

==
Achilleus Mantzios
S/W Engineer, IT dept, Dynacom Tankers Mngmt
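The combined index discussed above would be created like this (a sketch using the names from the thread; the ANALYZE afterwards is my addition, to refresh the statistics the planner uses):

```sql
CREATE INDEX noonf_vcode_date ON noon (v_code, report_date);

-- Refresh planner statistics so the new index can be considered:
ANALYZE noon;
```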
Re: [SQL] Inheritance
Yes, if you do

insert into cities values (1, 'Lajeado');
insert into centers values (1, 'Lajeado', 'AZ');

then select * from cities will give you

1 Lajeado
1 Lajeado

and select * from centers will give

1 Lajeado AZ

However, if rather than duplicating the insert you just do

insert into centers values (1, 'Lajeado', 'AZ');

then select * from cities will give you

1 Lajeado

and select * from centers will give

1 Lajeado AZ

Is this what you want?

adam

> In this case, when I do select * from cities; I'll have two tuples with
> the name 'Lajeado'. Is that the only way?
>
> Nasair Júnior da Silva
> Lajeado - RS - Brasil.

--
This message has been scanned for viruses and dangerous content by
MailScanner, and is believed to be clean.
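A side note not raised in the thread: with inheritance, a SELECT on the parent includes child-table rows by default, and ONLY restricts it to the parent alone. That gives another way to look at the two result sets above:

```sql
-- All cities, including those stored in the centers child table:
SELECT * FROM cities;

-- Only rows stored directly in cities, excluding centers:
SELECT * FROM ONLY cities;
```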
[SQL] A problem about alter table
Hi, all

I'm using PostgreSQL 7.2.3. The following statement always causes a parser
error, "parse error at or near NOT". Please advise; thank you in advance.

ALTER TABLE _acct_group1
    ALTER groupkey SET NOT NULL;

Jack
Re: [SQL] A problem about alter table
jack wrote:

> I'm using PostgreSQL 7.2.3. The following statement always causes a
> parser error, "parse error at or near NOT".
>
> ALTER TABLE _acct_group1
>     ALTER groupkey SET NOT NULL;

From the Postgres 7.2 documentation:

"In the current implementation of ADD COLUMN, default and NOT NULL clauses
for the new column are not supported. You can use the SET DEFAULT form of
ALTER TABLE to set the default later. (You may also want to update the
already existing rows to the new default value, using UPDATE.)"

There is nothing about setting NOT NULL on existing fields. I think you
have to create a trigger instead of altering the table.

Regards,
Tomasz Myrta
Re: [SQL] A problem about alter table
But the PostgreSQL 7.2 reference manual shows an ALTER TABLE form such as:

ALTER TABLE [ ONLY ] table [ * ]
    ALTER [ COLUMN ] column { SET | DROP } NOT NULL

Do you mean this one hasn't been implemented?

Jack

- Original Message -
From: "Tomasz Myrta" <[EMAIL PROTECTED]>
Sent: Tuesday, January 07, 2003 11:31 PM
Subject: Re: [SQL] A problem about alter table

> There is nothing about setting NOT NULL on existing fields. I think you
> have to create a trigger instead of altering the table.
>
> Regards,
> Tomasz Myrta
Re: [SQL] A problem about alter table
jack wrote:

> But the PostgreSQL 7.2 reference manual shows an ALTER TABLE form such
> as:
>
> ALTER TABLE [ ONLY ] table [ * ]
>     ALTER [ COLUMN ] column { SET | DROP } NOT NULL
>
> Do you mean this one hasn't been implemented?

Strange. I don't have such an ALTER TABLE in my 7.2 documentation. There is
only ALTER ... SET | DROP DEFAULT.

Tomasz
Re: [SQL] A problem about alter table
On Wed, 8 Jan 2003, jack wrote:

> But the PostgreSQL 7.2 reference manual shows an ALTER TABLE form such
> as:
>
> ALTER TABLE [ ONLY ] table [ * ]
>     ALTER [ COLUMN ] column { SET | DROP } NOT NULL
>
> Do you mean this one hasn't been implemented?
>
> Jack

This syntax is valid in 7.3, not 7.2.

==
Achilleus Mantzios
S/W Engineer, IT dept, Dynacom Tankers Mngmt
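Summarizing the thread as a sketch (the CHECK-constraint workaround and its constraint name are illustrative, not from the posts):

```sql
-- PostgreSQL 7.3 and later:
ALTER TABLE _acct_group1 ALTER COLUMN groupkey SET NOT NULL;

-- On 7.2, where SET NOT NULL is not implemented, the same rule can be
-- enforced with a CHECK constraint (constraint name is made up here):
ALTER TABLE _acct_group1
    ADD CONSTRAINT groupkey_not_null CHECK (groupkey IS NOT NULL);
```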
[SQL] insert rule doesn't see id field
Two separate problems, really, but first the SQL:

CREATE SEQUENCE person_id_seq;

CREATE TABLE person (
    name_last  VARCHAR(50) NOT NULL,
    name_first VARCHAR(50) NOT NULL,
    id INTEGER DEFAULT nextval('person_id_seq') PRIMARY KEY
);

CREATE INDEX person_name_idx ON person (name_last, name_first);

CREATE TRIGGER person_id_noup
    BEFORE UPDATE ON person
    FOR EACH ROW EXECUTE PROCEDURE noup('id');

CREATE RULE person_insert AS
    ON INSERT TO person
    DO INSERT INTO person_log (name_last, name_first, mod_type, person_id)
       VALUES (new.name_last, new.name_first, 'I', new.id);

(Problem 1) My insert rule creates a record in person_log just fine. It
inserts values for all of the fields except person_id. Why doesn't new.id
contain a value? The corresponding update and delete rules work as
expected.

(Problem 2) I thought that the idea behind noup was to protect single
columns from update. However, when I apply the noup trigger as above, I
can't update /any/ column. Is this the intended behaviour? e.g.

directory=# select * from person;
 name_last | name_first | id
-----------+------------+----
 Peterson  | Ronald     |  1
 Humbert   | Humbert    |  2
(2 rows)

directory=# update person set name_first='Ron' where name_first='Ronald';
NOTICE:  id: update not allowed
UPDATE 0

--
Ron Peterson
Network & Systems Manager
Mount Holyoke College
http://www.mtholyoke.edu/~rpeterso
Re: [SQL] insert rule doesn't see id field
BTW, PostgreSQL 7.2.1-2woody2 on Debian.

--
Ron Peterson
Re: [SQL] A problem about alter table
--- jack <[EMAIL PROTECTED]> wrote:

> I'm using PostgreSQL 7.2.3. The following statement always causes a
> parser error, "parse error at or near NOT". Please advise; thank you in
> advance.
>
> ALTER TABLE _acct_group1
>     ALTER groupkey SET NOT NULL;

I believe you will need to use:

ALTER TABLE _acct_group1
    ADD CONSTRAINT CHECK groupkey NOT NULL;

I haven't checked that syntax for correctness, but it's covered in the docs
on ALTER TABLE.
Re: [SQL] [PERFORM] 7.3.1 index use / performance
On Tue, 7 Jan 2003, Achilleus Mantzios wrote:

> i am just in the stage of having migrated my test system to 7.3.1
> and i am experiencing some performance problems.
>
> [table definition and indexes snipped]
>
> Index Scan using noonf_date on noon  (cost=0.00..4.46 rows=1 width=39)
>   (actual time=0.27..52.89 rows=259 loops=1)
> ...
> Index Scan using noonf_vcode on noon  (cost=0.00..3122.88 rows=1
>   width=39) (actual time=0.16..13.92 rows=259 loops=1)

What do the statistics for the three columns actually look like, and what
are the real distributions and counts like? Given an estimated cost of
around 4 for the first scan, my guess would be that it's not expecting a
lot of rows between 2002-01-07 and 2003-01-07, which would make that a
reasonable plan.
Re: [SQL] insert rule doesn't see id field
Ron Peterson <[EMAIL PROTECTED]> writes:

> CREATE RULE person_insert AS
>     ON INSERT TO person
>     DO INSERT INTO person_log (name_last, name_first, mod_type, person_id)
>        VALUES (new.name_last, new.name_first, 'I', new.id);
> [where id is a serial column]
> My insert rule creates a record in person_log just fine. It inserts
> values for all of the fields except person_id. Why doesn't new.id
> contain a value?

This is a bug in 7.2.*. It's fixed in 7.3. However, your rule will still
not work the way you would like, because rules are macros: the default
expression for id will get evaluated once in the rule and once in your
original query, leading to two different sequence numbers getting inserted.
The only way to make this example work is to issue the log insertion from a
trigger, not a rule.

> (Problem 2)
> I thought that the idea behind noup was to protect single columns from
> update. However, when I apply the noup trigger as above, I can't
> update /any/ column. Is this the intended behaviour?

Idly looking at the source code for contrib/noupdate/noup.c, I don't
believe that it has ever worked as advertised: it seems to reject any
non-null value for the target column, independently of whether the value is
the same as before (which is what I'd have thought it should do). Is
anyone interested in fixing it? Or should we just remove it? If it's been
there since 6.4 and you're the first person to try to use it, as seems to
be the case, then I'd have to say that it's a waste of space in the
distribution.

regards, tom lane
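A minimal sketch of the trigger-based logging suggested above, using the person/person_log tables from the original post. The function name is made up for illustration; note that on 7.2 the return type must be spelled OPAQUE rather than trigger, and plpgsql must be installed in the database (e.g. with createlang):

```sql
CREATE FUNCTION log_person_insert() RETURNS trigger AS '
BEGIN
    -- NEW.id holds the value actually inserted into person, so the log
    -- row matches it instead of drawing a second sequence number.
    INSERT INTO person_log (name_last, name_first, mod_type, person_id)
    VALUES (NEW.name_last, NEW.name_first, ''I'', NEW.id);
    RETURN NEW;
END;
' LANGUAGE 'plpgsql';

CREATE TRIGGER person_insert_log AFTER INSERT ON person
    FOR EACH ROW EXECUTE PROCEDURE log_person_insert();
```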
Re: [SQL] weighting (the results of) a query ?
Thanks to Len Morgan for the hints to get to this:

SELECT *, '4' as result FROM images
WHERE iptc_release_date < 'today'
  AND iptc_priority BETWEEN 1 AND 5 AND iptc_caption ~* 'term'
UNION
SELECT *, '3' as result FROM images
WHERE iptc_release_date < 'today'
  AND iptc_priority BETWEEN 1 AND 5 AND iptc_keywords ~* 'term'
UNION
SELECT *, '2' as result FROM images
WHERE iptc_release_date < 'today'
  AND iptc_priority BETWEEN 1 AND 5 AND iptc_category ~* 'term'
UNION
SELECT *, '1' as result FROM images
WHERE iptc_release_date < 'today'
  AND iptc_priority BETWEEN 1 AND 5 AND iptc_subcategory ~* 'term'
ORDER BY result desc, iptc_priority asc, shot_time, image_serial asc;

Using a constant and UNION made it work OK. Not sure yet about real-world
performance, but that's what tuning is for :) Hope someone finds this in
the archive and finds it useful.

Peter

- Original Message -
From: "Peter Galbavy" <[EMAIL PROTECTED]>
Sent: Sunday, January 05, 2003 10:41 AM
Subject: [SQL] weighting (the results of) a query ?

> I have a table with a primary key ('md5') and a bunch of text fields.
> There is one row per 'photograph'; the number of rows is about 1100 now,
> but it will rise to over 20,000 in a few months, assuming I get time to
> import all my stuff.
>
> I would like to offer users on my web site a free-text search on these
> text fields, but I would like to weight the results based on which field
> the text came from.
>
> Let's say those fields are (for simplicity) 'category', 'subcategory',
> 'caption' and 'keywords'. I want to do:
>
> SELECT md5, weighting() FROM images WHERE
>     category ~* 'term' OR subcategory ~* 'term' OR ...
>
> Is there anything I can do, including writing functions, to return a
> number that is somehow representative of which WHERE clause matched
> 'first', and even better of how many columns matched?
>
> I am guessing that, like C, an 'OR' conditional stops at the first match
> and does not process further conditions after a previous one has
> matched; that's good enough for me for day one...
>
> It is not critical that I get a value out; the return order of results
> could be fine too. I would like to minimise the number of queries to the
> DB, but I can fall back on doing one query per column and combining the
> results in Perl. This is my approach for an initial implementation later
> today unless anyone can suggest otherwise...
>
> Any pointers, tips, code, suggestions greatly appreciated.
>
> Happy New Year all, BTW
> --
> Peter
Re: [SQL] weighting (the results of) a query ?
"Peter Galbavy" <[EMAIL PROTECTED]> writes: > Thanks to Len Morgan for the hints to get to this: > SELECT *, '4' as result FROM images > WHERE > iptc_release_date < 'today' AND > iptc_priority BETWEEN 1 AND 5 AND iptc_caption ~* 'term' > UNION > SELECT *, '3' as result FROM images > WHERE > iptc_release_date < 'today' AND > iptc_priority BETWEEN 1 AND 5 AND iptc_keywords ~* 'term' > UNION > SELECT *, '2' as result FROM images > WHERE > iptc_release_date < 'today' AND > iptc_priority BETWEEN 1 AND 5 AND iptc_category ~* 'term' > UNION > SELECT *, '1' as result FROM images > WHERE > iptc_release_date < 'today' AND > iptc_priority BETWEEN 1 AND 5 AND iptc_subcategory ~* 'term' > ORDER BY result desc, iptc_priority asc, shot_time, image_serial asc; > Using a constant and UNION made it work OK. Not sure yet on real world > performance, but that's what tuning is for :) Most likely you should write UNION ALL, not UNION. As given, the query will go through a pass of attempted duplicate-row-elimination, which is almost certainly not what you want. regards, tom lane ---(end of broadcast)--- TIP 4: Don't 'kill -9' the postmaster
Re: [SQL] [PERFORM] 7.3.1 index use / performance
On Tue, 7 Jan 2003, Stephan Szabo wrote:

> [table definition and EXPLAIN ANALYZE output snipped]
>
> What do the statistics for the three columns actually look like and what
> are the real distributions and counts like?

The two databases (test 7.3.1 and development 7.2.3) are identical (loaded
from the same pg_dump).

About the stats on these 3 columns I get (reformatted from pg_stats; all
three rows have schemaname = public, tablename = noon):

 attname     | null_frac | avg_width | n_distinct | correlation
-------------+-----------+-----------+------------+-------------
 v_code      |         0 |         8 |        109 |   -0.249905
 report_date |         0 |         4 |       3408 |   -0.812295
 rotation    |         0 |        13 |          6 |    0.119698

 v_code most_common_vals:
   {4630,4650,4690,4670,4520,4610,4550,4560,4620,4770}
 v_code most_common_freqs:
   {0.028,0.028,0.0256667,0.024,0.024,0.0236667,0.023,0.023,0.0226667,0.0226667}
 v_code histogram_bounds:
   {2070,3210,4330,4480,4570,4680,4751,4820,4870,4940,6020}

 report_date most_common_vals:
   {2001-11-14,1998-10-18,2000-04-03,2000-07-04,2000-12-20,2000-12-31,2001-01-12,2001-10-08,2001-12-25,1996-01-23}
 report_date most_common_freqs:
   {0.002,0.0017,0.0017,0.0017,0.0017,0.0017,0.0017,0.0017,0.0017,0.0013}
 report_date histogram_bounds:
   {"0001-12-11 BC",1994-09-27,1996-03-26,1997-07-29,1998-08-26,1999-03-29,1999-11-30,2000-09-25,2001-05-25,2002-01-17,2002-12-31}

 rotation most_common_vals:
   {"NOON ","PORT LOG ","ARRIVAL ",DEPARTURE,"SEA ","NEXT PORT"}
 rotation most_common_freqs:
   {0.460333,0.268667,0.139,0.119667,0.007,0.0053}

About distributions, I have:

dynacom=# SELECT rotation, count(*) FROM noon GROUP BY rotation;
 rotation  | count
-----------+-------
           |     2
 0         |    65
 ARRIVAL   |     1
 ARRIVAL   | 15471
 DEPARTURE | 15030
 NEXT PORT |   462
 NOON      | 50874
 PORT LOG  | 25688
 SEA       |  1202
(9 rows)

dynacom=# SELECT v_code, count(*) FROM noon GROUP BY v_code;
 v_code | count
--------+-------
 0004   |     1
Re: [SQL] weighting (the results of) a query ?
> Most likely you should write UNION ALL, not UNION. As given, the query
> will go through a pass of attempted duplicate-row elimination, which is
> almost certainly not what you want.

Not sure; what I want is only one row per real row, but ordered as per the
constants. When you say duplicate-row elimination, do you mean including
the 'constant' weighting? In which case UNION ALL is probably right. At
the moment, the data set returned appears correctly de-duplicated. What I
mean is that when an image row has both 'eat static' in the caption and as
a keyword, I want it returned only once...

Peter
Re: [SQL] [PERFORM] 7.3.1 index use / performance
Achilleus Mantzios <[EMAIL PROTECTED]> writes:

> About the stats on these 3 columns i get:

Does 7.2 generate the same stats? (Minus the schemaname, of course.)

Also, I would like to see the results of these queries on both versions, so
that we can see what the planner thinks the index selectivity is:

EXPLAIN ANALYZE select * from noon where
v_code='4500';

EXPLAIN ANALYZE select * from noon where
report_date between '2002-01-07' and '2003-01-07';

regards, tom lane
Re: [SQL] weighting (the results of) a query ?
"Peter Galbavy" <[EMAIL PROTECTED]> writes: > Not sure - what I want is only one row per real row but ordered as per the > constants. When you say duplicate-row-elimination do you mean including the > 'constant' weighting, Yes, UNION would consider all columns in deciding if two rows are dups. > What I mean is when an image row has both 'eat static' in the caption and as > a keyword, I want it returned only once... I think your query might fail on that requirement regardless, no? At least I missed how you'd prevent it. regards, tom lane ---(end of broadcast)--- TIP 6: Have you searched our list archives? http://archives.postgresql.org
Re: [SQL] weighting (the results of) a query ?
> I think your query might fail on that requirement regardless, no? At
> least I missed how you'd prevent it.

I have had about 10 minutes to play with this; my day job requires I do
real testing when I get home later tonight :)

Thanks, and I will keep an eye out for this and figure a way around it.

Peter
Re: [SQL] [PERFORM] 7.3.1 index use / performance
On Tue, 7 Jan 2003, Tom Lane wrote:

> Does 7.2 generate the same stats? (Minus the schemaname, of course.)

Not absolutely, but close (see attachment).

> Also, I would like to see the results of these queries on both versions,
> so that we can see what the planner thinks the index selectivity is:

On 7.3.1 (FreeBSD):
===================

dynacom=# EXPLAIN ANALYZE select * from noon where v_code='4500';

 Index Scan using noonf_vcode on noon  (cost=0.00..3066.64 rows=829
 width=1974) (actual time=2.02..1421.14 rows=792 loops=1)
   Index Cond: (v_code = '4500'::character varying)
 Total runtime: 1424.82 msec
(3 rows)

dynacom=# EXPLAIN ANALYZE select * from noon where report_date between
'2002-01-07' and '2003-01-07';

 Index Scan using noonf_date on noon  (cost=0.00..15919.50 rows=11139
 width=1974) (actual time=2.05..13746.17 rows=7690 loops=1)
   Index Cond: ((report_date >= '2002-01-07'::date) AND (report_date <=
   '2003-01-07'::date))
 Total runtime: 13775.48 msec
(3 rows)

On 7.2.3 (Linux):
=================

dynacom=# EXPLAIN ANALYZE select * from noon where v_code='4500';
NOTICE:  QUERY PLAN:

Index Scan using noonf_vcode on noon  (cost=0.00..3043.45 rows=827
width=1974) (actual time=19.59..927.06 rows=792 loops=1)
Total runtime: 928.86 msec

dynacom=# EXPLAIN ANALYZE select * from noon where report_date between
'2002-01-07' and '2003-01-07';
NOTICE:  QUERY PLAN:

Index Scan using noonf_date on noon  (cost=0.00..16426.45 rows=11958
width=1974) (actual time=29.64..8854.05 rows=7690 loops=1)
Total runtime: 8861.90 msec

EXPLAIN

==
Achilleus Mantzios
S/W Engineer, IT dept, Dynacom Tankers Mngmt

[Attachment: 7.2 pg_stats for the same three columns; all rows have
tablename = noon]

 attname     | null_frac | avg_width | n_distinct | correlation
-------------+-----------+-----------+------------+-------------
 v_code      |         0 |         8 |        109 |   -0.260377
 report_date |         0 |         4 |       3402 |   -0.821627
 rotation    |         0 |        13 |          6 |    0.147822

 v_code most_common_vals:
   {4550,4630,4650,4800,4520,4770,4690,4620,4610,4560}
 v_code most_common_freqs:
   {0.027,0.026,0.026,0.0256667,0.025,0.025,0.0246667,0.0226667,0.022,0.022}
 v_code histogram_bounds:
   {1030,3210,4360,4500,4570,4670,4740,4820,4870,4940,6020}

 report_date most_common_vals:
   {1999-01-22,2000-12-26,1998-09-29,1998-10-11,1999-02-24,1999-05-19,1999-09-08,1999-09-13,2000-01-19,2000-02-03}
 report_date most_common_freqs:
   {0.002,0.002,0.0017,0.0017,0.0017,0.0017,0.0017,0.0017,0.0017,0.0017}
 report_date histogram_bounds:
   {"0001-11-07 BC",1994-10-22,1996-04-05,1997-06-19,1998-07-31,1999-04-01,1999-12-15,2000-09-29,2001-05-31,2002-02-06,2003-01-02}

 rotation most_common_vals:
   {"NOON ","PORT LOG ","ARRIVAL ",DEPARTURE,"SEA ","NEXT PORT"}
 rotation most_common_freqs:
   {0.478,0.253667,0.138333,0.121667,0.006,0.0023}
Re: [SQL] [PERFORM] 7.3.1 index use / performance
Achilleus Mantzios <[EMAIL PROTECTED]> writes:
>> Also, I would like to see the results of these queries on both versions,
>> so that we can see what the planner thinks the index selectivity is:
> [ data supplied ]

There is something really, really bizarre going on there.  You have

dynacom=# EXPLAIN ANALYZE select * from noon where report_date between '2002-01-07' and '2003-01-07';
                                 QUERY PLAN
-----------------------------------------------------------------------------
 Index Scan using noonf_date on noon  (cost=0.00..15919.50 rows=11139 width=1974) (actual time=2.05..13746.17 rows=7690 loops=1)
   Index Cond: ((report_date >= '2002-01-07'::date) AND (report_date <= '2003-01-07'::date))
 Total runtime: 13775.48 msec
(3 rows)

and from your earlier message

dynacom=# EXPLAIN ANALYZE select FUELCONSUMPTION,rpm,Steam_Hours,voyage_activity,ldin from noon where v_code='4500' and rotation='NOON ' and report_date between '2002-01-07' and '2003-01-07';
                                 QUERY PLAN
-----------------------------------------------------------------------------
 Index Scan using noonf_date on noon  (cost=0.00..4.46 rows=1 width=39) (actual time=0.27..52.89 rows=259 loops=1)
   Index Cond: ((report_date >= '2002-01-07'::date) AND (report_date <= '2003-01-07'::date))
   Filter: ((v_code = '4500'::character varying) AND (rotation = 'NOON '::character varying))
 Total runtime: 53.98 msec
(4 rows)

There is no way that adding the filter condition should have reduced the estimated runtime for this plan --- reducing the estimated number of output rows, yes, but not the runtime.  And in fact I can't duplicate that when I try it here.

I did this on 7.3.1:

regression=# create table noon (v_code character varying(4),
regression(# report_date date,
regression(# rotation character varying(9));
CREATE TABLE
regression=# create index noonf_date on noon(report_date);
CREATE INDEX
regression=# EXPLAIN select * from noon where report_date between
regression-# '2002-01-07' and '2003-01-07';
                                 QUERY PLAN
-----------------------------------------------------------------------------
 Index Scan using noonf_date on noon  (cost=0.00..17.08 rows=5 width=25)
   Index Cond: ((report_date >= '2002-01-07'::date) AND (report_date <= '2003-01-07'::date))
(2 rows)

regression=# explain select * from noon where
regression-# v_code='4500' and rotation='NOON ' and report_date between
regression-# '2002-01-07' and '2003-01-07';
                                 QUERY PLAN
-----------------------------------------------------------------------------
 Index Scan using noonf_date on noon  (cost=0.00..17.11 rows=1 width=25)
   Index Cond: ((report_date >= '2002-01-07'::date) AND (report_date <= '2003-01-07'::date))
   Filter: ((v_code = '4500'::character varying) AND (rotation = 'NOON '::character varying))
(3 rows)

Note that the cost went up, not down.

I am wondering about a compiler bug, or some other peculiarity on your platform.  Can anyone else using FreeBSD try the above experiment and see if they get different results from mine on 7.3.* (or CVS tip)?

			regards, tom lane
[SQL] 7.3.1 function problem: ERROR: cache lookup failed for type 0
Hi, i had written a C function to easily convert an int4 to its equivalent 1x1 int4[] array. It worked fine under 7.1, 7.2. Now under 7.3.1 i get the following message whenever i try to:

dynacom=# select itoar(3126);
ERROR:  cache lookup failed for type 0

Surprisingly though, when i do something like:

dynacom=# select defid from machdefs where itoar(3126) ~ parents and level(parents) = 1 order by description,partno;
 defid
-------
  3137
  3127
  3130
  3129
  3133
  3136
  3135
  3128
  3131
  3132
  3134
  3138
(12 rows)

it works fine, but then again when i try to EXPLAIN the above (successful) statement i also get:

dynacom=# EXPLAIN select defid from machdefs where itoar(3126) ~ parents and level(parents) = 1 order by description,partno;
ERROR:  cache lookup failed for type 0

Any clues as to what could be wrong?

The definition of the function is:

CREATE FUNCTION "itoar" (integer) RETURNS integer[]
    AS '$libdir/itoar', 'itoar'
    LANGUAGE 'c' WITH ( iscachable, isstrict );

I also tried without the iscachable option, with no luck (since it seems to complain about *type* 0).
Re: [SQL] [GENERAL] 7.3.1 function problem: ERROR: cache lookup failed for type 0
Achilleus Mantzios <[EMAIL PROTECTED]> writes:
> Hi i had written a C function to easily convert an int4 to its
> equivalent 1x1 int4[] array.

Does your function know about filling in the elemtype field that was recently added to struct ArrayType?

			regards, tom lane
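[Editor's aside: to illustrate why a stale struct layout yields exactly this error, here is a toy Python sketch. The class names and fields are invented for illustration only; PostgreSQL's real ArrayType is a C struct (see src/include/utils/array.h), and pre-7.3 code that never assigns the new elemtype field can hand the server a type OID of 0, which matches no entry in the type cache.]

```python
# Hypothetical mock-up, NOT PostgreSQL's real layout: a 7.3-style array
# header gained an elemtype field; code written against the old layout
# never sets it, and 0 is not a valid type OID.
from dataclasses import dataclass

@dataclass
class ArrayHeader72:          # fields the old C function knows about
    size: int = 0
    ndim: int = 0
    flags: int = 0

@dataclass
class ArrayHeader73(ArrayHeader72):
    elemtype: int = 0         # new in 7.3; old code leaves it at its default

a = ArrayHeader73(size=16, ndim=1)   # built "the 7.2 way": elemtype unset
error = (f"cache lookup failed for type {a.elemtype}"
         if a.elemtype == 0 else "ok")
print(error)
```

In the real server the fix is to fill in the element type OID when building the array, which is what construct_array() does for you.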
Re: [SQL] [GENERAL] 7.3.1 function problem: ERROR: cache lookup failed
On Tue, 7 Jan 2003, Tom Lane wrote:

> Achilleus Mantzios <[EMAIL PROTECTED]> writes:
> > Hi i had written a C function to easily convert an int4 to its
> > equivalent 1x1 int4[] array.
>
> Does your function know about filling in the elemtype field that was
> recently added to struct ArrayType?

She has no clue :)
Any pointers would be great.

Thanx Tom.

> regards, tom lane
Re: [SQL] [PERFORM] 7.3.1 index use / performance
On Tue, 7 Jan 2003, Tom Lane wrote:

> There is no way that adding the filter condition should have reduced the
> estimated runtime for this plan --- reducing the estimated number of
> output rows, yes, but not the runtime.  And in fact I can't duplicate

My case persists: after a clean install of the database, and after vacuum analyze, i get

dynacom=# EXPLAIN ANALYZE select FUELCONSUMPTION,rpm,Steam_Hours,voyage_activity,ldin from noon where report_date between '2002-01-07' and '2003-01-07';
                                 QUERY PLAN
-----------------------------------------------------------------------------
 Index Scan using noonf_date on noon  (cost=0.00..16458.54 rows=10774 width=39) (actual time=0.13..205.86 rows=7690 loops=1)
   Index Cond: ((report_date >= '2002-01-07'::date) AND (report_date <= '2003-01-07'::date))
 Total runtime: 233.22 msec

dynacom=# EXPLAIN ANALYZE select FUELCONSUMPTION,rpm,Steam_Hours,voyage_activity,ldin from noon where report_date between '2002-01-07' and '2003-01-07' and v_code='4500';
                                 QUERY PLAN
-----------------------------------------------------------------------------
 Index Scan using noonf_vcode on noon  (cost=0.00..3092.52 rows=83 width=39) (actual time=0.15..15.08 rows=373 loops=1)
   Index Cond: (v_code = '4500'::character varying)
   Filter: ((report_date >= '2002-01-07'::date) AND (report_date <= '2003-01-07'::date))
 Total runtime: 16.56 msec
(4 rows)

I thought PostgreSQL in some sense (hub.org) used FreeBSD; is there any 4.7 FreeBSD server with pgsql 7.3.1 you could use?
Re: [SQL] [PERFORM] 7.3.1 index use / performance
Achilleus Mantzios <[EMAIL PROTECTED]> writes:
> My case persists:
> After clean install of the database, and after vacuum analyze,
> i get

Um ... is it persisting?  That looks like it's correctly picked the vcode index this time.  Strange behavior though.  By "clean install" do you mean you rebuilt Postgres, or just did dropdb/createdb/reload data?

			regards, tom lane
Re: [SQL] [GENERAL] 7.3.1 function problem: ERROR: cache lookup failed
Achilleus Mantzios wrote:
> On Tue, 7 Jan 2003, Tom Lane wrote:
> > Does your function know about filling in the elemtype field that was
> > recently added to struct ArrayType?
>
> She has no clue :)
> Any pointers would be great.

See construct_array() in src/backend/utils/adt/arrayfuncs.c.

HTH,

Joe
Re: [SQL] [PERFORM] 7.3.1 index use / performance
On Tue, 7 Jan 2003, Tom Lane wrote:

> Achilleus Mantzios <[EMAIL PROTECTED]> writes:
> > My case persists:
> > After clean install of the database, and after vacuum analyze,
> > i get
>
> Um ... is it persisting?  That looks like it's correctly picked the
> vcode index this time.  Strange behavior though.  By "clean install"
> do you mean you rebuilt Postgres, or just did dropdb/createdb/reload
> data?

Just dropdb/createdb/reload.

> regards, tom lane
Re: [SQL] [PERFORM] 7.3.1 index use / performance
> I am wondering about a compiler bug, or some other peculiarity on your
> platform.  Can anyone else using FreeBSD try the above experiment and
> see if they get different results from mine on 7.3.* (or CVS tip)?

On FreeBSD 4.7 I received the exact same results as Tom using the statements shown by Tom.

--
Rod Taylor <[EMAIL PROTECTED]>
PGP Key: http://www.rbt.ca/rbtpub.asc
Re: [SQL] [PERFORM] 7.3.1 index use / performance
Rod Taylor <[EMAIL PROTECTED]> writes:
>> I am wondering about a compiler bug, or some other peculiarity on your
>> platform.  Can anyone else using FreeBSD try the above experiment and
>> see if they get different results from mine on 7.3.* (or CVS tip)?

> On FreeBSD 4.7 I received the exact same results as Tom using the
> statements shown by Tom.

On looking at the code, I do see part of a possible mechanism for this behavior: cost_index calculates the estimated cost for qual-clause evaluation like this:

    /*
     * Estimate CPU costs per tuple.
     *
     * Normally the indexquals will be removed from the list of restriction
     * clauses that we have to evaluate as qpquals, so we should subtract
     * their costs from baserestrictcost.  XXX For a lossy index, not all
     * the quals will be removed and so we really shouldn't subtract their
     * costs; but detecting that seems more expensive than it's worth.
     * Also, if we are doing a join then some of the indexquals are join
     * clauses and shouldn't be subtracted.  Rather than work out exactly
     * how much to subtract, we don't subtract anything.
     */
    cpu_per_tuple = cpu_tuple_cost + baserel->baserestrictcost;

    if (!is_injoin)
        cpu_per_tuple -= cost_qual_eval(indexQuals);

In theory, indexQuals will always be a subset of the qual list on which baserestrictcost was computed, so we should always end up with a cpu_per_tuple value at least as large as cpu_tuple_cost.  I am wondering if somehow in Achilleus's situation, cost_qual_eval() is producing a silly result leading to negative cpu_per_tuple.  I don't see how that could happen though --- nor why it would happen on his machine and not other people's.

			regards, tom lane
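[Editor's aside: a back-of-envelope rendering of that arithmetic may help. The sketch below uses cpu_tuple_cost's documented default of 0.01; the qual-cost figures are invented for illustration, not taken from the thread.]

```python
# Sketch of the cpu_per_tuple computation quoted from cost_index above.
cpu_tuple_cost = 0.01        # planner default setting
baserestrictcost = 0.0075    # hypothetical: cost of evaluating all restriction quals
index_qual_cost = 0.0075     # hypothetical: the subset handled as index conditions

# Normal case: indexQuals is a subset of the restriction list, so the
# subtraction can never push the result below cpu_tuple_cost.
cpu_per_tuple = cpu_tuple_cost + baserestrictcost - index_qual_cost

# Tom's hypothesis: if cost_qual_eval(indexQuals) returned a silly,
# too-large value, per-tuple cost would go negative -- and adding filter
# clauses could make a plan's total estimate *shrink*, matching the
# reported symptom (cost 15919.50 without filters vs. 4.46 with them).
bogus_qual_cost = 0.05
bogus_cpu_per_tuple = cpu_tuple_cost + baserestrictcost - bogus_qual_cost

print(cpu_per_tuple, bogus_cpu_per_tuple)
```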
[SQL] double precision to numeric overflow error
is this expected behavior? if so, then why?

-tfo

db=# create table foo( col timestamp );
db=# select cast( extract( epoch from col ) as numeric( 15, 6 ) ) from foo;
 date_part
-----------
(0 rows)

db=# insert into foo values( current_timestamp );
INSERT 1705954 1
db=# select cast( extract( epoch from col ) as numeric( 15, 6 ) ) from foo;
ERROR:  overflow on numeric ABS(value) >= 10^9 for field with precision 15 scale 6
Re: [SQL] double precision to numeric overflow error
On Tue, 7 Jan 2003, Thomas O'Connell wrote:

> is this expected behavior? if so, then why?

I'd guess so if the timestamp value's integer part is 10 digits long, since I believe trying to fit that into a numeric(15,6) wouldn't work (9 digits . 6 digits).

> db=# create table foo( col timestamp );
> db=# select cast( extract( epoch from col ) as numeric( 15, 6 ) ) from foo;
>  date_part
> -----------
> (0 rows)
> db=# insert into foo values( current_timestamp );
> INSERT 1705954 1
> db=# select cast( extract( epoch from col ) as numeric( 15, 6 ) ) from foo;
> ERROR: overflow on numeric ABS(value) >= 10^9 for field with precision
> 15 scale 6
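[Editor's aside: the digit budget is easy to check. A quick sketch, with an epoch value chosen to match the thread's January 2003 timeframe: numeric(15,6) leaves 15 - 6 = 9 digits for the integer part, but any Unix epoch after September 2001 needs 10.]

```python
# Why casting extract(epoch from ...) to numeric(15,6) overflows.
epoch = 1041897600                 # roughly 7 Jan 2003, as in the thread
precision, scale = 15, 6
integer_digits_allowed = precision - scale   # digits left of the decimal point: 9
digits_needed = len(str(epoch))              # 10 for any epoch >= 1_000_000_000

print(digits_needed, integer_digits_allowed)
print("overflow" if digits_needed > integer_digits_allowed else "fits")
```

Declaring the column as, say, numeric(16,6) would leave room for ten integer digits.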
Re: [SQL] double precision to numeric overflow error
Indeed, it seems as though my inability to count digits was the real problem. Still, does this not strike anyone as a somewhat abstruse error message?

-tfo

In article <[EMAIL PROTECTED]>, [EMAIL PROTECTED] (Stephan Szabo) wrote:

> On Tue, 7 Jan 2003, Thomas O'Connell wrote:
>
> > is this expected behavior? if so, then why?
>
> I'd guess so if the timestamp value's integer part is
> 10 digits long since I believe trying to fit that into
> a numeric(15,6) wouldn't work (9 digits . 6 digits).
>
> > db=# create table foo( col timestamp );
> > db=# select cast( extract( epoch from col ) as numeric( 15, 6 ) ) from foo;
> >  date_part
> > -----------
> > (0 rows)
> > db=# insert into foo values( current_timestamp );
> > INSERT 1705954 1
> > db=# select cast( extract( epoch from col ) as numeric( 15, 6 ) ) from foo;
> > ERROR: overflow on numeric ABS(value) >= 10^9 for field with precision
> > 15 scale 6
Re: [SQL] [GENERAL] double precision to numeric overflow error
"Thomas O'Connell" <[EMAIL PROTECTED]> writes: > Indeed, it seems as though my inability to count digits was the real > problem. Still, does this not strike anyone as a somewhat abstruse error > message? How would you phrase it? > ERROR: overflow on numeric ABS(value) >= 10^9 for field with precision > 15 scale 6 It's technically correct, but if you think you can word it better, let's see your proposal... regards, tom lane ---(end of broadcast)--- TIP 3: if posting/reading through Usenet, please send an appropriate subscribe-nomail command to [EMAIL PROTECTED] so that your message can get through to the mailing list cleanly
Re: [SQL] double precision to numeric overflow error
Well, it would've immediately (rather than the several minutes it took) given away the problem if it read something like:

ERROR: overflow caused by cast of double precision value to numeric without sufficient precision, scale (15, 6)

or even, depending on how much detail is available or how much worth is assigned to error reporting:

ERROR: a double precision value requiring at least precision 16 cannot be cast to a numeric value with precision 15, scale 6.

i understand the need for balancing correctness with readability, though. if there's anything technically incorrect about either of my proposals, they clearly should not be used. i'm usually a fan of breaking error messages down as far as the details will allow them to be correctly broken down. in this situation, it seems like it is this specific cast (double precision -> numeric) in this specific scenario (insufficient precision) that generates the error. unfortunately, i haven't read the code... :(

-tfo

In article <[EMAIL PROTECTED]>, [EMAIL PROTECTED] (Tom Lane) wrote:

> "Thomas O'Connell" <[EMAIL PROTECTED]> writes:
> > Indeed, it seems as though my inability to count digits was the real
> > problem. Still, does this not strike anyone as a somewhat abstruse error
> > message?
>
> How would you phrase it?
>
> > ERROR: overflow on numeric ABS(value) >= 10^9 for field with precision
> > 15 scale 6
>
> It's technically correct, but if you think you can word it better,
> let's see your proposal...