Re: [GENERAL] 7.4, 'group by' default ordering?

2004-01-08 Thread Bruno Wolff III
On Thu, Jan 08, 2004 at 15:24:41 -0800,
  Ron St-Pierre <[EMAIL PROTECTED]> wrote:
> Celko uses the SQL92 13.1 rule wording: "Whether a sort key value that 
> is NULL is considered greater or less than a non-NULL value is 
> implementation defined, but all sort key values that are NULL will 
> either be considered greater than all non-NULL values or be considered 
> less than all non-NULL values. There are SQL products that do it either 
> way."  2nd Ed SQL For Smarties.

The part just after that prompted my statement. The text is:
And there are those that have it all wrong; the Sybase family simply
treats the NULLs as if they were really values -- that is, they sort
low for ascending and high for descending.

Now that I have reread it, this seems to be saying contradictory things.
If NULLs were treated as real values, I would expect them to come out
first for one of ascending or descending order and last for the other.
But the second half of the statement says they sort low ascending and
high descending, which means Sybase always emits the records with NULL
key values first.
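For reference, PostgreSQL itself takes the "NULLs sort high" choice, so
they come out last ascending and first descending. A quick way to check
(throwaway temp table, names invented):

CREATE TEMP TABLE t (x integer);
INSERT INTO t VALUES (1);
INSERT INTO t VALUES (2);
INSERT INTO t VALUES (NULL);
SELECT x FROM t ORDER BY x;       -- 1, 2, NULL
SELECT x FROM t ORDER BY x DESC;  -- NULL, 2, 1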



Re: [GENERAL] Natural upgrade path for RedHat 9?

2004-01-08 Thread Tom Lane
"D. Dante Lorenso" <[EMAIL PROTECTED]> writes:
> Is there going to be a RedHat 10?  Or are we all supposed
> to choose a path of RH Enterprise vs Fedora Core?

The current plans do not include a Red Hat 10 --- Enterprise and Fedora
are it.

Now, I have been working for Red Hat long enough to know that their
product plans change constantly.  It could be that some intermediate
product level will re-emerge.  But I wouldn't bet on it.

regards, tom lane



Re: [GENERAL] Simulation output using libpq

2004-01-08 Thread Bruno Wolff III
On Tue, Jan 06, 2004 at 10:50:57 -,
  Daryl Shanley <[EMAIL PROTECTED]> wrote:
> I have some C code that writes results directly to a file as they're
> produced during a simulation. I want to modify the code to write
> directly to a postgres database using libpq. I think I could open a
> connection and insert into the database within a transaction, committing
> at the end of the simulation. One problem with this approach is that if
> I have a lot of simultaneous simulations running, I may run into
> problems with exceeding the maximum number of connections allowed. I
> would welcome any advice.

I think it would be more normal to write the information to files and
then have a cleanup process insert those files into the database and
delete the files. This will keep you from having long-running transactions
held open for the duration of a simulation. If you don't want a separate
cleanup process, you could have the simulation process take care of
importing the data after the calculations have completed.
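A sketch of the cleanup step, assuming a results table and tab-delimited
output files (all names here are invented):

CREATE TABLE results (sim_id integer, step integer, value float8);

-- bulk-load one finished simulation's file, then remove the file
COPY results FROM '/tmp/sim_42.out';  -- server-side COPY; needs superuser
-- or, client-side from a cleanup script:
--   psql mydb -c "\copy results from /tmp/sim_42.out"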



Re: [GENERAL] [HACKERS] Announce: Search PostgreSQL related resources

2004-01-08 Thread Glenn Wiorek
> connection failed :(
>
> Dave

I had no problem accessing it - very nice and responsive! Great job, Oleg
and Teodor.

-- Glenn

- Original Message - 
From: "Dave Cramer" <[EMAIL PROTECTED]>
To: "Oleg Bartunov" <[EMAIL PROTECTED]>
Cc: <[EMAIL PROTECTED]>; "Pgsql Hackers"
<[EMAIL PROTECTED]>
Sent: Monday, January 05, 2004 9:51 AM
Subject: Re: [HACKERS] Announce: Search PostgreSQL related resources


> connection failed :(
>
> Dave
> On Mon, 2004-01-05 at 09:49, Oleg Bartunov wrote:
> > Hi there,
> >
> > I'm pleased to present the pilot version of http://www.pgsql.ru - a
> > search system for postgresql-related resources. Currently we have
> > crawled 27 sites; new resources are welcome. It has a multi-language
> > interface (Russian, English), but more languages could be added. We
> > plan to add a searchable archive of the mailing lists (a'la
> > fts.postgresql.org), Russian documentation, and a WIKI for online
> > documentation, tips, etc.
> >
> > We welcome your feedback and comments. We need a design solution and
> > icons.
> >
> > This project is hosted at
> > Sternberg Astronomical Institute, Moscow University and supported
> > by Russian Foundation for Basic Research and Delta-Soft LLC.
> >
> > Regards,
> > Oleg
> > _
> > Oleg Bartunov, sci.researcher, hostmaster of AstroNet,
> > Sternberg Astronomical Institute, Moscow University (Russia)
> > Internet: [EMAIL PROTECTED], http://www.sai.msu.su/~megera/
> > phone: +007(095)939-16-83, +007(095)939-23-83
> >
> -- 
> Dave Cramer
> 519 939 0336
> ICQ # 1467551
>
>





[GENERAL] int8 version of NUMERIC?

2004-01-08 Thread David Garamond
In Interbase and Firebird, NUMERIC is implemented as a 64-bit integer.
This limits the range to NUMERIC(18, *), but for many uses that's
adequate. Moreover, it's fast and efficient.

Is there a way in PostgreSQL to do something similar, i.e. I want to:

- use 64-bit ints, not digit strings or arbitrary precision, which the
documentation describes as "much slower than ints";

- use decimals, like NUMERIC(18,4);

- store and retrieve decimal numbers pretty much transparently (e.g. I
don't want to have to remember to insert 123456 for 12.3456 and play with
multiplying/dividing by 10000).
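To illustrate, the sort of thing I'm after, faked by hand today with a
view plus a rewrite rule (names and scale invented; a sketch only, and
the conversion at the edges still goes through numeric):

CREATE TABLE amounts_raw (id serial PRIMARY KEY, amt_e4 int8);  -- value * 10000

CREATE VIEW amounts AS
    SELECT id, amt_e4 / 10000.0 AS amount FROM amounts_raw;

CREATE RULE amounts_ins AS ON INSERT TO amounts DO INSTEAD
    INSERT INTO amounts_raw (amt_e4) VALUES ((new.amount * 10000)::int8);

-- INSERT INTO amounts (amount) VALUES (12.3456);  stores 123456
-- SELECT amount FROM amounts;                     returns 12.3456

This is exactly the manual bookkeeping I'd rather not maintain by hand.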

--
dave


Re: [GENERAL] 7.4, 'group by' default ordering?

2004-01-08 Thread Bruno Wolff III
On Thu, Jan 08, 2004 at 13:42:33 -0600,
  Mike Nolan <[EMAIL PROTECTED]> wrote:
> I notice that 7.4 doesn't do default ordering on a 'group by', so you have
> to throw in an 'order by' clause to get the output in ascending group order.  
> 
> Is this something that most RDB's have historically done (including PG prior 
> to 7.4) but isn't really part of the SQL standard?  

That is because group by has often been done with a sort, so rows would
naturally come out in that order. 7.4 can also do group by with hash
aggregation, which produces no particular order. If there isn't an order
by clause, the returned rows can be in any order.

> On a mostly unrelated topic, does the SQL standard indicate whether NULL
> should sort to the front or the back?  Is there a way to force it to
> one or the other independent of whether the order by clause uses ascending or 
> descending order?

In SQL for Smarties, Joe Celko says that either NULLs should all be first
or all be last (independent of whether the sort is ascending or descending).
There was also some discussion of how the order is constrained if the sort
is on multiple columns and the value of the first column is NULL but the
values of the other columns are not. I don't have the book with me now,
but I believe the conclusion was that rows with a NULL in the first column
should still be sorted among themselves by the values of the later columns.
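As for forcing the position: PostgreSQL (through 7.4) has no explicit
syntax for it, but an extra boolean sort key works, since false sorts
before true. A sketch with invented names:

SELECT * FROM t ORDER BY (col IS NULL), col DESC;   -- NULLs last, either direction
SELECT * FROM t ORDER BY (col IS NOT NULL), col;    -- NULLs first, either direction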



Re: [GENERAL] Compile problem on old Debian Linux with glibc 2.0.7

2004-01-08 Thread Holger Marzen
On Thu, 8 Jan 2004, Tom Lane wrote:

> Holger Marzen <[EMAIL PROTECTED]> writes:
> > When I try to compile PostgreSQL 7.2.4 or 7.4.1 then I get
>
> > |gcc -O2  -Wall -Wmissing-prototypes -Wmissing-declarations
> > |-I../../../src/include -D_GNU_SOURCE   -c hba.c -o hba.o
> > |hba.c: In function `parse_hba':
> > |hba.c:675: warning: implicit declaration of function `gai_strerror'
> > |hba.c:675: warning: format argument is not a pointer (arg 3)
> > |hba.c: In function `ident_unix':
> > |hba.c:1411: storage size of `peercred' isn't known
> > |hba.c:1411: warning: unused variable `peercred'
> > |make[3]: *** [hba.o] Error 1
>
> The configure process assumes that any platform that supplies
> getaddrinfo() will also supply gai_strerror().  Is that not true
> in your version of glibc?  If not, one possible workaround is to
> force use of our own getaddrinfo replacement (#undef HAVE_GETADDRINFO
> and add getaddrinfo.o to LIBOBJS).

I changed the LIBOBJS in src/Makefile.global and put the #undef after
the #define in src/include/pg_config.h.

But unfortunately that's only half of the world domination. I still
got:

hba.c: In function `ident_unix':
hba.c:1411: storage size of `peercred' isn't known
hba.c:1411: warning: unused variable `peercred'
make[3]: *** [hba.o] Error 1

> Probably the easiest way around the peercred problem is just to disable
> that chunk of code ("#elif defined(SO_PEERCRED)" to "#elif 0" at line

Yep. That's the other half!

> 1409 of hba.c will probably do it).  You may need to dike out the
> corresponding code in client-side libpq as well.

Not necessary. It compiled and linked.

Thanks, Tom.


-- 
PGP/GPG Key-ID:
http://blackhole.pca.dfn.de:11371/pks/lookup?op=get&search=0xB5A1AFE1



Re: [GENERAL] order by is ambiguous

2004-01-08 Thread Mike Nolan
> Hmm, but the first one actually has no name; it's just cast to the
> datatype time. I now realise that cast columns get assigned the datatype
> as their name. Should it not show ?column? as output, just like a
> "select null;" would?

I think you're confusing what the front end uses as a default column
heading with what the back end uses as a default column name. '?column?'
would probably not meet SQL standards.
--
Mike Nolan



[GENERAL] 7.4, 'group by' default ordering?

2004-01-08 Thread Mike Nolan
I notice that 7.4 doesn't do default ordering on a 'group by', so you have
to throw in an 'order by' clause to get the output in ascending group order.  

Is this something that most RDB's have historically done (including PG prior 
to 7.4) but isn't really part of the SQL standard?  

On a mostly unrelated topic, does the SQL standard indicate whether NULL
should sort to the front or the back?  Is there a way to force it to
one or the other independent of whether the order by clause uses ascending or 
descending order?
--
Mike Nolan



[GENERAL] deferring/disabling unique index

2004-01-08 Thread Oleg Lebedev
Hi,

I need to know if there is a way to defer or disable a unique index on a
table during an update. One way would be to set indisunique to false,
perform the update, and then set it back to true. But that seems like an
ugly solution.

I posted a similar message 6 months ago, and at that time deferring
unique constraints was on the TODO list. I wonder if this has been added
to the 7.4.1 release. If not, what is the best way to disable an index
on a table?

Thanks.

Oleg
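For the record, the catalog hack I mean, spelled out (index name
invented; superuser only, and uniqueness is simply not checked while the
flag is off -- setting it back to true does not re-verify existing rows):

UPDATE pg_index SET indisunique = false
WHERE indexrelid = (SELECT oid FROM pg_class WHERE relname = 'my_unique_idx');
-- ... perform the update ...
UPDATE pg_index SET indisunique = true
WHERE indexrelid = (SELECT oid FROM pg_class WHERE relname = 'my_unique_idx');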



[GENERAL] why I can call pg_stat_get_backend_idset directly

2004-01-08 Thread Pavel Stehule
Hello,

I am playing with SRF functions and found a difference in behavior
between the system function pg_stat_get_backend_idset and my own function
in plpgsql:

aopk=> \df pg_stat_get_backend_idset
 Result data type | Schema     | Name                      | Argument data types
------------------+------------+---------------------------+---------------------
 setof integer    | pg_catalog | pg_stat_get_backend_idset |
aopk=> \df aa
 Result data type | Schema | Name | Argument data types
------------------+--------+------+---------------------
 setof integer    | public | aa   |
(1 row)

I can select * from pg_stat_get_backend_idset();
I can select * from aa();

That much I understand, but I can also call

select pg_stat_get_backend_idset();
 pg_stat_get_backend_idset
----------------------------
 1
 2
(2 rows)

Is this right for an SRF function, or not? I want to write my own SRF
function with the same behavior - similar to MSSQL procedures. Is that
possible?
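For reference, aa() here is just a minimal plpgsql SETOF function, roughly
like this (reconstructed, since I didn't paste the definition):

CREATE FUNCTION aa() RETURNS SETOF integer AS '
BEGIN
    RETURN NEXT 1;
    RETURN NEXT 2;
    RETURN;
END;
' LANGUAGE plpgsql;

SELECT * FROM aa();   -- the usual form
SELECT aa();          -- also accepted: the set expands in the target list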

Regards 

Pavel






Re: [GENERAL] Simulation output using libpq

2004-01-08 Thread Ben
Sorry, what's to keep you from increasing the max number of connections?

On Thu, 8 Jan 2004, C G wrote:

> Dear All,
> 
> I have some C code that writes results directly to a file as they're
> produced during a simulation. I want to modify the code to write
> directly to a postgres database using libpq. I think I could open a
> connection and insert into the database within a transaction, committing
> at the end of the simulation. One problem with this approach is that if
> I have a lot of simultaneous simulations running, I may run into
> problems with exceeding the maximum number of connections allowed. I
> would welcome any advice.
> 
> Many thanks
> 
> Colin
> 





[GENERAL] Simulation output using libpq

2004-01-08 Thread C G
Dear All,

I have some C code that writes results directly to a file as they're
produced during a simulation. I want to modify the code to write
directly to a postgres database using libpq. I think I could open a
connection and insert into the database within a transaction, committing
at the end of the simulation. One problem with this approach is that if I
have a lot of simultaneous simulations running, I may run into problems
with exceeding the maximum number of connections allowed. I would
welcome any advice.
Many thanks

Colin




Re: [GENERAL] 7.3.3 drop table takes very long time

2004-01-08 Thread Tom Lane
Mike Mascari <[EMAIL PROTECTED]> writes:
> Is there any possibility that he's got an open transaction sitting out
> there for days holding a lock on that table?

Good thought ... but if that was the issue then the DROP would just be
sleeping waiting for the lock, and Eric did say it was consuming CPU ...

regards, tom lane



Re: [GENERAL] order by is ambiguous

2004-01-08 Thread Travel Jadoo
On Thu, 2004-01-08 at 16:09, Tom Lane wrote:
> Travel Jadoo <[EMAIL PROTECTED]> writes:
> > I could not find any discussion on this but imho this seems an erroneous
> > error occuring now in 7.3.4 (after upgrade from 7.2.3):
> 
> > select  null::time, 'test'::varchar as time order by time;
> > ERROR:  ORDER BY 'time' is ambiguous
> 
> What's erroneous about it?  You have two output columns named 'time'.
> 
>   regards, tom lane

Hmm, but the first one actually has no name; it's just cast to the
datatype time. I now realise that cast columns get assigned the datatype
as their name. Should it not show ?column? as output, just like a
"select null;" would?

This actually came up as I have multiple time fields but only one was
named time by me.

Regards,
Alfred





Re: [GENERAL] 7.3.3 drop table takes very long time

2004-01-08 Thread Mike Mascari
Tom Lane wrote:

> "Eric Freeman" <[EMAIL PROTECTED]> writes:
>> I started the drop table command yesterday and it's been running for
>> almost 24 hours.
>
> You're stuck in some kind of infinite loop --- there's no way that DROP
> should take any noticeable amount of time.  I'm guessing that the system
> catalog entries for this particular table are corrupted somehow, but no
> idea just how.  It would be worth trying to track it down in case there
> is a PG bug lurking.
>
> Can you attach to the looping backend with a debugger and get a stack
> trace for us?

Is there any possibility that he's got an open transaction sitting out
there for days holding a lock on that table?

Mike Mascari





Re: [GENERAL] 7.3.3 drop table takes very long time

2004-01-08 Thread Tom Lane
"Eric Freeman" <[EMAIL PROTECTED]> writes:
> I started the drop table command yesterday and it's been running for almost 
> 24 hours.

You're stuck in some kind of infinite loop --- there's no way that DROP
should take any noticeable amount of time.  I'm guessing that the system
catalog entries for this particular table are corrupted somehow, but no
idea just how.  It would be worth trying to track it down in case there
is a PG bug lurking.

Can you attach to the looping backend with a debugger and get a stack
trace for us?

regards, tom lane



Re: [GENERAL] Queries taking a very long time

2004-01-08 Thread Richard Huxton
On Thursday 08 January 2004 16:15, Eric Freeman wrote:

Please try not to post in HTML.

> I'm using version 7.3.3. I do a lot of modification to some of the
> tables in my database. There's a table that I didn't run vacuum on for a
> while and I ran some big queries on it. It was taking a long time so I
> aborted the query and tried running a vacuum on the table. It ran for
> around 48 hours before I aborted that. I decided to just drop the table and
> re-create it since that seems like it would be the fastest thing. It's been
> running the drop table command for almost 24 hours now. Is there any way to
> speed this process up?

What's the issue - CPU saturated? Disk saturated? System swapping?

Possibly your best bet is to copy the data out of the table, TRUNCATE it,
and then reload it.
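For example (table name invented; TRUNCATE reclaims the table's file
immediately instead of leaving dead rows behind):

CREATE TABLE bigtable_save AS SELECT * FROM bigtable;
TRUNCATE TABLE bigtable;
INSERT INTO bigtable SELECT * FROM bigtable_save;
DROP TABLE bigtable_save;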

-- 
  Richard Huxton
  Archonet Ltd



[GENERAL] 7.3.3 drop table takes very long time

2004-01-08 Thread Eric Freeman
I'm trying to drop a table and it's taking a very long time. There has been
a lot of modification to the table, and it has a lot of old data still
hanging around as a result of not using the vacuum function enough. I ran
an INSERT INTO and it was taking a long time (it ran for about 48 hours),
so I aborted it. I tried vacuuming the table, and that ran for about the
same amount of time before I aborted it. I figured the fastest thing would
be to drop the table and re-create it.
I tried running a pg_dump and it ran for about 4 days without putting any
of the data into the output file.
I started the drop table command yesterday and it's been running for
almost 24 hours.
I know it's processing because it's using up the CPU, and I can run
transactions on all of the other tables except this one. The table has
about 132,000 records.
Any ideas on how I can speed this up?
Can I go into the /data directory, find the file that contains that table,
and delete it? If so, how would I go about doing this?
Eric



Re: [GENERAL] Queries taking a very long time

2004-01-08 Thread Marc G. Fournier
On Thu, 8 Jan 2004, Eric Freeman wrote:

>
> Hi,
>
> I'm using version 7.3.3. I do a lot of modification to some of the
> tables in my database. There's a table that I didn't run vacuum on for a
> while and I ran some big queries on it. It was taking a long time so I
> aborted the query and tried running a vacuum on the table. It ran for
> around 48 hours before I aborted that. I decided to just drop the table
> and re-create it since that seems like it would be the fastest thing.
> It's been running the drop table command for almost 24 hours now. Is
> there any way to speed this process up? Can I go into the data directory
> and delete some files?

Have you checked the logs to make sure that it's not issuing errors?  Is
the DROP process doing anything?  All DROP TABLE does is rm the appropriate
files on the file system and update various system tables to remove
references to it ... how big is this file, that it's taking >24hrs to
*delete* it from the file system?
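If you want to check what's on disk for it, something like this
(hypothetical table name; I'm not suggesting you delete the file by hand):

SELECT relfilenode FROM pg_class WHERE relname = 'mytable';
-- the table's data lives in $PGDATA/base/<database oid>/<relfilenode>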


Marc G. Fournier   Hub.Org Networking Services (http://www.hub.org)
Email: [EMAIL PROTECTED]   Yahoo!: yscrappy  ICQ: 7615664



[GENERAL] Queries taking a very long time

2004-01-08 Thread Eric Freeman

Hi,

I'm using version 7.3.3. I do a lot of modification to some of the tables
in my database. There's a table that I didn't run vacuum on for a while,
and I ran some big queries on it. It was taking a long time, so I aborted
the query and tried running a vacuum on the table. That ran for around 48
hours before I aborted it. I decided to just drop the table and re-create
it, since that seems like it would be the fastest thing. It's been running
the drop table command for almost 24 hours now. Is there any way to speed
this process up? Can I go into the data directory and delete some files?

This is on a developmental database, so losing some data isn't extremely
critical, but I would like to know what data I'm losing if I lose data
other than just the table. There hasn't been any activity on the database
since I started trying to drop the table.


Re: [GENERAL] order by is ambiguous

2004-01-08 Thread Tom Lane
Travel Jadoo <[EMAIL PROTECTED]> writes:
> I could not find any discussion on this but imho this seems an erroneous
> error occuring now in 7.3.4 (after upgrade from 7.2.3):

> select  null::time, 'test'::varchar as time order by time;
> ERROR:  ORDER BY 'time' is ambiguous

What's erroneous about it?  You have two output columns named 'time'.

regards, tom lane



Re: [GENERAL] Compile problem on old Debian Linux with glibc 2.0.7

2004-01-08 Thread Tom Lane
Holger Marzen <[EMAIL PROTECTED]> writes:
> When I try to compile PostgreSQL 7.2.4 or 7.4.1 then I get

> |gcc -O2  -Wall -Wmissing-prototypes -Wmissing-declarations
> |-I../../../src/include -D_GNU_SOURCE   -c hba.c -o hba.o
> |hba.c: In function `parse_hba':
> |hba.c:675: warning: implicit declaration of function `gai_strerror'
> |hba.c:675: warning: format argument is not a pointer (arg 3)
> |hba.c: In function `ident_unix':
> |hba.c:1411: storage size of `peercred' isn't known
> |hba.c:1411: warning: unused variable `peercred'
> |make[3]: *** [hba.o] Error 1

The configure process assumes that any platform that supplies
getaddrinfo() will also supply gai_strerror().  Is that not true
in your version of glibc?  If not, one possible workaround is to
force use of our own getaddrinfo replacement (#undef HAVE_GETADDRINFO
and add getaddrinfo.o to LIBOBJS).

Probably the easiest way around the peercred problem is just to disable
that chunk of code ("#elif defined(SO_PEERCRED)" to "#elif 0" at line
1409 of hba.c will probably do it).  You may need to dike out the
corresponding code in client-side libpq as well.

regards, tom lane



Re: [GENERAL] Optimize query: time of "single * IN(many)" > time of "many * IN(single)"

2004-01-08 Thread Tom Lane
"Paul Janssen" <[EMAIL PROTECTED]> writes:
> Can anyone help me out with the following situation:
>(a) a single query with 550 id's in the IN-clause resulting into 800+ 
> seconds;
>(b) 550 queries with a single id in the IN-clause resulting into overall 
> time of <60 seconds;
> The table consists of 950.000 records, and the resultset consists of 205.000 
> records.

> Why is there such an extreme difference in time?

Most likely the planner is opting not to use an indexscan in the first
case.  Could we see EXPLAIN ANALYZE results for both cases?  Also, try
"SET enable_seqscan TO OFF" and then repeat EXPLAIN ANALYZE for case (a).

regards, tom lane



[GENERAL] order by is ambiguous

2004-01-08 Thread Travel Jadoo
I could not find any discussion on this, but imho this seems to be an
erroneous error occurring now in 7.3.4 (after an upgrade from 7.2.3):

select  null::time, 'test'::varchar as time order by time;
ERROR:  ORDER BY 'time' is ambiguous

The solution is to name the time datatype e.g.

select  null::time as xyz, 'test'::varchar as time order by time;

Regards,
Alfred




Re: [GENERAL] Optimize query: time of "single * IN(many)" > time

2004-01-08 Thread Alvaro Herrera
On Thu, Jan 08, 2004 at 06:28:14AM -0500, Dave Smith wrote:
> Firstly you should always provide an explain from your query before
> posting to this list.

You mean "while posting", because he can't possibly provide the explain
before having the means to do so, can he? :-)

> I think the problem is that in <7.4 PG did not use
> indexes for IN queries. This has been fixed in 7.4.

I think what was done was to optimize queries like IN (SELECT ...), but
there has been no improvement for IN (1,2,3, ... 550) like he
appears to be doing.

Maybe something to try is putting the IDs in a (temp?) table and using
the first form.
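Sketched out (names invented) -- with 7.4 the IN (SELECT ...) form can
use a hashed subplan:

CREATE TEMP TABLE wanted (id integer);
INSERT INTO wanted VALUES (17);
-- ... one row per id, or COPY the whole list in ...
SELECT * FROM big_table WHERE id IN (SELECT id FROM wanted);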

-- 
Alvaro Herrera ()
"Cada quien es cada cual y baja las escaleras como quiere" (JMSerrat)
