Re: [HACKERS] invalidating cached plans

2005-03-13 Thread Tom Lane
Neil Conway <[EMAIL PROTECTED]> writes:
> Tom Lane wrote:
>> I would like to see this folded together with creation of a centralized
>> plan caching module.

> Interesting. Can you elaborate on how you'd envision call sites making 
> use of this module?

I hadn't really gotten as far as working out a reasonable API for the
module.  The $64 question seems to be what is the input: a textual query
string, a raw parse analysis tree, or what?  And what sort of key does
the caller want to use to re-find a previously cached plan?

Probably the first thing to do is look around at the plausible users of
this thing and see what they'd find most convenient.

regards, tom lane

---(end of broadcast)---
TIP 7: don't forget to increase your free space map settings


Re: [HACKERS] int64/double for time/timestamp

2005-03-13 Thread Thomas Hallgren
Teodor Sigaev wrote:
Urgh.  This is clearly a bug.  All the code in utils/adt seems to be
correctly set up to treat TimeADT as an integral value, but then the two
macros quoted are converting the value to float8 and back again ... so
what's actually on disk is the float8 equivalent of what the int64 value
is supposed to be :-(.  As long as the macros are used *consistently* to
fetch and store time datums, no one would notice --- you could only see
a difference if the int64 values got large enough to not be represented
completely accurately as floats, which I believe is impossible for type
time.
So the fact that you're seeing a bug in btree_gist suggests that
someplace you're cheating and bypassing the FooGetDatum/DatumGetFoo
macros.
We'll obviously want to fix this going forward for efficiency reasons,
but it's an initdb-forcer because it'll change the on-disk
representation of time columns.  So we can't change it in 8.0 or before.

So, will we do it? I can do it, but I don't know: is there a place that 
records the storage version (other than the PG_VERSION file)?


When making PL/Java dynamically adapt to the setting of 
integer-datetimes, I too was bitten by this bug. Is it safe to assume 
that the fix for this will arrive in 8.1.0?

Regards,
Thomas Hallgren



Re: [HACKERS] invalidating cached plans

2005-03-13 Thread Neil Conway
Tom Lane wrote:
I would like to see this folded together with creation of a centralized
plan caching module.
Interesting. Can you elaborate on how you'd envision call sites making 
use of this module?

The difficulty with this after-the-fact approach is that the finished
plan tree may contain no reference to some objects that it in fact
depends on.  SQL functions that have been inlined are the classic
example, but consider also the idea that a plan may have been made on
the strength of a table constraint (see nearby thread about partitioning)
and needs to be invalidated if that constraint goes away.
Hmm, good point. I'm happy to blow away all cached plans when a table 
constraint changes, so that resolves that, but I agree we'll need to 
handle inlined functions specially. But perhaps it is best to not rely 
on after-the-fact Plan analysis at all, and build the capability to 
record plan dependencies directly into the planner.

-Neil


Re: [HACKERS] invalidating cached plans

2005-03-13 Thread Tom Lane
Neil Conway <[EMAIL PROTECTED]> writes:
> PostgreSQL should invalidate a cached query plan when one of the objects
> the plan depends upon is modified.

Agreed.

> Implementation sketch:

I would like to see this folded together with creation of a centralized
plan caching module.  We currently have ad-hoc plan caches in
ri_triggers.c, plpgsql, prepare.c, and probably other places.  There
is no good reason to keep reinventing that wheel, especially not given
that plan invalidation raises the complexity of the wheel by a considerable
amount.

> - when creating a plan, allow the caller to specify whether dependencies
> should be tracked or not;

I would prefer not to tie this behavior to plan creation per se, but to
plan caching.  And in a cached plan there is no "don't track" option.
HOWEVER, see next comment ...

> - to install dependencies for a plan, walk the plan's tree and remember
> the OIDs of any system objects it references.

The difficulty with this after-the-fact approach is that the finished
plan tree may contain no reference to some objects that it in fact
depends on.  SQL functions that have been inlined are the classic
example, but consider also the idea that a plan may have been made on
the strength of a table constraint (see nearby thread about partitioning)
and needs to be invalidated if that constraint goes away.

One possible approach is to do the invalidation on a sufficiently coarse
grain that we don't care.  For example, I would be inclined to make any
change in a table's schema invalidate all plans that use that table at
all; that would then subsume the constraint problem for instance.  This
doesn't solve the inlined function problem however.

For inlined functions, the only answer I see is for the planner to
somehow decorate the plan tree with a list of things it consulted
even though they might not be directly referenced in the finished
plan.

> Both cached plans and their dependencies are backend-local.

Agreed.

> - it is the responsibility of the call site managing the prepared plan
> to check whether a previously prepared plan is invalid or not -- and to
> take the necessary steps to replan it when needed.

Again, I'd rather see that folded into a central plan cache mechanism.

regards, tom lane



[HACKERS] options in conninfo

2005-03-13 Thread Christopher Kings-Lynne
Hi,
Using libpq PQconnect function, what is the syntax for the 'options' 
entry in the conninfo?  I think the docs should be updated to give an 
example..

http://www.postgresql.org/docs/8.0/interactive/libpq.html#LIBPQ-CONNECT
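For reference, a sketch of the sort of example the docs might give (the 
"options" entry passes command-line switches to the backend, typically 
-c name=value pairs; the setting names below are only illustrative):

```
host=localhost dbname=test options='-c geqo=off'
```

Note the quoting: the whole options value has to be enclosed in single 
quotes inside the conninfo string so that its embedded spaces survive.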
Thanks,
Chris


Re: [HACKERS] [BUGS] We are not following the spec for HAVING without GROUP

2005-03-13 Thread Bruno Wolff III
On Mon, Mar 14, 2005 at 01:52:59 -0500,
  Tom Lane <[EMAIL PROTECTED]> wrote:
> Bruno Wolff III <[EMAIL PROTECTED]> writes:
> > If someone did a naive implementation of first() and last() aggregates
> > for 8.1, is that something that would likely be accepted?
> 
> For the purpose that Greg is suggesting, these would have no advantage
> over min() or max() --- since the system wouldn't know how to optimize
> them --- and they'd be considerably less standard.  So my inclination
> would be to say it's a waste of effort.

The cases I was thinking of were datatypes without a defined ordering,
where max and min wouldn't be usable. But if GROUP BY is going to be
changed to allow any columns when the primary key is used in the GROUP
BY clause, I can't see any use for those functions.



Re: [HACKERS] signed short fd

2005-03-13 Thread Tom Lane
"Qingqing Zhou" <[EMAIL PROTECTED]> writes:
> So is there any special reason we don't worry that converting an integer to
> a short might lose data?

It's not possible for that to happen unless the user has set
max_files_per_process to more than 32K, so I'm not particularly
worried.  Do you know of any platforms that would be unlikely to
go belly-up with dozens or hundreds of PG backends each trying to use
tens of thousands of open files?

While I agree that storing this as int16 is micro-optimization,
I don't see it as likely to be a problem in the foreseeable
future.  If it makes you feel better, we can constrain
max_files_per_process to 32K in guc.c.

> Maybe we make the assumption that every OS will
> implement "fd" as an array index

The POSIX spec requires open() to assign fd's consecutively from zero.
http://www.opengroup.org/onlinepubs/007908799/xsh/open.html

regards, tom lane



Re: [HACKERS] [BUGS] We are not following the spec for HAVING without GROUP

2005-03-13 Thread Tom Lane
Bruno Wolff III <[EMAIL PROTECTED]> writes:
> If someone did a naive implementation of first() and last() aggregates
> for 8.1, is that something that would likely be accepted?

For the purpose that Greg is suggesting, these would have no advantage
over min() or max() --- since the system wouldn't know how to optimize
them --- and they'd be considerably less standard.  So my inclination
would be to say it's a waste of effort.

regards, tom lane



Re: [HACKERS] [BUGS] We are not following the spec for HAVING without GROUP

2005-03-13 Thread Tom Lane
Greg Stark <[EMAIL PROTECTED]> writes:
> On Wed, 2005-03-09 at 21:21 -0500, Tom Lane wrote: 
>> Comments?  Can anyone confirm whether DB2 or other databases allow
>> ungrouped column references with HAVING?

> Mysql treats ungrouped columns as an assertion that those columns will all be
> equal for the group and it can pick an arbitrary one. Essentially it does an
> implicit "first(x) AS x". The expected use case is for things like:

> select count(*),a.*
>   from a,b
>  where a.pk = b.a_fk
>  group by a.pk

[ Your comment is completely unrelated to my question, but anyway: ]

Well, that query is actually legitimate per SQL99 (though not per SQL92)
if a.pk actually is a primary key.  A correct implementation of SQL99
would deduce that the columns of A are all functionally dependent on
a.pk and not make you list them in GROUP BY.  I dunno whether mysql goes
through that pushup or whether they just assume the user knows what he's
doing (though from what I know of their design philosophy I bet the
latter).

I'm not sure if we have a TODO item about working on the SQL99 grouping
rules, but I'd like to see us implement at least the simpler cases,
such as this one.
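For concreteness, a minimal sketch of the functionally-dependent case 
(table and column names here are illustrative only):

```sql
CREATE TABLE a (pk integer PRIMARY KEY, x text);
CREATE TABLE b (a_fk integer REFERENCES a(pk));

-- Per SQL99, a.x need not be listed in GROUP BY because it is
-- functionally dependent on the grouping column a.pk:
SELECT count(*), a.pk, a.x
  FROM a JOIN b ON a.pk = b.a_fk
 GROUP BY a.pk;
```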

regards, tom lane



[HACKERS] invalidating cached plans

2005-03-13 Thread Neil Conway
PostgreSQL should invalidate a cached query plan when one of the objects
the plan depends upon is modified.
This is the common case of a more general problem: a query plan depends
on various parts of the environment at plan-creation time. That
environment includes the definitions of database objects, but also GUC
variables (most importantly search_path, but also optimizer-tuning
variables for example), the state of database statistics, and so on.
I'll leave resolution of the more general problem to someone else -- I
think if we can manage to invalidate plans automatically when dependent
objects change, that's better than nothing.
Implementation sketch:
- when creating a plan, allow the caller to specify whether dependencies
should be tracked or not; we want to track dependencies for long-lived
plans like cached plans created by PL/PgSQL, named PREPARE plans (both
protocol-level and via SQL), and so forth. We needn't track dependencies
for exec_simple_query(), and so on.
- to install dependencies for a plan, walk the plan's tree and remember
the OIDs of any system objects it references. Both cached plans and 
their dependencies are backend-local.

- if we receive a shared invalidation message for a relation referenced
by a plan, mark the plan as invalid (a new boolean flag associated with
a prepared Plan). If the sinval queue overflows, mark all plans as
invalid (well, all the plans we're tracking dependencies for, anyway). I 
haven't looked too closely at whether the existing sinval message types 
will be sufficient for invalidating cached plans; some modifications 
might be needed.

- it is the responsibility of the call site managing the prepared plan
to check whether a previously prepared plan is invalid or not -- and to
take the necessary steps to replan it when needed.
Comments welcome.
-Neil



Re: [HACKERS] [BUGS] We are not following the spec for HAVING without GROUP

2005-03-13 Thread Bruno Wolff III
On Mon, Mar 14, 2005 at 00:35:32 -0500,
  Greg Stark <[EMAIL PROTECTED]> wrote:
> 
> Bruno Wolff III <[EMAIL PROTECTED]> writes:
> 
> > If someone did a naive implementation of first() and last() aggregates
> > for 8.1, is that something that would likely be accepted?
> 
> You mean like this?
> 
>  CREATE FUNCTION first_accum(anyelement, anyelement) RETURNS anyelement
>    AS 'select coalesce($1,$2)' LANGUAGE SQL;
>  CREATE AGGREGATE first (BASETYPE = anyelement, SFUNC = first_accum,
>    STYPE = anyelement);
> 
> Though I suspect it would be faster as a native C implementation.

Pretty much that idea.

It just seemed odd to me that first and last weren't implemented, since they
seemed to be simple and could be used in cases where max or min couldn't
(because of no ordering) to do the same thing.



Re: [HACKERS] Null Value Stored for Date e TimeStamp

2005-03-13 Thread Tom Lane
Fernando Ferreira <[EMAIL PROTECTED]> writes:
> I would like know about the value stored for null value for fields date 
> and timestamp.

There isn't any; we don't store anything at all for a null.

I don't think those other DBs you mention equate a null to some
particular randomly chosen date, either.  You might be dealing with
some broken application software that uses such a thing as a way to
avoid dealing with true nulls ...

regards, tom lane



Re: [HACKERS] [BUGS] We are not following the spec for HAVING without GROUP

2005-03-13 Thread Greg Stark

Bruno Wolff III <[EMAIL PROTECTED]> writes:

> If someone did a naive implementation of first() and last() aggregates
> for 8.1, is that something that would likely be accepted?

You mean like this?

 CREATE FUNCTION first_accum(anyelement, anyelement) RETURNS anyelement
   AS 'select coalesce($1,$2)' LANGUAGE SQL;
 CREATE AGGREGATE first (BASETYPE = anyelement, SFUNC = first_accum,
   STYPE = anyelement);

Though I suspect it would be faster as a native C implementation.


-- 
greg




[HACKERS] signed short fd

2005-03-13 Thread Qingqing Zhou
We have the following definition in fd.c:

typedef struct vfd
{
 signed short fd;   /* current FD, or VFD_CLOSED if none */
 ...
} Vfd;

but seems we use Vfd.fd as an integer, say in fileNameOpenFile() we have:

 vfdP->fd = BasicOpenFile(fileName, fileFlags, fileMode);

So is there any special reason we don't worry that converting an integer to a
short might lose data? Maybe we make the assumption that every OS will
implement "fd" as an array index that is at most 2^16, but why not just
use an integer?

Regards,
Qingqing








Re: [HACKERS] date_trunc problem in HEAD

2005-03-13 Thread Robert Creager

OK.  I believe the following function provides the correct functionality. 
Agree/disagree?  If it's good, I'll figure out how to convert this little
monster to C...

CREATE OR REPLACE FUNCTION 
date_trunc_week(timestamp without time zone)
RETURNS timestamp without time zone
AS '
DECLARE
   reading_time ALIAS FOR $1;
   year timestamp;
   dow integer;
   temp interval;
   weeks text;
   adjust text;
BEGIN
   year := date_trunc( ''year''::text, reading_time );
   dow := date_part( ''dow'', year );
   IF dow >= 4 THEN
  adjust := 1 - dow || '' day'';
   ELSIF dow != 1 THEN
  adjust := dow - 6 || '' day'';
   ELSE
  adjust := ''0 day'';
   END IF;
   temp := reading_time - (year + adjust::interval); 
   weeks := trunc(date_part( ''days'', temp ) / 7) || '' weeks'';
   RETURN year + adjust::interval + weeks::interval;
END;
' LANGUAGE plpgsql;


select date_trunc_week( '2004-01-01' ); -- 2003-12-29 00:00:00
select date_trunc_week( '2005-01-01' ); -- 2004-12-27 00:00:00
select date_trunc_week( '2005-06-01' ); -- 2005-05-30 00:00:00
select date_trunc_week( '2006-01-01' ); -- 2005-12-26 00:00:00
select date_trunc_week( '2007-01-01' ); -- 2007-01-01 00:00:00

Thanks for your input on this Kurt.

Cheers,
Rob

-- 
 21:48:49 up 48 days,  3:05,  4 users,  load average: 3.80, 3.13, 2.82
Linux 2.6.5-02 #8 SMP Mon Jul 12 21:34:44 MDT 2004




Re: [HACKERS] [PERFORM] How to read query plan

2005-03-13 Thread Tom Lane
I wrote:
> Since ExecProject operations within a nest of joins are going to be
> dealing entirely with Vars, I wonder if we couldn't speed matters up
> by having a short-circuit case for a projection that is only Vars.
> Essentially it would be a lot like execJunk.c, except able to cope
> with two input tuples.  Using heap_deformtuple instead of retail
> extraction of fields would eliminate the O(N^2) penalty for wide tuples.

Actually, we already had a pending patch (from Atsushi Ogawa) that
eliminates that particular O(N^2) behavior in another way.  After
applying it, I get about a factor-of-4 reduction in the runtime for
Miroslav's example.

ExecEvalVar and associated routines are still a pretty good fraction of
the runtime, so it might still be worth doing something like the above,
but it'd probably be just a marginal win instead of a big win.

regards, tom lane



Re: [HACKERS] [BUGS] We are not following the spec for HAVING without GROUP

2005-03-13 Thread Bruno Wolff III
On Sun, Mar 13, 2005 at 23:24:18 -0500,
  Greg Stark <[EMAIL PROTECTED]> wrote:
> 
> I've noticed quite frequently scenarios where this idiom would be very handy.
> I usually either end up rewriting the query to have nested subqueries so I can
> push the grouping into the subquery. This doesn't always work though and
> sometimes I end up listing several, sometimes dozens, of columns like
> "first(x) AS x" or else end up

If someone did a naive implementation of first() and last() aggregates
for 8.1, is that something that would likely be accepted?



Re: [HACKERS] [BUGS] We are not following the spec for HAVING without GROUP

2005-03-13 Thread Greg Stark


> On Wed, 2005-03-09 at 21:21 -0500, Tom Lane wrote: 
> > Comments?  Can anyone confirm whether DB2 or other databases allow
> > ungrouped column references with HAVING?

Mysql treats ungrouped columns as an assertion that those columns will all be
equal for the group and it can pick an arbitrary one. Essentially it does an
implicit "first(x) AS x". The expected use case is for things like:

select count(*),a.*
  from a,b
 where a.pk = b.a_fk
 group by a.pk


I've noticed quite frequently scenarios where this idiom would be very handy.
I usually either end up rewriting the query to have nested subqueries so I can
push the grouping into the subquery. This doesn't always work though and
sometimes I end up listing several, sometimes dozens, of columns like
"first(x) AS x" or else end up

-- 
greg




Re: [HACKERS] TODO item: support triggers on columns

2005-03-13 Thread Bruce Momjian
Greg Sabino Mullane wrote:
> 
> > I'd like to start working on the following TODO item:
> > Referential Integrity / Support triggers on columns
> >
> > Is somebody else already working on this?
> 
> Sorry for not jumping in earlier. As Rob said, I am working on
> column-level support for triggers. I did not have my name addded
> to the TODO list as I wanted to at least get a proof of concept
> going first to make sure I could undertake it. I am partway there,
> but there are still a lot of rough edges. I guess at this point
> I should formally put my name on the TODO, and can bail out if
> it gets over my head?

Added to TODO.

-- 
  Bruce Momjian|  http://candle.pha.pa.us
  pgman@candle.pha.pa.us   |  (610) 359-1001
  +  If your life is a hard drive, |  13 Roberts Road
  +  Christ can be your backup.|  Newtown Square, Pennsylvania 19073



Re: [HACKERS] We are not following the spec for HAVING without GROUP

2005-03-13 Thread Terry Yapt
Results from Oracle 9.2.0.3 (9.2 patch 2)

===
Connected to Oracle9i Release 9.2.0.3.0 
Connected as system


SQL> 
SQL> DROP TABLE TESTTAB;

DROP TABLE TESTTAB

ORA-00942: table or view does not exist

SQL> create table TESTtab (col integer);

Table created

SQL> select 1 as col from TESTtab having 1=0;

   COL
--

SQL> select 1 as col from TESTtab having 1=1;

   COL
--

SQL> insert into TESTtab values(1);

1 row inserted

SQL> insert into TESTtab values(2);

1 row inserted

SQL> select 1 as col from TESTtab having 1=0;

   COL
--

SQL> select 1 as col from TESTtab having 1=1;

   COL
--
 1
 1

SQL> DROP TABLE TESTTAB;

Table dropped

SQL> 
===





Re: [HACKERS] [BUGS] We are not following the spec for HAVING without GROUP

2005-03-13 Thread John R Pierce
> select 1 from tab having 1=1;
> returns 2 rows

I'm curious what's in those two rows... {{1} {1}} ?


Re: [HACKERS] Bumping libpq version number?

2005-03-13 Thread Kurt Roeckx
On Fri, Mar 11, 2005 at 12:58:28PM -0500, Bruce Momjian wrote:
> Are we still bumping the libpq major version number for 8.0.2?  I think
> it is a bad idea because we will require too many client apps to be
> recompiled, and we have had few problem reports.
> 
> We do need to bump the major version number for 8.1 and I am doing that
> now.
> 
> One new problem I see is that changes to libpgport could affect client
> apps that call libpq because they pull functions from pgport via libpq. 
> For example, now that snprintf is called pg_snprintf, my initdb failed
in the regression tests because the new initdb binary used
> pg_snprintf but the installed libpq (ld.so.conf) didn't have it yet.

Does initdb call pg_snprintf directly?  Or does it call some
libpq function that calls it?

> The bottom line is that we only used to require major libpq version
> bumps when we changed the libpq API.  Now, with libpgport, I am
> concerned that changes in libpgport also will require a major version
> bump.  This adds support to the idea that we will have to do a major
> libpq bump for every major release.

Soname changes really should only happen in case of API or ABI
changes and I think you really should try to avoid them.  I'm not
sure why you think it's required now.

Also, I think it's a lot better to actually do soname changes to
libraries if things can break.  I don't see having two library
versions around as a problem, and I'd rather have that than something
I just know is not going to work.


Kurt




Re: [HACKERS] One vacuum full is not enough.

2005-03-13 Thread Christopher Browne
[EMAIL PROTECTED] (Gaetano Mendola) wrote:
> Hi all,
> running a 7.4.5 it happen to me with another table
> where a single vacuum full was not freeing enough pages,
> here the verbose vacuum full, as you can see only at
> the end:  truncated 8504 to 621 pages.
>
> I use pg_autovacuum and it's not enough. I'll schedule
> again a nightly vacuum full.

That doesn't follow as a legitimate inference.

It is fairly well certain that what you are "suffering" from are some
long running transactions that prevent dead tuples from being vacuumed
out.

That indicates that your focus on VACUUM FULL is a focus on a red
herring.

You can see that pretty easily; you're seeing VACUUM FULL requests not
"doing the trick" because the old transaction prevents _ANY_ kind of
vacuum from clearing out tuples that were 'killed' after that
transaction started.

The problem isn't particularly with your vacuum policy; it is with the
transaction handling behaviour in your application.  No vacuum policy
will ever really be "enough" until you can get the long running
transactions under control.
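One way to see the effect Christopher describes (a sketch; table and 
column names are illustrative):

```sql
-- Session 1: start a transaction and leave it open
BEGIN;
SELECT count(*) FROM some_table;
-- (no COMMIT yet)

-- Session 2: delete rows, then vacuum
DELETE FROM some_table WHERE id < 1000;
VACUUM VERBOSE some_table;
-- The deleted tuples are reported as "dead row versions [that]
-- cannot be removed yet" until session 1's transaction ends.
```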
-- 
(format nil "[EMAIL PROTECTED]" "cbbrowne" "acm.org")
http://linuxdatabases.info/info/lsf.html
"I've run  DOOM more in  the last  few days than  I have the  last few
months.  I just love debugging ;-)" -- Linus Torvalds



[HACKERS] Null Value Stored for Date e TimeStamp

2005-03-13 Thread Fernando Ferreira
Hi,
I would like to know what value is stored for a null value in date
and timestamp fields.

Sample:

   SQL Server          1753/01/01
   Oracle              0001/01/01
   Informix            1899/01/01
   Visual FoxPro DBF   1899/12/30
I used a tool to migrate the database and specified the null values for
date fields in the source and destination databases.

I'm converting DBF to PostgreSQL, and the null dates from the DBF files
are stored as 1899/12/30 in the PostgreSQL tables.

Thanks
Fernando Ferreira


Re: [HACKERS] A bad plan

2005-03-13 Thread Gaetano Mendola

Josh Berkus wrote:
> Gaetano,
> 
> 
>>Hi all,
>>running a 7.4.5 engine, I'm facing this bad plan:
> 
> 
> Please take this to the PGSQL-PERFORMANCE mailing list; that's what that list 
> exists for.
> 
> Or IRC, where I know your are sometimes.  But not -Hackers.

Sorry, I was convinced I had sent this email to -performance (as I
usually do).



Regards
Gaetano Mendola






Re: [HACKERS] [BUGS] We are not following the spec for HAVING without GROUP

2005-03-13 Thread Mark Shewmaker
On Wed, 2005-03-09 at 21:21 -0500, Tom Lane wrote: 
> Comments?  Can anyone confirm whether DB2 or other databases allow
> ungrouped column references with HAVING?

In Sybase:

1> select 2 as id, max(myfield) from mytable where 2=1
2> go
 id
 --- --
   2 NULL

(1 row affected)
1> select 2 as id, max(myfield) from mytable having 2=1
2> go
 id
 --- --

(0 rows affected)

-- 
Mark Shewmaker
[EMAIL PROTECTED]




Re: [HACKERS] We are not following the spec for HAVING without GROUP BY

2005-03-13 Thread Michael Wimmer
Just a quick test on the DBMSes I have available at work:

IBM Informix Dynamic Server Version 10.00.TC1TL
  Error: The column (id) must be in the GROUP BY list.

Oracle 9.2.0.11
  Returns the same records as if WHERE had been used.

MSSQL Express 2005 Beta February TP
  Error: Column 'tab.id' is invalid in the select list because it is not
  contained in either an aggregate function or the GROUP BY clause.

MySQL 5.0.2
  Returns the same records as if WHERE had been used.

Sybase 12.5
  Returns the same records as if WHERE had been used.

Firebird 1.5.2
  Error: GDS Exception. 335544569. Dynamic SQL Error
  SQL error code = -104
  No message for code 335544824 found.
  null

Best regards,
Michael Wimmer


Re: [HACKERS] One vacuum full is not enough.

2005-03-13 Thread Gaetano Mendola

Hannu Krosing wrote:
> On Tuesday, 8 March 2005 at 00:52+0100, Gaetano Mendola wrote:
> 
>>Hi all,
>>running a 7.4.5 it happen to me with another table
>>where a single vacuum full was not freeing enough pages,
>>here the verbose vacuum full, as you can see only at
>>the end:  truncated 8504 to 621 pages.
>>
>>I use pg_autovacuum and it's not enough. I'll schedule
>>again a nightly vacuum full.
> 
> 
> You may have too few fsm pages, so new inserts/updates don't use all the 
> pages freed by vacuums.
> 

I don't think that is the case:


 max_fsm_pages  | 200
 max_fsm_relations  | 1000

and when I was doing the vacuum full these settings were above the
real needs.



Regards
Gaetano Mendola








Re: [BUGS] [HACKERS] We are not following the spec for HAVING without GROUP BY

2005-03-13 Thread Gill, Jerry T.
Here is your SQL run in a DB2 database.
connect to phoenix

   Database Connection Information

 Database server= DB2/LINUX 8.1.5
 SQL authorization ID   = GILL
 Local database alias   = PHOENIX


create table tab (col integer)
DB20000I  The SQL command completed successfully.

select 1 from tab having 1=0

1
---

  0 record(s) selected.


select 1 from tab having 1=1

1
---
  1

  1 record(s) selected.


insert into tab values(1)
DB20000I  The SQL command completed successfully.

insert into tab values(2)
DB20000I  The SQL command completed successfully.

select 1 from tab having 1=0

1
---

  0 record(s) selected.


select 1 from tab having 1=1

1
---
  1

  1 record(s) selected.

Hope that helps.
-Jgill

-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] Behalf Of Tom Lane
Sent: Thursday, March 10, 2005 11:45 AM
To: pgsql-hackers@postgresql.org; pgsql-bugs@postgresql.org
Subject: Re: [BUGS] [HACKERS] We are not following the spec for HAVING
without GROUP BY 


I wrote:
> This is quite clear that the output of a HAVING clause is a "grouped
> table" no matter whether the query uses GROUP BY or aggregates or not.

> What that means is that neither the HAVING clause nor the targetlist
> can use any ungrouped columns except within aggregate calls; that is,
>   select col from tab having 2>1
> is in fact illegal per SQL spec, because col isn't a grouping column
> (there are no grouping columns in this query).

Actually, it's even more than that: a query with HAVING and no GROUP BY
should always return 1 row (if the HAVING succeeds) or 0 rows (if not).
If there are no aggregates, the entire from/where clause can be thrown
away, because it can have no impact on the result!

Would those of you with access to other DBMSes try this:

create table tab (col integer);
select 1 from tab having 1=0;
select 1 from tab having 1=1;
insert into tab values(1);
insert into tab values(2);
select 1 from tab having 1=0;
select 1 from tab having 1=1;

I claim that a SQL-conformant database will return 0, 1, 0, and 1 rows
from the 4 selects --- that is, the contents of tab make no difference
at all.  (MySQL returns 0, 0, 0, and 2 rows, so they are definitely
copying our mistake...)

regards, tom lane



Re: [HACKERS] [ADMIN] Too frequent warnings for wraparound failure

2005-03-13 Thread Milen A. Radev

Tom Lane wrote:
> I wrote:
> 
>>"Milen A. Radev" <[EMAIL PROTECTED]> writes:
>>
>>>I review the log every morning. In the beginning I got "wraparound
>>>failure" warnings every third day. But for the past week I have gotten those
>>>warnings every day. We have one table in one database where there are a lot
>>>of inserts, but not that many - around 30-40 thousand per day.
> 
> 
>>Are you really doing half a billion transactions a day?
> 
> 
> I thought of another mechanism that wouldn't require such a preposterous
> load, only half a billion transactions since initdb.  (How old is this
> installation, anyway, and which PG version?)

It was created on February 16 and runs 7.4.7 on Debian GNU/Linux 3.0 (Woody)
(kernel 2.6.10).

> 
> If you are creating new databases every day and you do it by cloning
> template0, then the new databases would come into existence with 
> datfrozenxid equal to template0's.  Once template0 is more than half a
> billion transactions old, you'd start seeing the warning.

No, we do not do anything like that. We imported all databases from a
backup created by using pg_dump.

> 
> This is relatively harmless, but probably we should try to mask it.
> We could make CREATE DATABASE set datfrozenxid to current time when
> cloning a database that has datallowconn false, on the assumption that
> the source DB is entirely frozen and so there's nothing to vacuum yet.
> 
>   regards, tom lane



--
Milen A. Radev

---(end of broadcast)---
TIP 8: explain analyze is your friend


[HACKERS] Grant ALL on schema

2005-03-13 Thread Hemapriya
Hi,

Does anybody know how the following statement works?

Grant ALL on SCHEMA test to user 'user1';

Will the user have all privileges on all the objects/tables in schema test,
or can he only create new objects in that schema?

Thanks
Priya




---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
(send "unregister YourEmailAddressHere" to [EMAIL PROTECTED])


Re: [BUGS] [HACKERS] We are not following the spec for HAVING without GROUP BY

2005-03-13 Thread Mark Shewmaker
On Thu, Mar 10, 2005 at 12:44:50PM -0500, Tom Lane wrote:
> 
> Would those of you with access to other DBMSes try this:
> 
> create table tab (col integer);
> select 1 from tab having 1=0;
> select 1 from tab having 1=1;
> insert into tab values(1);
> insert into tab values(2);
> select 1 from tab having 1=0;
> select 1 from tab having 1=1;
> 
> I claim that a SQL-conformant database will return 0, 1, 0, and 1 rows
> from the 4 selects --- that is, the contents of tab make no difference
> at all.

Sybase ASE version 12.5.2 returns 0, 0, 0, and 1 rows.

A plain "select 1 from tab" returns zero rows when tab is empty.

-- 
Mark Shewmaker
[EMAIL PROTECTED]

---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]


Re: [HACKERS] Bumping libpq version number?

2005-03-13 Thread Kurt Roeckx
On Fri, Mar 11, 2005 at 01:29:46PM -0500, Bruce Momjian wrote:
> Kurt Roeckx wrote:
> > 
> > Does initdb call pg_snprintf directly?  Or does it call some
> > libpq function that calls it?
> 
> With the current CVS, initdb calls pg_snprintf() on my platform, whose
> libc printf doesn't support %$ natively.  Now, initdb could pull
> from pgport itself, but I think it pulled from libpq first.  Perhaps we
> should reorder how those libraries appear in the link line, but I think
> that would fix just this case, not the more general one of pg client
> apps.

Do "client apps" ever link to pgport itself?  I assume only
"internal" applictions link to it?

I assume libpq is staticly linked to pgport and is exporting
symbols it shouldn't.  Can we prevent it from exporting those
symbols?


Kurt


---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [BUGS] [HACKERS] We are not following the spec for HAVING without

2005-03-13 Thread Gary Doades
Tom Lane wrote:
Would those of you with access to other DBMSes try this:
create table tab (col integer);
select 1 from tab having 1=0;
select 1 from tab having 1=1;
insert into tab values(1);
insert into tab values(2);
select 1 from tab having 1=0;
select 1 from tab having 1=1;
I claim that a SQL-conformant database will return 0, 1, 0, and 1 rows
MS SQL Server 2000 returns 0, 1, 0 and 1 rows correctly.
Cheers,
Gary.
---(end of broadcast)---
TIP 7: don't forget to increase your free space map settings


Re: [HACKERS] Bumping libpq version number?

2005-03-13 Thread Kurt Roeckx
On Fri, Mar 11, 2005 at 04:49:23PM -0500, Bruce Momjian wrote:
> 
> In fact, based on the few complaints we have heard about the current
> situation, I am sure we are going to get many more complaints if we bump
> up the major version in 8.0.2.

I think it's better to have people complain that they should
rebuild than complain that it's broken.

Note that you might actually force other people (OSes) to do the
transition, not break their systems.  You make it a lot
easier for them if you do the transition yourself, so the
sonames do not get out of sync.  It's quite annoying that you do
not bump the soname when you should.


Kurt


---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]


[HACKERS] where is the locale set for each server instance?

2005-03-13 Thread Palle Girgensohn
Hi!
I'm plugging ICU into PostgreSQL for unicode collation, since FreeBSD has no 
support for unicode collation. It works fine, but I cannot find out where 
to set the default locale for each backend instance. I want to use the 
LC_COLLATE used in initdb; for now I've just hard-wired it for my own needs. I 
tried backend/access/transam/xlog.c:ReadControlFile, but it is not 
sufficient.

in backend/main/main.c:
/*
 * Set up locale information from environment.  Note that LC_CTYPE and
 * LC_COLLATE will be overridden later from pg_control if we are in an
 * already-initialized database.
So, I'm trying to find out where LC_COLLATE is overridden. Any tips?
Thanks,
Palle
---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?
  http://www.postgresql.org/docs/faq


Re: [HACKERS] [PERFORM] How to read query plan

2005-03-13 Thread Tom Lane
=?windows-1250?Q?Miroslav_=8Aulc?= <[EMAIL PROTECTED]> writes:
>> Is the data confidential?  If you'd be willing to send me a pg_dump
>> off-list, I'd like to replicate this test and try to see where the time
>> is going.
>> 
> Thank you very much for your offer. The data are partially confidential 
> so I hashed some of the text information and changed some values (not 
> the values for the JOINs) so I could send it to you. I've checked the 
> EXPLAIN ANALYZE to see if anything changed, and the result is nearly the same 
> (maybe circa 1 sec slower - maybe because the hash caused the text data to 
> be longer).

No problem; thank you for supplying the test case.  What I find is
rather surprising: most of the runtime can be blamed on disassembling
and reassembling tuples during the join steps.  Here are the hot spots
according to gprof:

-----------------------------------------------
                1.27    7.38    8277/103737     ExecScan [16]
                2.93   17.02   19092/103737     ExecNestLoop [14]
                3.91   22.70   25456/103737     ExecMergeJoin [13]
                7.81   45.40   50912/103737     ExecHashJoin [12]
[9]    86.3   15.92   92.50  103737             ExecProject [9]
                7.65   76.45 8809835/9143692    ExecEvalVar [10]
                3.42    4.57  103737/103775     heap_formtuple [17]
                0.03    0.24   12726/143737     ExecMakeFunctionResultNoSets [24]
                0.02    0.12  103737/290777     ExecStoreTuple [44]
                0.01    0.00       2/2          ExecEvalFunc [372]
                0.00    0.00       2/22         ExecMakeFunctionResult [166]
-----------------------------------------------
                0.00    0.00      42/9143692    ExecEvalFuncArgs [555]
                0.05    0.51   59067/9143692    ExecHashGetHashValue [32]
                0.24    2.38  274748/9143692    ExecMakeFunctionResultNoSets [24]
                7.65   76.45 8809835/9143692    ExecProject [9]
[10]   69.5    7.94   79.34 9143692             ExecEvalVar [10]
               79.34    0.00 8750101/9175517    nocachegetattr [11]
-----------------------------------------------

I think the reason this is popping to the top of the runtime is that the
joins are so wide (an average of ~85 columns in a join tuple according
to the numbers above).  Because there are lots of variable-width columns
involved, most of the time the fast path for field access doesn't apply
and we end up going to nocachegetattr --- which itself is going to be
slow because it has to scan over so many columns.  So the cost is
roughly O(N^2) in the number of columns.
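The quadratic cost described above can be sketched abstractly. This is an illustrative Python model (not PostgreSQL code): retail extraction of column i in a tuple with variable-width columns must re-walk columns 0..i-1 each time, much like nocachegetattr, while a single-pass "deform" computes every offset in one walk.

```python
# Sketch of why per-field access to variable-width columns is O(N^2)
# while a one-pass deform (like heap_deformtuple) is O(N).
def getattr_retail(widths, i):
    """Cost of fetching column i: walk all preceding columns to find its offset."""
    steps = 0
    for _ in range(i):   # no cached offsets, so re-derive from the start
        steps += 1
    return steps

def deform_all(widths):
    """Cost of one pass that computes every column offset."""
    return len(widths)

widths = [7] * 85  # ~85 variable-width columns per join tuple, as above
retail = sum(getattr_retail(widths, i) for i in range(len(widths)))
assert retail == 85 * 84 // 2      # quadratic: 3570 offset computations
assert deform_all(widths) == 85    # linear: one walk over the tuple
```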

As a short-term hack, you might be able to improve matters if you can
reorder your LEFT JOINs to have the minimum number of columns
propagating up from the earlier join steps.  In other words make the
later joins add more columns than the earlier, as much as you can.

This is actually good news, because before 8.0 we had much worse
problems than this with extremely wide tuples --- there were O(N^2)
behaviors all over the place due to the old List algorithms.  Neil
Conway's rewrite of the List code got rid of those problems, and now
we can see the places that are left to optimize.  The fact that there
seems to be just one is very nice indeed.

Since ExecProject operations within a nest of joins are going to be
dealing entirely with Vars, I wonder if we couldn't speed matters up
by having a short-circuit case for a projection that is only Vars.
Essentially it would be a lot like execJunk.c, except able to cope
with two input tuples.  Using heap_deformtuple instead of retail
extraction of fields would eliminate the O(N^2) penalty for wide tuples.

regards, tom lane

---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [HACKERS] date_trunc problem in HEAD

2005-03-13 Thread Kurt Roeckx
On Sun, Mar 13, 2005 at 12:48:00PM -0700, Robert Creager wrote:
> When grilled further on (Sun, 13 Mar 2005 19:40:02 +0100),
> Kurt Roeckx <[EMAIL PROTECTED]> confessed:
> 
> > > Attached is a patch against HEAD for your review.
> > 
> > It has this comment in it:
> >/* the new year cannot be greater than the
> > * original year, so we subtract one if it is
> > 
> 
> Not doing too well here.  When will the ISO year be greater than the current
> year?  But, what I did is incorrect and 2006-01-01 shows the next problem 
> date:

The iso year can be greater than the current year at the end of
the year and smaller on the start of the year.  You have either
of those at every year change.
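Both directions are easy to check with Python's datetime module, which implements the same ISO-8601 week-date rules:

```python
from datetime import date

# The ISO year differs from the calendar year at both year boundaries:
# 2005-01-01 (a Saturday) falls in ISO week 53 of 2004 -> ISO year smaller.
assert date(2005, 1, 1).isocalendar()[:2] == (2004, 53)

# 2002-12-30 (a Monday) falls in ISO week 1 of 2003 -> ISO year greater.
assert date(2002, 12, 30).isocalendar()[:2] == (2003, 1)
```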

> SELECT '' AS date_trunc_week, date_trunc( 'week', timestamp '2006-01-01' ) AS
> week_trunc;
> 
>  date_trunc_week | week_trunc  
> -+-
>  | 2006-12-25 00:00:00

I expected 2005-12-26 here.

> SELECT '' AS date_trunc_week, date_trunc( 'week', timestamp '2005-01-01' ) AS
> week_trunc;
>  date_trunc_week | week_trunc  
> -+-
>  | 2005-01-02 00:00:00

That's a higher date, and obviously looks wrong.  Here I expected
2004-12-27.

> The date should be 2005-01-03.  Sigh. Maybe I should have just submitted a bug
> report about it...

That's the next week, and not what I would expect to get as
result.
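The behavior being asked for can be sketched concisely: truncate to the Monday that starts the date's ISO week, which is never later than the input date. A minimal Python sketch (an illustration of the expected semantics, not the patch under review):

```python
from datetime import date, timedelta

def week_trunc(d):
    """Truncate d to the Monday beginning its ISO week."""
    return d - timedelta(days=d.isoweekday() - 1)

# Matches the values expected in this thread:
assert week_trunc(date(2006, 1, 1)) == date(2005, 12, 26)
assert week_trunc(date(2005, 1, 1)) == date(2004, 12, 27)
assert week_trunc(date(2005, 1, 3)) == date(2005, 1, 3)  # Mondays are unchanged
```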


Kurt


---(end of broadcast)---
TIP 8: explain analyze is your friend


[HACKERS] materialized views

2005-03-13 Thread Oleg Bartunov
Hi there,
I read Jonathan Gardner's 
http://jonathangardner.net/PostgreSQL/materialized_views/matviews.html
and am wondering if there is any work on materialized views?

I found academic project http://research.csc.ncsu.edu/selftune/
which has implementation of matviews in postgresql 7.3.4.
Code is available under BSD license.
Regards,
Oleg
_
Oleg Bartunov, sci.researcher, hostmaster of AstroNet,
Sternberg Astronomical Institute, Moscow University (Russia)
Internet: oleg@sai.msu.su, http://www.sai.msu.su/~megera/
phone: +007(095)939-16-83, +007(095)939-23-83
---(end of broadcast)---
TIP 9: the planner will ignore your desire to choose an index scan if your
 joining column's datatypes do not match


Re: [HACKERS] Bumping libpq version number?

2005-03-13 Thread Andrew - Supernews
On 2005-03-13, Bruce Momjian  wrote:
> Andrew - Supernews wrote:
>> On 2005-03-11, Bruce Momjian  wrote:
>> > I can think of no way to prevent it, except on Win32 that has an exports
>> > file.
>> 
>> At least a significant minority, if not an actual majority, of Unix
>> platforms do allow this; including (to my knowledge) Solaris, AIX and
>> everything using ELF with GNU ld (therefore including Linux and FreeBSD).
>
> OK, how is this done with ELF?

Using version commands in a linker script file.

The minimal example looks about like this (for GNU ld, for Solaris ld
leave off the outer VERSION { } wrapper):

VERSION {
{
global: foo; bar; baz;
local: *;
};
}

This makes all symbols other than "foo", "bar" and "baz" into local symbols,
not visible outside the library.

This much of the Solaris-style symbol versioning support is handled entirely
in the linker and thus should work on any system with ELF and GNU ld, or on
Solaris with the native linker. (The rest requires runtime support, which
probably doesn't exist on platforms other than Solaris.)

-- 
Andrew, Supernews
http://www.supernews.com - individual and corporate NNTP services

---(end of broadcast)---
TIP 9: the planner will ignore your desire to choose an index scan if your
  joining column's datatypes do not match


Re: [HACKERS] Cost of XLogInsert CRC calculations

2005-03-13 Thread Simon Riggs
On Fri, 2005-03-11 at 19:31 +0100, Hans-JÃrgen SchÃnig wrote:
> > One of the things I was thinking about was whether we could use up those
> > cycles more effectively. If we were to include a compression routine
> > before we calculated the CRC that would 
> > - reduce the size of the blocks to be written, hence reduce size of xlog
> > - reduce the following CRC calculation
> > 
> > I was thinking about using a simple run-length encoding to massively
> > shrink half-empty blocks with lots of zero padding, but we've already
> > got code to LZW the data down also.

> I think having a compression routine in there could make real sense.
> We have done some major I/O testing involving compression for a large 
> customer some time ago. We have seen that compressing / decompressing on 
> the fly is in MOST cases much faster than uncompressed I/O (try a simple 
> "cat file | ..." vs. "zcat file.gz | ...") - the zcat version will be 
> faster on all platforms we have tried (Linux, AIX, Sun on some SAN 
> system, etc. ...).
> Also, when building up a large database within one transaction the xlog 
> will eat a lot of storage - this can be quite annoying when you have to 
> deal with a lot of data.

Agreed.

> Are there any technical reasons which would prevent somebody from 
> implementing compression?

Not currently, that I'm aware of. But the way additional blocks are
added to xlog records would need to be changed to allow for variable
length output. 

It's on my list...

Best Regards, Simon Riggs


---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster


Re: [HACKERS] date_trunc problem in HEAD

2005-03-13 Thread Robert Creager
When grilled further on (Sun, 13 Mar 2005 19:40:02 +0100),
Kurt Roeckx <[EMAIL PROTECTED]> confessed:

> > Attached is a patch against HEAD for your review.
> 
> It has this comment in it:
>/* the new year cannot be greater than the
> * original year, so we subtract one if it is
> 

Not doing too well here.  When will the ISO year be greater than the current
year?  But, what I did is incorrect and 2006-01-01 shows the next problem date:

SELECT '' AS date_trunc_week, date_trunc( 'week', timestamp '2006-01-01' ) AS
week_trunc;

 date_trunc_week | week_trunc  
-+-
 | 2006-12-25 00:00:00

Heck, even what I submitted, test and all is wrong:

SELECT '' AS date_trunc_week, date_trunc( 'week', timestamp '2005-01-01' ) AS
week_trunc;
 date_trunc_week | week_trunc  
-+-
 | 2005-01-02 00:00:00

The date should be 2005-01-03.  Sigh. Maybe I should have just submitted a bug
report about it...

So, unless someone else knows how to do this correctly, I'll have to actually
think about it.

Cheers,
Rob

-- 
 12:34:02 up 47 days, 17:50,  4 users,  load average: 2.34, 2.60, 2.55
Linux 2.6.5-02 #8 SMP Mon Jul 12 21:34:44 MDT 2004




Re: [HACKERS] Bumping libpq version number?

2005-03-13 Thread Bruce Momjian
Andrew - Supernews wrote:
> On 2005-03-11, Bruce Momjian  wrote:
> > Kurt Roeckx wrote:
> >> I assume libpq is staticly linked to pgport and is exporting
> >> symbols it shouldn't.  Can we prevent it from exporting those
> >> symbols?
> >
> > I can think of no way to prevent it, except on Win32 that has an exports
> > file.
> 
> At least a significant minority, if not an actual majority, of Unix
> platforms do allow this; including (to my knowledge) Solaris, AIX and
> everything using ELF with GNU ld (therefore including Linux and FreeBSD).

OK, how is this done with ELF?

-- 
  Bruce Momjian|  http://candle.pha.pa.us
  pgman@candle.pha.pa.us   |  (610) 359-1001
  +  If your life is a hard drive, |  13 Roberts Road
  +  Christ can be your backup.|  Newtown Square, Pennsylvania 19073

---(end of broadcast)---
TIP 9: the planner will ignore your desire to choose an index scan if your
  joining column's datatypes do not match


Re: [HACKERS] TODO item: support triggers on columns

2005-03-13 Thread Tom Lane
"Greg Sabino Mullane" <[EMAIL PROTECTED]> writes:
> By the way, does anyone know what the tgattr field in pg_triggers
> is for?

http://developer.postgresql.org/docs/postgres/catalog-pg-trigger.html
says "currently unused" and a desultory search through the sources
confirms that.  I imagine it was put there with the intent of adding
column info later on.  However, being a fixed-width array, I don't
actually see that it would be useful for anything much ... you'd at
least want to change it to a normal array.

BTW, if you don't have a convenient way of grepping the entire PG
source tree and quickly viewing all the hits on a particular symbol,
I *highly* recommend setting up something that can do that.  I use
emacs + glimpse but there are probably newer tools out there.

regards, tom lane

---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
(send "unregister YourEmailAddressHere" to [EMAIL PROTECTED])


Re: [HACKERS] date_trunc problem in HEAD

2005-03-13 Thread Kurt Roeckx
On Sun, Mar 13, 2005 at 11:12:32AM -0700, Robert Creager wrote:
> 
> Hey All,
> 
> I goofed with the patch I submitted last year for adding 'week' capability to
> the date_trunc function.
> 
> Attached is a patch against HEAD for your review.

It has this comment in it:
   /* the new year cannot be greater than the
* original year, so we subtract one if it is

Can you please explain that?

The "iso" year can be both greater and smaller than the current
year.


Kurt


---(end of broadcast)---
TIP 6: Have you searched our list archives?

   http://archives.postgresql.org


[HACKERS] date_trunc problem in HEAD

2005-03-13 Thread Robert Creager

Hey All,

I goofed with the patch I submitted last year for adding 'week' capability to
the date_trunc function.

Attached is a patch against HEAD for your review.

Cheers,
Rob

-- 
 11:00:49 up 47 days, 16:17,  4 users,  load average: 3.01, 2.37, 2.37
Linux 2.6.5-02 #8 SMP Mon Jul 12 21:34:44 MDT 2004


date_trunc.patch
Description: Binary data




Re: [HACKERS] TODO item: support triggers on columns

2005-03-13 Thread Chris Mair

> > I'd like to start working on the following TODO item:
> > Referential Integrity / Support triggers on columns
> >
> > Is somebody else already working on this?
> 
> Sorry for not jumping in earlier. As Rob said, I am working on
> column-level support for triggers. I did not have my name added
> to the TODO list as I wanted to at least get a proof of concept
> going first to make sure I could undertake it. I am partway there,
> but there are still a lot of rough edges. I guess at this point
> I should formally put my name on the TODO, and can bail out if
> it gets over my head?

Ok!
Keep us up to date.

Bye, Chris :)


---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
(send "unregister YourEmailAddressHere" to [EMAIL PROTECTED])


Re: [HACKERS] Strange postgres planner behaviour

2005-03-13 Thread Oleg Bartunov
On Sat, 12 Mar 2005, Tom Lane wrote:
"Sergey E. Koposov" <[EMAIL PROTECTED]> writes:
I want to describe some strange behaviour of the postgres planner.
It's not strange exactly: the mechanism for OR indexscan and the
mechanism for nestloop join indexscan are separate and don't talk
to each other.  So you don't get to have a join inner indexscan that
involves an OR condition.
I have some vague ideas about replacing orindxpath.c entirely, once
we have some infrastructure for doing OR indexscans via bitmap union.
But it's not just around the corner.
for 8.1, probably ?
In the meantime you might try expressing your query as a UNION.
Hmm, I'm wondering if the table will be read as many times as the number
of UNIONs, or if there is some optimization?

regards, tom lane
---(end of broadcast)---
TIP 6: Have you searched our list archives?
  http://archives.postgresql.org
Regards,
Oleg
_
Oleg Bartunov, sci.researcher, hostmaster of AstroNet,
Sternberg Astronomical Institute, Moscow University (Russia)
Internet: oleg@sai.msu.su, http://www.sai.msu.su/~megera/
phone: +007(095)939-16-83, +007(095)939-23-83
---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?
  http://www.postgresql.org/docs/faq


Re: [HACKERS] TODO item: support triggers on columns

2005-03-13 Thread Greg Sabino Mullane



> I'd like to start working on the following TODO item:
> Referential Integrity / Support triggers on columns
>
> Is somebody else already working on this?

Sorry for not jumping in earlier. As Rob said, I am working on
column-level support for triggers. I did not have my name added
to the TODO list as I wanted to at least get a proof of concept
going first to make sure I could undertake it. I am partway there,
but there are still a lot of rough edges. I guess at this point
I should formally put my name on the TODO, and can bail out if
it gets over my head?

By the way, does anyone know what the tgattr field in pg_triggers
is for? It's allocated but never populated, and would be perfect
for my purposes on this patch, but I don't want to use it if it is
being reserved for something else.

--
Greg Sabino Mullane [EMAIL PROTECTED]
PGP Key: 0x14964AC8 200503121915
http://biglumber.com/x/web?pk=2529DF6AB8F79407E94445B4BC9B906714964AC8



---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [HACKERS] REL8_0_STABLE and 8.0.1 release client logging difference

2005-03-13 Thread Oleg Bartunov
On Sun, 13 Mar 2005, Andrew Dunstan wrote:
Oleg Bartunov said:
On Sat, 12 Mar 2005, Tom Lane wrote:
Oleg Bartunov  writes:
REL8_0_STABLE:
tycho=# select * from pg_stas where srelnae='tycho';
tycho=# \q
Works fine for me in REL8_0_STABLE tip ... and it's working fine on
all the build farm machines too, because this would surely cause all
the regression tests to fail.  Sure you didn't mistakenly change
client_min_messages ?
I found the problem ! It's  --with-ssl support I tried in
REL8_0_STABLE. After recompiling REL8_0_STABLE without ssl everything
works fine.
I didn't check 8.0.1 release with ssl, but at work I have no problem.

I take it you mean "--with-openssl". If so, Tom's objection still applies -
see for example
http://www.pgbuildfarm.org/cgi-bin/show_log.pl?nm=dog&dt=2005-03-12%2017:52:00
which is the log of a build of the REL8_0_STABLE branch with openssl.
Please tell us more about the platform you are using, if this is still a
problem.
Sure, "--with-openssl". I solved my problem! It was a result of my glibc
upgrade and I had to reinstall the openssl libraries and recompile postgresql.


cheers
andrew

Regards,
Oleg
_
Oleg Bartunov, sci.researcher, hostmaster of AstroNet,
Sternberg Astronomical Institute, Moscow University (Russia)
Internet: oleg@sai.msu.su, http://www.sai.msu.su/~megera/
phone: +007(095)939-16-83, +007(095)939-23-83
---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
 subscribe-nomail command to [EMAIL PROTECTED] so that your
 message can get through to the mailing list cleanly


Re: [HACKERS] Bumping libpq version number?

2005-03-13 Thread Andrew - Supernews
On 2005-03-11, Bruce Momjian  wrote:
> Kurt Roeckx wrote:
>> I assume libpq is staticly linked to pgport and is exporting
>> symbols it shouldn't.  Can we prevent it from exporting those
>> symbols?
>
> I can think of no way to prevent it, except on Win32 that has an exports
> file.

At least a significant minority, if not an actual majority, of Unix
platforms do allow this; including (to my knowledge) Solaris, AIX and
everything using ELF with GNU ld (therefore including Linux and FreeBSD).

-- 
Andrew, Supernews
http://www.supernews.com - individual and corporate NNTP services

---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
(send "unregister YourEmailAddressHere" to [EMAIL PROTECTED])


Re: [HACKERS] Question about encoding combinations

2005-03-13 Thread Peter Eisentraut
Bruce Momjian wrote:
> Why is BIG5 listed as not allowing UTF8 on the client, but you can
> have UTF8 on the server and BIG5 on the client?

Because BIG5 is a client-only encoding.

> Why can't you have UTF8 on the server and client?

Sure you can.

> Why can't you have MULE_INTERNAL on the server and client?

I think it should work, although I have no experience with that 
encoding.

> Why can't you have UTF8 on the server and SQL_ASCII on the client?

Sure you can, but it doesn't make much sense, because SQL_ASCII is not 
really an encoding.

> Since they all support UTF8, why can't we just allow any
> server/client combination?

Because not all encodings can encode all characters.  UTF8 doesn't help 
that.

-- 
Peter Eisentraut
http://developer.postgresql.org/~petere/

---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


Re: [HACKERS] REL8_0_STABLE and 8.0.1 release client logging difference

2005-03-13 Thread Andrew Dunstan
Oleg Bartunov said:
> On Sat, 12 Mar 2005, Tom Lane wrote:
>
>> Oleg Bartunov  writes:
>>> REL8_0_STABLE:
>>> tycho=# select * from pg_stas where srelnae='tycho';
>>> tycho=# \q
>>
>> Works fine for me in REL8_0_STABLE tip ... and it's working fine on
>> all the build farm machines too, because this would surely cause all
>> the regression tests to fail.  Sure you didn't mistakenly change
>> client_min_messages ?
>
> I found the problem ! It's  --with-ssl support I tried in
> REL8_0_STABLE. After recompiling REL8_0_STABLE without ssl everything
> works fine.
>
> I didn't check 8.0.1 release with ssl, but at work I have no problem.
>
>

I take it you mean "--with-openssl". If so, Tom's objection still applies -
see for example
http://www.pgbuildfarm.org/cgi-bin/show_log.pl?nm=dog&dt=2005-03-12%2017:52:00
which is the log of a build of the REL8_0_STABLE branch with openssl.

Please tell us more about the platform you are using, if this is still a
problem.

cheers

andrew



---(end of broadcast)---
TIP 7: don't forget to increase your free space map settings


Re: [HACKERS] REL8_0_STABLE and 8.0.1 release client logging difference

2005-03-13 Thread Oleg Bartunov
On Sat, 12 Mar 2005, Tom Lane wrote:
Oleg Bartunov  writes:
REL8_0_STABLE:
tycho=# select * from pg_stas where srelnae='tycho';
tycho=# \q
Works fine for me in REL8_0_STABLE tip ... and it's working fine on all
the build farm machines too, because this would surely cause all the
regression tests to fail.  Sure you didn't mistakenly change
client_min_messages ?
I found the problem! It's the --with-ssl support I tried in REL8_0_STABLE.
After recompiling REL8_0_STABLE without ssl, everything works fine.
I didn't check the 8.0.1 release with ssl, but at work I have no problem.

regards, tom lane
Regards,
Oleg
_
Oleg Bartunov, sci.researcher, hostmaster of AstroNet,
Sternberg Astronomical Institute, Moscow University (Russia)
Internet: oleg@sai.msu.su, http://www.sai.msu.su/~megera/
phone: +007(095)939-16-83, +007(095)939-23-83
---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?
  http://www.postgresql.org/docs/faq