Bruce Momjian wrote:
KRB4 removal patch (Magnus)
This is done.
There are also two PL/PgSQL patches from Pavel that need to be reviewed
and applied:
http://archives.postgresql.org/pgsql-hackers/2005-06/msg01202.php
http://archives.postgresql.org/pgsql-patches/2005-06/msg00475.php
I'll
Alfranio Correia Junior wrote:
I think it is ok now.
However, I corrected the indentation manually.
I could not run some of the tools, namely entab:
/usr/lib/gcc-lib/i386-redhat-linux/3.3.3/include/varargs.h:4:2: #error
GCC no longer implements varargs.h.
Jan Wieck wrote:
But what if they decide to allow

    LOOP
        -- ...
        IF condition THEN
            EXIT;
    END LOOP;

at some point? There you'd get ambiguity.
ISTM this would be ambiguous in any case:

    IF condition1 THEN
        foo;
    IF condition2 THEN
        bar;
    END IF;
-Neil
Michael Fuhr wrote:
I'm getting "CONTINUE cannot be used outside a loop" errors even
though it's inside a loop. The error appears to be happening when
CONTINUE passes control to the beginning of the loop but there's
no more iterating to be done.
Woops, sorry for missing this. This should be
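A minimal reproduction along the lines Michael describes might look like this (the function name is hypothetical; per his report, the spurious error would surface when CONTINUE runs on the final iteration):

```sql
CREATE FUNCTION continue_repro() RETURNS integer AS $$
DECLARE
    total integer := 0;
BEGIN
    FOR i IN 1..3 LOOP
        total := total + i;
        -- On the last iteration, CONTINUE passes control back to the
        -- loop head with no more iterating left to be done.
        CONTINUE;
    END LOOP;
    RETURN total;
END;
$$ LANGUAGE plpgsql;

SELECT continue_repro();
```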
Andrew Dunstan wrote:
But this doesn't make it easier to use - users don't just include those who
write it. The antecedent language of these, Ada, from which this syntax
comes, was explicitly designed to be reader-friendly as opposed to
writer-friendly, and this is a part of that.
IMHO it is
In PL/PgSQL, END LOOP is used to terminate loop blocks, and END IF
is used to terminate IF blocks. This is needlessly verbose: we could
simply accept END in both cases without syntactic ambiguity. I'd like
to make this change, so that END can be used to terminate any kind of
block. There's no
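As a sketch of the proposal (the second form is proposed syntax, not something PL/pgSQL currently accepts):

```sql
-- current syntax: the block type must be repeated at the close
IF x > 0 THEN
    y := 1;
END IF;

-- proposed shorthand: a bare END closes any kind of block
IF x > 0 THEN
    y := 1;
END;
```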
Andrew Dunstan wrote:
I'm unkeen. I see no technical advantage - it's just a matter of taste.
There is no technical advantage to case insensitive keywords, or
dollar quoting, or a variety of other programming language features that
don't change functionality but exist to make using the
Tom Lane wrote:
The long-term point in my mind is that removing syntactical
redundancy always reduces the ability to detect errors or report
errors accurately
Lexical scoping is unambiguous in a language like PL/PgSQL. Since it is
simple to determine whether a given END matches an IF, LOOP,
Bruce Momjian wrote:
We have addressed all the open issues for 8.1 except for auto-vacuum,
which Alvaro is working on, so I think we are ready for a feature freeze
on July 1.
It would be nice to upgrade to autoconf 2.59 before the freeze (although
it would probably be okay to do this
Pavel Stehule wrote:
DECLARE excpt EXCEPTION [= 'SQLSTATE']
What would this default to? (i.e. if no '= SQLSTATE' is specified)
Rules:
o User can specify SQLSTATE only from class 'U1'
o Default values for SQLSTATE usr excpt are from class 'U0'
Can you elaborate on what you mean?
o
Alvaro Herrera wrote:
One issue I do have to deal with right now is how many autovacuum
processes do we want to be running. The current approach is to have one
autovacuum process. Two possible options would be to have one per
database, and one per tablespace. What do people think?
Why do we
Josh Berkus wrote:
Not that I don't agree that we need a less I/O intense alternative to VACUUM,
but it seems unlikely that we could actually do this, or even agree on a
spec, before feature freeze.
I don't see the need to rush anything in before the feature freeze.
Whereas integrated AV is
Qingqing Zhou wrote:
The start/stop routine is quite like Bgwriter. It requires PgStats to be
turned on.
Wasn't the plan to rewrite pg_autovacuum to use the FSM rather than the
stats collector?
-Neil
---(end of broadcast)---
TIP 3: if
Tom Lane wrote:
Hmm. Maybe we need something more like a lint check for tables, ie
run through and look for visibly corrupt data, such as obviously
impossible lengths for varlena fields.
Come to think of it, didn't someone already write something close to
this a few years ago?
Sounds like
Someone commented to me recently that they usually use psql's \x
expanded output mode, but find that it produces pretty illegible
results for psql slash commands such as \d. I can't really see a reason
you would _want_ expanded output mode for the result sets of psql
slash commands. Would
Tom Lane wrote:
That's been the intention for a very long time: everything in the core
tarball should be under the same license. Someone's got to do the
legwork of contacting the module authors involved to see if they're
willing to relicense ... and so far it just hasn't gotten to the top
of
vamsi krishna wrote:
Hi,
I want to know how CREATE TABLE (creating a
relation)
See DefineRelation() in backend/commands/tablecmds.c, and the routines
it calls.
I also want to know how Postgres parses the
input (CREATE TABLE) and how this is connected to the
CREATE TABLE source code.
On Wed, 2005-06-01 at 09:30 +0800, Christopher Kings-Lynne wrote:
This whole GiST concurrency thing really needs to be looked at :(
I spent some time looking at it toward the end of last year, but
unfortunately I didn't have enough time to devote to it to get a working
implementation (it is
On Wed, 2005-06-01 at 00:40 -0400, Alvaro Herrera wrote:
This doesn't work for COPY, but maybe for CREATE TABLE AS we could log
the fact that the command was executed, so the replayer could execute
the same command again.
Of course, this handwaving doesn't explain how the system in recovery
On Wed, 2005-03-23 at 10:04 -0500, Tom Lane wrote:
I think last night's discussion makes it crystal-clear why I felt that
this hasn't been sufficiently thought through. Please revert until the
discussion comes to a conclusion.
Are there any remaining objections to reapplying this patch?
The
Tom Lane wrote:
Because (a) it needs all the same arguments
Well, it needs the Trigger that we're in the process of queueing, the
old tuple, the new tuple, and the updated relation. It doesn't need the
rest of the content of TriggerData. trigger.c has to manually construct
a TriggerData to
I spent a little while looking into a performance issue with a large
UPDATE on a table with foreign keys. A few questions:
(1) When a PK table is updated, we skip firing the per-row UPDATE RI
triggers if none of the referenced columns in the PK table have been
modified. However, AFAICS we do
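For concreteness, a hypothetical schema in which that skip applies:

```sql
CREATE TABLE pk (id integer PRIMARY KEY, note text);
CREATE TABLE fk (pk_id integer REFERENCES pk (id));

-- "id", the referenced column, is not modified here, so the per-row
-- UPDATE RI triggers fired for each row of "pk" have no real work to do
UPDATE pk SET note = 'updated';
```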
Stephan Szabo wrote:
Are you sure? RI_FKey_Check seems to have a section on
TRIGGER_FIRED_BY_UPDATE which seems to check if the keys are equal if the
old row wasn't part of this transaction.
Well, regardless of how RI_FKey_Check() itself works, ISTM there is no
need to enqueue the RI trigger
Tom Lane wrote:
But the check could incorporate the same transaction ID test already
in use. I think Neil is right that it'd be a win to apply the test
before enqueueing the trigger instead of after.
Speaking of which, does anyone see a reason why RI_FKey_keyequal_upd()
is implemented as a
Tom Lane wrote:
Dunno. Depending on such a thing would require depending on a new flex
version, and seeing that the flex guys haven't put out a new release
since the badly broken 2.5.31 more than 2 years ago, I wouldn't hold
my breath waiting for one we can use.
It should be easy enough to
Bruce Momjian wrote:
We considered putting XML in psql or libpq in the past, but the problem
is that interfaces like jdbc couldn't take advantage of it.
Well, you could implement it as a C UDF and use SPI. Or write it as a C
client library, and use JNI. Or just provide a Java implementation --
Sergey Ten wrote:
We think that putting it in the backend will make access from other
components easier.
In what way?
It seems to me that this can be done just as easily in a client
application / library, without cluttering the backend with yet another
COPY output format. It would also avoid the
Tom Lane wrote:
We did do that (not very rigorously) during the 7.4 release cycle.
I'm not sure why we fell out of the habit again for 8.0. It seems
like a reasonable idea to me.
In the past I have suggested incrementally maintaining release.sgml (or
some plaintext version of it), rather than
Brendan Jurd wrote:
What's the basis of this objection to a web-based dev management
system?
Beyond the fact that the core developers want to stick to email, I think
there is a good reason that we should stick primarily to email for
project management: Bugzilla and similar systems are point to point, whereas
Joshua D. Drake wrote:
You can even respond to specific messages within the thread instead of
just a top down (one email after the other).
Well, that seems pretty fundamental...
But the point is that the current system works well;
Well does it though? I am not saying it is bad, well yes I am ;).
Sergey Ten wrote:
After a careful consideration we decided to
- put XML implementation in the backend
What advantage does putting the XML output mode in the backend provide?
-Neil
Jeffrey Baker wrote:
Would you take a patch that retained the optimized executions of plans
returning 1 tuple and also fixed the random heap problem?
Can you elaborate on what you're proposing? Obviously sorted b+-tree
output is important for a lot more than just min()/max(). I don't see an
David Fetter wrote:
EXECUTE IMMEDIATE $$
function body here
$$
LANGUAGE plfoo;
Seems like a lot of unnecessary syntax for something that would be
manually used by a lot of DBAs. Also, this is unrelated to normal
EXECUTE, or the EXECUTE IMMEDIATE defined by the standard, so I'm not
sure it's
Oleg Bartunov wrote:
I just talked with AMD Russia and they could provide almost
any hardware for testing, so I'm looking for a test suite for PostgreSQL.
Do we have some sort of official tests?
I guess the regression tests would be the closest to an official set of
tests, but they obviously aren't perfect.
Tom Lane wrote:
Yeah, we will. Please file a bugzilla entry for this though --- I
concur that it is a linker bug.
Okay, patch reverted. The RH bug is here:
https://bugzilla.redhat.com/bugzilla/show_bug.cgi?id=157126
-Neil
Simon Riggs wrote:
I support Andrew's comment, though might reword it to:
"Don't enable anything that gives users programmable features or user
exits by default."
Users can already define SQL functions by default, which certainly
provides programmable features. I'm not quite sure what you mean by
Andrew Sullivan wrote:
This is not really analogous, because those are already on
Which is my point: you're suggesting we retrofit a security policy onto
PG that does not apply to the vast majority of the base system -- and
that if applied would require fundamental changes.
Indeed. But that
Mike Mascari wrote:
People who use views to achieve row security, which is a rather common
paradigm, cannot allow users to create functions with side effects.
Can you elaborate? I'm not sure I follow you.
(I'll note anyway that (1) SQL functions can have side effects: CREATE
FUNCTION foo()
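A hypothetical illustration of a SQL-language function with side effects (table and function names invented for the example):

```sql
-- a SQL-language function whose evaluation modifies data
CREATE FUNCTION delete_all() RETURNS integer AS
    'DELETE FROM some_table; SELECT 1;'
LANGUAGE sql;

SELECT delete_all();  -- merely calling it deletes rows
```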
Mike Mascari wrote:
Correct, as the vulnerability exists within the 'SQL' language as well.
The only difference is that enabling plpgsql by default changes it from
a leak to a full blown flood.
How does it make any difference at all?
-Neil
The recent ld --as-needed patch seems to break psql on FC3: it fails to
start with symbol lookup error: /usr/lib64/libreadline.so.4: undefined
symbol: BC. You can see this on the FC3 build farm machine:
http://www.pgbuildfarm.org/cgi-bin/show_log.pl?nm=viper&dt=2005-05-06%2004:47:01
My current
Tom Lane wrote:
The denial of service risk in particular (whether intentional or
accidental) goes way up.
Does it really go way up? A malicious user who can execute SQL can DOS
the database trivially. Doing the (non-trivial) infrastructure work to
fix that is probably a good idea, but I don't
Andrew Sullivan wrote:
Sure it is. Don't enable anything you don't need, is the first
security rule. Everything is turned off by default. If you want it,
enable it.
So would you have us disable all the non-essential builtin functions?
(Many of which have had security problems in the past.)
Is there a good reason that pl/pgsql is not installed in databases by
default?
I think it should be. pl/pgsql is widely used, and having it installed
by default would be one less hurdle for newbies to overcome when
learning PostgreSQL. It would also make it easier to distribute
applications
Christopher Kings-Lynne wrote:
Problem is people restoring dumps that have the plpgsql create language,
etc. commands in them.
It should be possible to ignore those commands, and possibly issue a
warning. It's a bit ugly, but at least we can detect this situation
pretty unambiguously.
-Neil
Josh Berkus wrote:
The only one I can think of is security, which is pretty weak -- we've never
had a plpgsql security issue that I know of.
Well, no -- for instance,
http://cve.mitre.org/cgi-bin/cvename.cgi?name=CAN-2005-0245
http://cve.mitre.org/cgi-bin/cvename.cgi?name=CAN-2005-0247
But I
Dag-Erling Smørgrav wrote:
It doesn't stress the system anywhere near enough to reveal bugs in,
say, the shared memory or semaphore code.
I agree -- I think we definitely need more tests for the concurrent
behavior of the system.
-Neil
Tom Lane wrote:
#3 Defend against client holding locks unreasonably long, even though
not idle
I can't get too excited about this case. If the client is malicious,
this feature is surely insufficient to stop them from consuming a lot of
resources (for example, they could easily drop and
Thomas Hallgren wrote:
Tom Lane wrote:
Furthermore, we have never promised ABI-level compatibility across
versions inside the backend, and we are quite unlikely to make such
a promise in the foreseeable future.
I know that no promises have been made, but PostgreSQL is improved every
day and this
Marc G. Fournier wrote:
Agreed ... if someone can make the project, I can move the CVS files
over ... does anyone know who is currently maintaining it though?
A little research would reveal:
% head contrib/dbmirror/README.dbmirror
DBMirror - PostgreSQL Database Mirroring
Tzahi Fadida wrote:
I think the solution can be either changing the FETCH_ALL to
INT_MAX or changing the interface parameter count and subsequent usages
to long.
I think changing SPI_cursor_fetch() and SPI_cursor_move() to take a
long for the count parameter is the right fix for HEAD. It would
Neil Conway wrote:
I think changing SPI_cursor_fetch() and SPI_cursor_move() to take a
long for the count parameter is the right fix for HEAD.
Attached is a patch that implements this. A bunch of functions had to be
updated: SPI_execute(), SPI_execute_snapshot(), SPI_exec(), SPI_execp
Thomas Hallgren wrote:
Since both int and long are types whose sizes vary depending on
platform, and since the SPI protocol often interfaces with other
languages where the sizes are fixed
ISTM there are no languages where the sizes are fixed. In this
context, int and long are C and C++
Thomas Hallgren wrote:
What I meant was that SPI will interface with languages where there is
no correspondence to a type whose size varies depending on platform, and
that it therefore would be better to choose a type whose size will not vary.
My point is that since they are different types, the
[EMAIL PROTECTED] wrote:
statement_timeout is not a solution if many processes are
waiting on the resource.
Why not?
I think the only problem with using statement_timeout for this purpose
is that the client connection might die during a long-running
transaction at a point when no statement is
Oliver Jowett wrote:
I raised this a while back on -hackers:
http://archives.postgresql.org/pgsql-hackers/2005-02/msg00397.php
but did not get much feedback.
Perhaps you can interpret silence as consent? :)
Does anyone have comments on that email?
I wouldn't be opposed to it. It would be
Tom Lane wrote:
We would? Why? Please provide a motivation that justifies the
considerably higher cost to make it count that way, as opposed to
time-since-BEGIN.
The specific scenario this feature is intended to resolve is
idle-in-transaction backends holding on to resources while the network
Alvaro Herrera wrote:
BTW, why not get rid of src/corba?
Good question; I'll remove it from HEAD tomorrow, barring any objections.
-Neil
Christopher Kings-Lynne wrote:
What's the point of keeping such backend development discussion separate
from the -hackers list? It's always been a mistake in the past...
Yeah, it struck me as a bad idea as well.
-Neil
Josh Berkus wrote:
Oh, and incidentally, can I use the same database files for 8.0.2 and 8.1cvs
3/10/05?
No.
-Neil
Tom Lane wrote:
Specifically, I'm imagining that we could convert
SELECT min(x), max(y) FROM tab WHERE ...
into sub-selects in a one-row outer query:
SELECT (SELECT x FROM tab WHERE ... ORDER BY x LIMIT 1),
(SELECT y FROM tab WHERE ... ORDER BY y DESC LIMIT 1);
Does
Tom Lane wrote:
All that this optimization might do is to further cut the fraction of
table rows at which the volatile function actually gets checked. So
I'm not seeing that it would break any code that worked reliably before.
Hmm; what about
SELECT min(x), min(x) FROM tab WHERE random()
Neil Conway wrote:
I'd like to make add_missing_from=false the default for 8.1. Any
objections?
FYI, I've applied a patch that makes this change to CVS HEAD.
-Neil
Dave Page wrote:
Magnus I have been looking out for the official 8.0.2 beta
announcement before we announce the win32 binaries that were built last
week. Did we miss it (I can't see anything in the archives), or have
plans changed?
http://archives.postgresql.org/pgsql-general/2005-03/msg01311.php
Christopher Kings-Lynne wrote:
I think he has a really excellent point. It should log the parameters
as well.
neilc=# prepare foo(int, int) as select $1 + $2;
PREPARE
neilc=# execute foo(5, 10);
...
neilc=# execute foo(15, 20);
...
% tail /usr/local/pgsql/postmaster.log
LOG: statement: prepare
Oliver Jowett wrote:
Query-level EXECUTE is logged, but Bind/Execute via the V3 extended
query protocol (which is what the JDBC driver does) isn't.
Ah, I see. Yes, that certainly needs to be fixed.
-Neil
Tom Lane wrote:
Huh? DELETE hasn't got a tlist to transform ...
Yeah -- on looking closer, the patch copied and pasted a bunch of tlist
transformation code from UPDATE, but AFAICS there is no need for it.
-Neil
[ CC'ing hackers to see if anyone else wants to weigh in ]
Tom Lane wrote:
Of course, the entire reason this didn't happen years ago is that we
couldn't agree on what keyword to use... you sure you want to reopen
that discussion?
Sure, it doesn't seem too difficult to settle to me.
I don't think
Bruce Momjian wrote:
Wow, seems I lost that somehow.
BTW, I personally think it is fine for patch submitters to send ping
mails if your patch is not applied or reviewed within a reasonable
period of time -- this is standard practice among the GCC community,
for example. I certainly have a
Euler Taveira de Oliveira wrote:
Are you talking about DELETE FROM bar USING foo ? I submitted a patch
some months ago.
At a quick glance, looks pretty good. It needs regression tests, and I'd
also like to refactor the analyze.c additions to use the same code
UPDATE uses for the tlist
Is there a reason why the implementation of hash joins uses a separate
hash child node? AFAICS that node is only used in hash joins. Perhaps
the intent was to be able to provide a generic hashing capability that
could be used by any part of the executor that needs to hash tuples, but
AFAICS
Tom Lane wrote:
One small objection is that we'd lose the ability to separately display
the time spent building the hash table in EXPLAIN ANALYZE output. It's
probably not super important, but might be a reason to keep two plan
nodes in the tree.
Hmm, true. Perhaps then just hacking the hash node
Alvaro Herrera wrote:
On Mon, Mar 28, 2005 at 12:27:18PM +0500, imad wrote:
I want to know is there any way to execute an anonymous PL/pgSQL block
in PostgreSQL.
No, there isn't.
It might be possible to implement at least some of this functionality
entirely in the client. So:
BLOCK;
/* your
Oleg Bartunov wrote:
oh, no. I need 2 numbers only :) What does \timing in psql do?
Or is it just a wrapper around the system 'time'?
It computes wall-clock time via gettimeofday(), and is implemented
entirely on the client side.
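Usage looks roughly like this (the row count and timing figure are illustrative only):

```sql
neilc=# \timing
Timing is on.
neilc=# SELECT count(*) FROM pg_class;
 count
-------
   220
(1 row)

Time: 1.234 ms
```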
-Neil
Tom Lane wrote:
I think last night's discussion makes it crystal-clear why I felt that
this hasn't been sufficiently thought through. Please revert until the
discussion comes to a conclusion.
I applied the patch because I don't think it is very closely related to
the discussion. But if you'd
Tom Lane wrote:
I agree that we aren't MVCC with respect to DDL operations (and for this
purpose CLUSTER is DDL). Trying to become so would open a can of worms
far larger than it's worth, though, IMHO.
I think if we can come up with a reasonable way to handle all the
consequences, it's worth
Neil Conway wrote:
AndrewSN pointed out on IRC that ALTER TABLE ... ADD FOREIGN KEY and
CREATE TRIGGER both acquire AccessExclusiveLocks on the table they are
adding triggers to (the PK table, in the case of ALTER TABLE). Is this
necessary? I don't see why we can't allow SELECT queries
Christopher Kings-Lynne wrote:
If you want to be my friend forever, then fix CLUSTER so that it uses
sharerowexclusive as well :D
Hmm, this might be possible as well. During a CLUSTER, we currently
- lock the heap relation with AccessExclusiveLock
- lock the index we're clustering on with
Bruce Momjian wrote:
Certainly we need to upgrade to an exclusive table lock to replace the
heap table.
Well, we will be holding an ExclusiveLock on the heap relation
regardless. We replace the heap table by swapping its relfilenode, so
ISTM we needn't hold an AccessExclusiveLock.
Do we want to
Neil Conway wrote:
... except that when we rebuild the relation's indexes, we acquire an
AccessExclusiveLock on the index. This would introduce the risk of
deadlock. It seems necessary to acquire an AccessExclusiveLock when
rebuilding shared indexes, since we do the index build in-place, but I
Tom Lane wrote:
Utterly wrong. When you commit you will physically drop the old table.
If there is a SELECT running against the old table it will be quite
unhappy after that.
How can we drop the file at commit, given that a serializable
transaction's snapshot should still be able to see old
Tom Lane wrote:
It isn't 100% MVCC, I agree. But it works because system catalog
lookups are SnapshotNow, and so when another session comes and wants to
look at the table it will see the committed new version of the pg_class
row pointing at the new relfilenode file.
If by works, you mean provides
Tom Lane wrote:
This is presuming that we abandon the notion that system catalog
access use SnapshotNow. Which opens the question of what they should
use instead ... to which transaction snapshot isn't the answer,
because we have to be able to do system catalog accesses before
we've set the
Tom Lane wrote:
I don't think this has been adequately thought through at all ... but
at least make it ExclusiveLock. What is the use-case for allowing
SELECT FOR UPDATE in parallel with this?
Ok, patch applied -- I adjusted it to use ExclusiveLock, and fleshed out
some of the comments.
-Neil
AndrewSN pointed out on IRC that ALTER TABLE ... ADD FOREIGN KEY and
CREATE TRIGGER both acquire AccessExclusiveLocks on the table they are
adding triggers to (the PK table, in the case of ALTER TABLE). Is this
necessary? I don't see why we can't allow SELECT queries on the table to
proceed
Neil Conway wrote:
AndrewSN pointed out on IRC that ALTER TABLE ... ADD FOREIGN KEY and
CREATE TRIGGER both acquire AccessExclusiveLocks on the table they are
adding triggers to (the PK table, in the case of ALTER TABLE). Is this
necessary? I don't see why we can't allow SELECT queries
Tom Lane wrote:
I'd go with PlannerState. QueryState for some reason sounds more like
execution-time state.
Well, not to me :) It just makes sense to me to think of QueryState as
the working state associated with a Query. Not sure it makes a big
difference, though.
Pulling the planner internal stuff
Tom Lane wrote:
That's a bit nasty. I'm fairly sure that I added in_info_list to the
walker recursion because I had to; I don't recall the exact scenario,
but I think it needs to be possible to reassign relation numbers
within that data structure if we are doing it elsewhere in a query
tree.
It
Oliver Jowett wrote:
What happens if (for example) DateStyle changes between the two parses?
From my original email:
This is the common case of a more general problem: a query plan depends
on various parts of the environment at plan-creation time. That
environment includes the definitions of
Tom Lane wrote:
It is well defined, because we insist that the gram.y transformation not
depend on any changeable state.
That's my point -- whether we begin from the query string or the raw
parsetree shouldn't make a difference. By not well-defined, I meant that
if the user is changing GUC
I've been taking a look at how to stop the planner from scribbling on
its input. This is my first modification of any significance to the
planner, so don't hesitate to tell me what I've gotten wrong :)
I think the planner makes two kinds of modifications to the input Query:
(a) rewriting of
Neil Conway wrote:
(b) should be pretty easy to solve; we can create a per-Query PlanState
struct that contains this information, as well as holding a pointer to
the Query (and perhaps the in-construct Plan tree).
I just noticed that there is a `PlanState' node in the executor, of all
places
Qingqing Zhou wrote:
So is this change preparation work for caching query plans? Like
cleaning up the plans so they can be shared?
Yeah, it is somewhat related to the centralized plan caching module that
Tom and I have been discussing in the cached plan invalidation thread.
When a cached
Tom Lane wrote:
I don't believe there is any very significant amount of planner work
that is completely independent of any external database object. For
that matter, even the rewriter needs to be rerun when any views or
defaults change in the query. And for that matter, even the parse
analysis
Patch from Koju
Iijima, reviewed by Neil Conway. Catalog version number bumped,
regression tests updated.
Yes, as well as 4 other patches that have bumped the catversion number.
I think we are well past the point at which an 8.1 without an initdb
would be a plausible option (barring
Neil Conway wrote:
Do we want to share plans between call sites?
After thinking about this a little more, I think the answer is no --
it doesn't really buy us much, and introduces some extra complications
(e.g. resource management).
BTW, it's quite annoying that the planner scribbles on its
Tom Lane wrote:
Um. Shouldn't that whole file be #ifndef EXEC_BACKEND?
Woops, sorry about that.
We can't make the file #ifndef EXEC_BACKEND since fork_process() is used
by the Unix implementation of internal_forkexec(), but #ifndef WIN32
should work. I've applied the attached patch to HEAD.
[EMAIL PROTECTED] wrote:
The point is that this *is* silly, but I am at a loss to understand why it
isn't a no-brainer to change. Why is there a fight over a trivial change
which will ensure that PostgreSQL aligns to the documented behavior of
open()
(Why characterise this as a fight, rather than
Bruce Momjian wrote:
One idea would be to record if the function uses non-temp tables, temp
tables, or both, and invalidate based on the type of table being
invalidated, rather than the table name itself. I can imagine this
hurting temp table caching, but at least functions using regular tables
Qingqing Zhou wrote:
Second (as Tom says), some changes can hardly be traced. For example, we
only use function A. But function A calls function B, and function B
calls function C. When C changes, how do we know that we should worry
about our plan?
I don't see that this is a major problem. If a plan
Oliver Jowett wrote:
Does this mean that clients that use PREPARE/Parse need to handle plan
invalidated as a possible response to EXECUTE/Bind, or will the backend
keep the query string / parse tree around and replan on next execution?
The latter -- the client won't be aware that replanning took