I'm still unclear on exactly what your needs are. In the first place,
are you expecting to obtain data from arbitrary SELECT statements, or
only from statements of the form SELECT * FROM single_table? You've
also been unclear as to whether you want transparency of views (ie,
does a
2. Error out only if a backwards fetch is actually attempted on a plan
tree that can't handle it (which could only happen if SCROLL wasn't
given). This is efficient and flexible, but it exposes implementation
details to the user, in that whether an error occurs will depend on
which plan the
FETCH LAST should return only the last row.
FETCH RELATIVE m should return a row after skipping
m rows if we follow the SQL standard and so the current
implementation of FETCH RELATIVE is broken.
Yes, the syntax could probably be
FETCH [n] RELATIVE m
to keep the
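To make the standard's semantics concrete, here is a rough sketch using a hypothetical in-memory cursor (not PostgreSQL's executor): FETCH RELATIVE m moves m rows from the current position and returns the row it lands on, and FETCH LAST returns only the final row.

```python
# Hypothetical in-memory cursor illustrating SQL-standard FETCH semantics.
class Cursor:
    def __init__(self, rows):
        self.rows = rows
        self.pos = 0          # 0 = positioned before the first row

    def fetch_relative(self, m):
        """Move m rows from the current position and return that row."""
        self.pos += m
        if 1 <= self.pos <= len(self.rows):
            return self.rows[self.pos - 1]
        return None           # moved off either end of the result set

    def fetch_last(self):
        """Position on, and return, only the final row."""
        self.pos = len(self.rows)
        return self.rows[-1] if self.rows else None

cur = Cursor(['a', 'b', 'c', 'd'])
print(cur.fetch_relative(2))  # 'b': row 2 relative to the start position
print(cur.fetch_last())       # 'd'
```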
Hmm ... now that's an interesting thought. So the input converter would
actively strip trailing blanks, output would add them back,
But how would the output know how many to put back?
The output routine would need access to the column typmod. Which it
would have, in simple SELECT
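The strip-on-input / pad-on-output idea could be sketched like this, treating "typmod" as simply the declared width n (the real catalog encoding of typmod differs; this shows only the concept):

```python
# Sketch of the strip-on-input / pad-on-output idea for char(n).
def char_input(value: str) -> str:
    """Input converter: store the value with trailing blanks stripped."""
    return value.rstrip(' ')

def char_output(stored: str, typmod: int) -> str:
    """Output routine: pad back to the declared width using the typmod."""
    return stored.ljust(typmod)

stored = char_input('abc  ')          # stored compactly as 'abc'
print(repr(char_output(stored, 5)))   # 'abc  ': padded back to 5 chars
```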
Is it planned to support cursors outside a transaction in 7.4?
I do not think we will allow cross-transaction cursors ever.
What would
it mean to have a cross-transaction cursor, anyway? Does it show a
frozen snapshot as of the time it was opened? The usefulness of that
seems awfully low in
The problem is that when I want to change a view (or function) with a lot
of dependencies, I must drop and recreate all dependent views (or
functions) - and I only want to add one column to the view.
I don't know whether a solution for that is hard.
I do not see how adding a column to a view would invalidate
dependent
We received a query through the Advocacy site about whether we support
AIX 5.1 or not, so I am trying to find out.
It should work. Andreas just submitted a port confirmation on AIX
4.3.2 ...
Do you feel there's anyone around that would be able to give a
definite yes or no?
I don't have AIX 5 here, so cannot test, sorry.
But yes, it should definitely work.
There is a known possible performance improvement for concurrent sessions on
multiprocessor AIX machines. The now deprecated cs(3) used for the AIX TAS
implementation should be replaced with
mydb=# create index str_idx_url on str(url) where url is not null;
CREATE
mydb=# analyze str;
ANALYZE
mydb=# EXPLAIN ANALYZE select id from str where url='foobar';
NOTICE: QUERY PLAN:
Seq Scan on str (cost=0.00..91.05 rows=3 width=4) (actual
You can try an index like:
create index
My suspicion falls on the very-recently-added awk calls. Try changing
(echo "SET autocommit TO 'on';"; awk 'BEGIN {printf "\\set ECHO all\n"}'; \
 cat "$inputdir/sql/$1.sql") |
Why use awk for this at all ? and not:
echo \\set ECHO all
??
Andreas
Are we ready for RC1 yet?
I think so. The NO_MKTIME_BEFORE_1970 issue was bothering me, but I
feel that's resolved now. (It'd be nice to hear a crosscheck from
some AIX users though...)
abstime, tinterval and horology fail on AIX.
The rest is now working (AIX 4.3.2 xlc 5.0.0.2).
I am
I have removed the NO_MKTIME_BEFORE_1970 symbol from irix5.h,
rebuilt 7.3b2, and reran the regression. The three time tests
(tinterval, horology, abstime) now match the Solaris expected files.
I checked the timezone files, and the system does not appear to
have savings time
Tom Lane writes:
I think the problem is more accurately described thus: Flex generated
files include stdio.h before postgres.h due to the way it lays out the
code in the output. stdio.h does something which prevents switching to
the large file model later on in postgres.h. (This
Maybe it makes sense that in the proposal below the parameter
EXTRA_DIGITS could be SIGNIFICANT_DIGITS with a default value
of 15 and maximum 18.
It's more 'documentable' and maybe easier to understand in general.
Yes, agreed (or double_significant_digits or format_double_digits ?),
but
The problem with flex is that the generated C file does #include unistd.h
before we #include postgres.h.
In this situation _LARGE_FILES is not defined for unistd.h and unistd.h
chooses to define _LARGE_FILE_API, those two are not compatible.
Yeah. AFAICS the only way around this is to avoid doing any I/O
operations in the flex-generated files. Fortunately,
that's not much
of a restriction.
Unfortunately I do not think that is sufficient, since the problem is already
at the #include level. The compiler barfs on the
The question is *which* seek APIs we need to support. Are there any
besides fseeko() and fgetpos()?
On AIX we have
int fseeko64 (FILE* Stream, off64_t Offset, int Whence);
which is intended for large file access for programs that do NOT
#define _LARGE_FILES
It is
The idea is not to have it across multiple backends and to have it in
sync with the tables in the database. This is not the point.
My problem is that I have seen many performance critical applications
sending just a few complex queries to the server. The problem is: If you
have many
What I just committed uses your idea of auto-committing TRUNCATE et al,
but now that I review the thread I think that everyone else thought that
that was a dangerous idea. How do you feel about simply throwing an error
in autocommit-off mode, instead? (At least it's a localized
change
Can the magic be, that kaio directly writes from user space memory to the
disk ?
This makes more assumptions about the disk drive's behavior than I think
are justified...
No, no assumption about the drive, only about the kaio implementation, namely, that
the kaio implementation reads the
ISTM aio_write only improves the picture if there's some magic in-kernel
processing that makes this same kind of judgment as to when to issue the
ganged write for real, and is able to do it on time because it's in
the kernel. I haven't heard anything to make me think that that feature
Keep in mind that we support platforms without O_DSYNC. I am not
sure whether there are any that don't have O_SYNC either, but I am
fairly sure that we measured O_SYNC to be slower than fsync()s on
some platforms.
This measurement is quite understandable, since the current software
I'd give you the first and third of those. As Andrew noted, the
argument that it's more standard-compliant is not very solid.
The standard doesn't say anything about transactions in this regard.
Yes, it says statement.
Note also, that a typical SELECT only session would not advance
... most file systems can't process fsync's
simultaneous with other writes, so those writes block because the file
system grabs its own internal locks.
Oh? That would be a serious problem, but I've never heard that asserted
before. Please provide some evidence.
On a filesystem
Hmmm ... if you were willing to dedicate a half meg or meg of shared
memory for WAL buffers, that's doable.
Yup, configuring Informix to three 2 Mb buffers (LOGBUF 2048) here.
However, this would only be a win if you had few and large transactions.
Any COMMIT will force a write of
Attached is a patch to fix the mb linking problems on AIX. As a nice side effect
it reduces the duplicate symbol warnings to linking libpq.so and libecpg.so
(all shlibs that are not postmaster loadable modules).
Can you explain the method behind your patch? Have you tried -bnogc?
and mb conversions (pg_ascii2mic and pg_mic2ascii not
found in the postmaster and not included from elsewhere)
shared libs on AIX need to be able to resolve all symbols at linkage time.
Those two symbols are in backend/utils/SUBSYS.o but not in the postgres
executable.
They
What could you recommend? Locking the table and selecting
max(invoice_id) wouldn't really be much faster, with max(invoice_id)
not using an index...
select invoice_id from table order by invoice_id desc limit 1;
should get you the maximum fast if you have a unique index on invoice_id.
configure somehow thinks it needs to #define _LARGE_FILES though, which
then clashes with pg_config.h's _LARGE_FILES. I think the test needs to
#include unistd.h .
_LARGE_FILES is defined because it's necessary to make off_t 64 bits. If
you disagree, please post compiler output.
Ah,
shared libs on AIX need to be able to resolve all symbols at linkage time.
Those two symbols are in backend/utils/SUBSYS.o but not in the postgres
executable.
My guess is that they are eliminated by the linker? Do they need an extern
declaration?
Further research proved that the AIX linker eliminates functions on a per
c file basis if none of them is referenced elsewhere (declared extern or not).
Thus it eliminates the whole conv.c file from the postgres executable since
those functions are only used by the conversion shared
Zeugswetter Andreas SB SD writes:
Note that if you write, say,
set numericcol = numericcol * 3.14159;
my proposal would do the right thing since the constant would be typed
as numeric to start with and would stay that way. To do what you want
with a float variable
PS: pg snapshot 09/11 does not compile on AIX (large files (don't want
_LARGE_FILES),
Please provide details.
On AIX we would only want to make the large file api visible (_LARGE_FILE_API)
which automatically gets defined when xlc is used with -qlonglong.
#ifdef _LARGE_FILE_API
extern
Yes, that is the case where the new behavior would imho not be good (but you
say spec compliant). I lose precision even though there is room to hold it.
Lose what precision? It seems silly to imagine that the product of
Have you seen my example ? If calculated in float4 the result of
1.01*1000.0-1000.0 would be 0.0, no ?
So? If you are storing one input as float4, then you cannot rationally
say that you know the result to better than 6 digits, because you don't
know the input
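The underlying point about float4 precision can be demonstrated by round-tripping a value through 32-bit storage (a Python sketch, not PostgreSQL code): float4 keeps only about 6-7 significant decimal digits, so a literal such as 1.01 is already perturbed the moment it is stored.

```python
import struct

def as_float4(x: float) -> float:
    """Round-trip a value through 32-bit float storage, as float4 would."""
    return struct.unpack('f', struct.pack('f', x))[0]

# 1.01 is not exactly representable in 32-bit binary floating point,
# so the stored float4 value differs from the float8 value:
print(as_float4(1.01) == 1.01)   # False
print(as_float4(1.01))           # the nearest representable float4 value
```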
What I will do instead is adjust parse_coerce.c so that a
length-coercion function can have either of the signatures
foo(foo,int4) returns foo
or
foo(foo,int4,bool) returns foo
and then modify the above-mentioned length coercion functions to provide
the desired behavior. This
For numbers there is probably only the solution to invent an
anynumber generic type.
Actually, I had been toying with the notion of doing the following:
1. A numeric literal is initially typed as the smallest type that will
hold it in the series int2, int4, int8, numeric (notice NOT
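Step 1 of that proposal might look like this in outline (a sketch of the literal-typing rule only, not the real parser logic):

```python
# Pick the smallest type in the series int2, int4, int8, numeric
# that can hold a numeric literal.
INT2_MAX, INT4_MAX, INT8_MAX = 2**15 - 1, 2**31 - 1, 2**63 - 1

def literal_type(text: str) -> str:
    try:
        v = int(text)
    except ValueError:
        return 'numeric'          # fractional (or malformed) literals
    if -INT2_MAX - 1 <= v <= INT2_MAX:
        return 'int2'
    if -INT4_MAX - 1 <= v <= INT4_MAX:
        return 'int4'
    if -INT8_MAX - 1 <= v <= INT8_MAX:
        return 'int8'
    return 'numeric'              # integer too large even for int8

print(literal_type('42'))         # int2
print(literal_type('100000'))     # int4
print(literal_type('3.14159'))    # numeric
```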
I think there is some confusion here. The runtime checks Andreas was
talking about was allowing a double of 64.0 to cast to an int4 while
disallowing 64.1 from being cast to an int4 because it is not a whole
number.
Yes, and Tom's proposal for numbers is sufficient for constants, since
Sure it is. The float=int casts need to be made implicit, or we'll have
tons of problems like this.
Well, yeah. That did not seem to bother anyone last spring, when we
were discussing tightening the implicit-casting rules. Shall we
abandon all that work and go back to any available
I know this is not really related, but wouldn't the plan be to make
ecpg actually use the backend side execute ... now that it is available ?
Maybe I misunderstood something. Do you mean I could use the backend
PREPARE/EXECUTE to prepare and execute any statement I can
PREPARE/EXECUTE
We can revisit that decision if you like, but you must convince us that
it was wrong, not just say of course we should change it.
I am sorry, but at that time I did not have time for the discussion,
and now time is also very tight for me :-(
Four reasons I can give:
1. execute xx(...);
Here are the proposals for solving the "Return proper affected tuple
count from complex commands [return]" issue as seen on TODO.
Any comments? ... This is obviously open to voting and discussion.
We don't have a whole lot of freedom in this; this area is covered by the SQL
What is the difference
between a trigger, a rule and an instead rule from a business process
oriented point of view? I think there is none at all. They are just
different techniques to do one and the same, implement
business logic in the database system.
The difference is how other db's
Oh, this is bad news. The problem we have is that rules don't
distinguish the UPDATE on the underlying tables of the rule from other
updates that may appear in the query.
If we go with Tom's idea and total just UPDATE's, we will get the right
answer when there is only one UPDATE in the
I don't think we should add tuple counts from different commands, i.e.
adding UPDATE and DELETE counts just yields a totally meaningless
number.
Agreed.
I don't think there is any need/desire to add additional API routines to
handle multiple return values.
Yup.
Can I get some votes
could you please make a complete table of all
possible situations and the expected returns? With complete I mean
including all combinations of rules, triggers, deferred constraints and
the like. Or do you at least see now where in the discussion we got
stuck?
Imho only view rules (==
My guess is that I got this due to parallel running of a program making
bulk INSERTs/UPDATEs into that table. Mmm... I need a way to avoid the big
number of unused pages in such a case. LOCK TABLE?
Only UPDATEs and DELETEs (and rolled back INSERTs) cause unused pages.
The trick for other
Please correct me if I've got this wrong, but it appears from the SRF
API that a SRF cannot readily refer to the TupleDesc to which it is
expected to conform (i.e. the TupleDesc derived from the FROM clause of
an original SELECT statement) because that is held in the executor state
and
Could the index scan be made to
handle cases where the index tuple it was stopped on is gone?
Don't see how. With no equal keys, you could test each tuple you scan
over to see if it's the expected key; but that would slow things down
tremendously I fear. In any case it fails
What we really need is some mode on template1 that says, I am not
world-writable, but the admin hasn't made me world-non-writable, so I
will create new databases that are world-writable. Does that make
sense?
I think template1 public should be non-world-writable by default, and pass that
Ok, now I vote, that you don't implement any and use opaque.
I don't think we want two types that do the same thing.
Is it that you like the name any more than opaque ?
No, it's that I want to deprecate opaque so that we can catch old
uses that should not be there anymore. If you look
Hmm, any would sound like it is the same as opaque. Would any really be
all allowed types ? I think we would want to eliminate that
altogether.
Do you plan on eliminating the COUNT() aggregate, then?
Ah, you want it for aggbasetype in pg_aggregate, I did not
see that.
How could we
May be a plan could be to leave opaque, but throw a notice
when it is used in a create stmt, like:
NOTICE: the use of type OPAQUE should be avoided where possible
Right, that's exactly the plan.
Actually, I think we can tighten the use of OPAQUE quite a bit, too.
The only supported
In my paper I use C functions that take any tuple.
I do not yet see how I can do that without opaque if we don't
have a row but only a trigger type.
For that you would have to use any, at the moment. This would give
you the same amount of type safety you have with opaque, ie, none.
If we are going to delay beta, we should decide now, not at the end of
August, and the delay should be until the end of September. The big
question is whether we have enough material to warrant a delay.
I think the implicit casts todo should be addressed soon,
not sure there is enough time
For that you would have to use any, at the moment. This would give
you the same amount of type safety you have with opaque,
ie, none.
I would have to use some pg_proc magic to make any appear there,
since the plan was to not make it visible at the sql level, no ?
Huh? It'll be
I did not mean visible, I meant useable, like in
create function xx(any) returns text ...;
If that is possible, what is the difference to opaque ?
any will have the same behavior that opaque used to have, yes.
Ok, now I vote, that you don't implement any and use opaque.
I don't
with Perl and *using placeholders and bind values*, the application
developer does not have to worry about this. So, usually I don't check the
values in my applications (e.g. if only values between 1 and 5 are
allowed and under normal circumstances only these are possible), it's the
task of
The cash_out problem can't really be fixed until we do something about
subdividing type opaque into multiple pseudo-types with more carefully
defined meanings. cash_out is declared cash_out(opaque) which does not
really mean that it accepts any input type ... but one of the several
Zeugswetter Andreas SB SD writes:
Would it be possible to update the system tables, so that cash_out does not take
opaque but really takes type money ?
That is part of the solution, but only part: we have hundreds of
functions that take opaque because we don't
Zeugswetter Andreas SB SD writes:
Hard to say what is good for those names imho, don't like
anytype :-(
How about any? It's a reserved word per SQL99, I think.
I would actually stick to opaque in that case, already used in other db's.
I like cstring, void
Hard to say what is good for those names imho, don't like
anytype :-(
How about any? It's a reserved word per SQL99, I think.
I would actually stick to opaque in that case, already used in other db's.
I want to change the name because (a) we are changing the semantics,
(b) we
Seems with above you are not able to constrain what qualifies for a
supertable row, you would only be able to specify constraints that
apply to all its subtables.
Yes, that's the whole point. If I have a constraint on a table, I think
it should *never* be possible for that constraint to be violated. If a
subtable should not have constraint the supertable has, it shouldn't
inherit from the supertable.
If you want that, you simply need to
It's nonlocal constraints that are the problem, and here foreign keys
and UNIQUE constraints are certainly the canonical examples. Both of
these would be largely solved with table-spanning indexes I think.
Note that the other obvious way to solve this would be to store all of
the
OK, seeing as no one voted, and only Tom and I objected originally, we
will keep the code as Thomas has applied it, namely that PGXLOG/-X is
recognized by initdb, postmaster, postgres, and pg_ctl.
We will? It looks to me like Thomas lost the vote 2-to-1.
Unless there are more votes,
Looking at how to deal with this, is the following going to be
portable?:
in pg_dump/Makefile:
CFLAGS += -D_LARGEFILE_SOURCE -D_FILE_OFFSET_BITS=64
in pg_dump.h:
#ifdef _LARGEFILE_SOURCE
#define FSEEK fseeko
#define FTELL ftello
#define
Hmm. I think this consideration boils down to whether the WHERE clause
can give different results for rows that appear equal under the rules of
UNION/EXCEPT/INTERSECT. If it gives the same result for any two such
rows, then it's safe to push down; otherwise not.
It's not too difficult
NAMEDATALEN - disk/performance penalty for increase, 64, 128?
FUNC_MAX_ARGS - disk/performance penalty for increase, 24, 32?
At the moment I don't see a lot of solid evidence that increasing
NAMEDATALEN has any performance penalty. Someone reported about
a 10% slowdown on pgbench
But the message I was replying to was a similar union query, and I was
thinking that that person might be having a similar initial intuitive
reaction: "well, it looks kinda the same". I just wanted to note that
you need to check this stuff with EXPLAIN, rather than blindly assuming
you know
Seems more accurate, but actually you may also have two or more
conditional rules that cover all possibilities if taken together.
Maybe
ERROR: Cannot insert into a view
You need an ON INSERT DO INSTEAD rule that matches your INSERT
Which covers both cases.
Actually not:
Since I see a huge benefit in allowing conditional rules for a view,
I think it is worth finding a solution.
We do allow conditional rules for a view. You just have to write an
unconditional one too (which can be merely DO INSTEAD NOTHING).
Hmm, but you cannot then throw an error, but
Now when creating a function you can do:
CREATE FUNCTION foo(text) RETURNS setof RECORD ...
And when using it you can do, e.g.:
SELECT * from foo(sqlstmt) AS (f1 int, f2 text, f3 timestamp)
Why is there the requirement to declare the type at SELECT time at all?
Why not
Tom wrote:
Peter Eisentraut writes:
That doesn't quite work, because then no ordinary user can define a cast
from some built-in type to his own type. What I'm thinking about is to
implement the USAGE privilege on types, and then you need to have that to
be allowed to
Bruce wrote:
Actual error code numbers/letters. I think the new elog levels will
help with this. We have to decide if we want error numbers, or some
mnemonic like NOATTR or CONSTVIOL. I suggest the latter.
Since there is an actual standard for error codes, I would strongly suggest
to
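For illustration, a few SQLSTATE values from that standard error-code scheme, with the condition names PostgreSQL later attached to them (the two leading characters give the error class; the specific names here are the PostgreSQL spellings, not part of the standard itself):

```python
# A handful of well-known SQLSTATE codes and their condition names.
SQLSTATE = {
    '23502': 'not_null_violation',
    '23503': 'foreign_key_violation',
    '23505': 'unique_violation',
    '22012': 'division_by_zero',
    '40001': 'serialization_failure',
}

def error_class(code: str) -> str:
    """The error class is the leading two characters of the SQLSTATE,
    e.g. '23' = integrity constraint violation."""
    return code[:2]

print(SQLSTATE['23505'], error_class('23505'))
```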
However, I'm not happy with the way dropped columns are renamed. I want to
give them a name that no-one would ever want to use as a legit column name.
I don't like this behaviour:
Yes, how about prepending a character that would usually need to be escaped.
I like Hannu's proposal with the
I guess the strangest part was that both a.foo = 'S' and b.foo = 'S' but
not a.foo=b.foo; (a.foo is varchar(5) , b.foo is char(5) )
I guess that the 'S' that b.foo gets compared to is converted to 'S'
before comparison but when comparing varchar(5) and char(5) they are
both compared
Has anyone studied how other DBMSs handle CHAR vs VARCHAR? Judging
from the number of questions we get on this point, I have to wonder
if we are not out of step with the way other systems do it.
Well, I already gave the Informix example, that compares them as equal.
(they obviously coerce
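The blank-padded comparison behavior being described can be sketched as follows (a hypothetical helper, not Informix's or PostgreSQL's actual code): a char(5) value carries trailing blanks up to the declared width, so a naive byte comparison against an unpadded varchar value fails unless the comparison ignores trailing blanks.

```python
def char_eq(a: str, b: str) -> bool:
    """Blank-padded comparison: ignore trailing blanks, as char-vs-char
    comparison (and Informix's char-vs-varchar behavior) does."""
    return a.rstrip(' ') == b.rstrip(' ')

a_foo = 'S'       # varchar(5): stored as given
b_foo = 'S    '   # char(5): blank-padded to the declared width
print(a_foo == b_foo)         # False: naive byte-wise comparison
print(char_eq(a_foo, b_foo))  # True: blank-padded semantics
```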
Changing data types probably won't appear. I don't know of anyone
working on it -- and it can be quite a complex issue to get a good
(resource friendly and transaction safe) version.
I'd be happy with a non-resource friendly and non-transaction-safe version
over not having
OK, so you do a tar backup of a file. While you are doing the tar,
certain 8k blocks are being modified in the file. There is no way to
know what blocks are modified as you are doing the tar, and in fact you
could read partial page writes during the tar.
No, I think all OS's (Unix and NT
As noted, one of the main problems is knowing where to begin
in the log. This can be handled by having backup processing
update the control file with the first lsn and log file
required. At the time of the backup, this information is or
can be made available. The control file can be
No, what I envisioned was a standalone dumper that can produce dump output
without having a backend at all. If this dumper knows about the various
binary formats, and knows how to get my data into a form I can then restore
reliably, I will be satisfied. If it can be easily automated so
Let me re-write it, and I'll post it in the next version. The section
dealt with what to do when you have a valid restored controlfile from a
backup system, which is in the DB_SHUTDOWNED state, and that points to a
valid shutdown/checkpoint record in the log; only the checkpoint record
Anyway, I am pretty sure that PostgreSQL is not the culprit here. As it
happens this project is back on the table for me so it is interesting that
your email popped up now. I just compiled the latest version of PostgreSQL
on my AIX system and it generated lots of errors and then
On Mon, June 03 Bruce wrote:
On Wed, May 08, 2002 at 06:47:46PM +0200, Zeugswetter SB SD Andreas wrote:
When we are talking about the places where you need double escaping
(once for parser, once for input function) to make it work, I would also
say that that is very cumbersome (not
FOR row IN select_query LOOP
statements
RETURN NEXT row;
END LOOP;
Informix has
RETURN x1, x2, x3 WITH RESUME;
This seems reasonable to me. PostgreSQL could also allow
return x with resume, where x is already a composite type.
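RETURN NEXT accumulates one row at a time into the function's result set, much like yield in a Python generator; as a loose analogy (the row shape and the doubling step here are made up for illustration):

```python
# PL/pgSQL's  FOR row IN query LOOP ... RETURN NEXT row; END LOOP;
# behaves much like a Python generator yielding one row per iteration.
def set_returning(rows):
    for row in rows:
        row = dict(row, doubled=row['n'] * 2)  # the "statements" in the loop
        yield row                              # RETURN NEXT row

result = list(set_returning([{'n': 1}, {'n': 2}]))
print(result)
```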
Andreas
This is the first time I have ever seen repeated pgbench runs without
substantial performance degradation. Not a bad result for a Friday
afternoon...
Congratulations :-) This sounds great!!!
Andreas
4. How exactly should a killed index tuple be marked on-disk? While there
is one free bit available in IndexTupleData.t_info, I would prefer to use
that bit to expand the index tuple size field to 14 bits instead of 13.
(This would allow btree index entries to be up to 10K when BLCKSZ is 32K,
The select(min) and select(max) took as long as the table scan to find
the count. It seems logical if a btree type index is available (such
as pk_cnx_ds_sis_bill_detl_tb) where the most significant bit of the
index is the column requested, it should be little more than a seek
first or seek
EXPLAIN
tour=# explain analyze select * from tours where
( operator_id in (2,3,4,5,7) and type_id = 4 ) or
( operator_id = 8 and type_id = 3);
NOTICE: QUERY PLAN:
Index Scan using type_idx, type_idx, type_idx, type_idx, type_idx, type_idx on tours
(cost=
The lines marked XXX are the ones that I enabled since yesterday, and
would like to disable again:
implicit | result  | input | prosrc
---------+---------+-------+--------
no       | varchar | int8  |
So the work that would need to be done is asking the driver to request the
timeout via BEGIN WORK TIMEOUT 5; getting the backend to parse that
request and set the alarm on each query in that transaction; getting the
Well imho that interpretation would be completely unobvious.
My first guess
Do we want the above syntax, or this syntax:
ALTER TABLE blah ALTER COLUMN col SET NOT NULL;
ALTER TABLE blah ALTER COLUMN col SET NULL;
My only objection to the second command is that it's plain wrong. You
don't set anything to NULL, so don't make the command look like it.
Imho it
Removing the special-case logic in ExecTypeFromTL yields
regression=# SELECT update_pg_pwd();
ERROR: getTypeOutputInfo: Cache lookup of type 0 failed
Wouldn't it be nice to make this a feature that allows
stored procedures (void update_pg_pwd ()) ? Correctly register
this function to not
I'd want it to error out on INSERT foo (bar.col), though ;-)
And on INSERT foo (bar.foo.col) as well.
Why accept the above at all? It seems much too error prone; I would either
accept table with schema or without schema, mixing both cases seems
unnecessarily confusing and error prone to me.
Tom Lane writes:
Zeugswetter Andreas SB SD writes:
Imho one of the biggest sources for problems is people creating new
indexes on populated tables when the rest of the db/table has badly
outdated statistics or even only default statistics in place.
In this situation