It also seems that, once you get it up and running, any worthwhile dev
management system is going to actually take less time / effort to
maintain than, say, maintaining manually concocted todo lists and
coordinating development via a mailing list.
Call me a normaliser, but even if the maintenance
Bruce Momjian wrote:
If people have GIST TODOs, please post them.
Concurrency :)
---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?
http://www.postgresql.org/docs/faq
Hi,
I haven't done any significant progress on that way because of lack of
free time.
Besides this, I recently changed jobs and I'm now working for MySQL.
I think it's not possible for me
I suppose you are running on some BSD variant? BSD is notorious for
promising more than it can deliver with respect to number of open files
per process. This is a kernel bug, not a Postgres bug.
You can adjust Postgres' max_files_per_process setting to compensate for
the kernel's lying about its
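For illustration, a sketch of the compensation described above; the numbers are hypothetical and depend on your actual kernel limits:

```
# postgresql.conf (example values only)
# Keep max_connections * max_files_per_process comfortably below the
# kernel's real file-descriptor limit (kern.maxfiles on FreeBSD),
# rather than trusting what the kernel claims it can give each process.
max_files_per_process = 200
```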
Tom mentioned that he had not had these security concerns raised before. From
my point of view I just have no idea about the level of information offered
to any given user and am scared to run PostgreSQL in an ISP shared
environment because of it. I am sure I can secure people from connecting
In 8.0, the de facto default encoding is no longer SQL_ASCII, so that problem
should go away over time. Certainly, making 7-bit ASCII the default encoding
is not an option.
You sure?
As lead phpPgAdmin developer, I'm officially in favour of them. The
main reason being all the extra fruit they have that shows database
size, etc.
As non-lead phpPgAdmin developer, I'd be against using them in phppgadmin.
(note this doesn't mean I am against them in pgsql itself)
Hehe, talk about
Hi,
A few days back the load increased on our database server to the point
where it could not get enough file handles. This caused the backends to
crash and get restarted, only to crash again, on and on.
We fixed it by bumping kern.maxfiles, but I was just wondering if this is
a scenario that
A few days back the load increased on our database server to the point
where it could not get enough file handles. This caused the backends to
crash and get restarted, only to crash again, on and on.
We fixed it by bumping kern.maxfiles, but I was just wondering if this is
a scenario that
I suppose you are running on some BSD variant? BSD is notorious for
promising more than it can deliver with respect to number of open files
per process. This is a kernel bug, not a Postgres bug.
Good guess. Freebsd 4.8 or so.
Chris
We are currently seeing a whole lot of complaints due to the fact that
8.0 tends to default to Unicode encoding in environments where previous
versions defaulted to SQL-ASCII. That says to me that a whole lot of
people were getting along just fine in SQL-ASCII, and therefore that
moving further
perhaps the CRC-32 routines could be written in in-line assembler
If you can do this, step right up. :-)
Best Regards, Simon Riggs
Surely there's open source code floating around somewhere?
Chris
Please check the web site version. Someone has already implemented
Allow COPY to optionally include column headings in the first line.
As far as XML, there has been discussion on where that should be done?
In the backend, libpq, or psql. It will need discussion on hackers. I
assume you have
I suppose that we can't change the semantics of SQL_ASCII without
backwards compatibility problems. I wonder if introducing a new encoding
that only allows 7-bit ascii, and making that the default, is the way to
go.
A while back I requested a new encoding that is '7BITASCII'. It would
be
FWIW, I don't see the issue as internal vs external at all. What's
bothering me is whether these views can be considered sufficiently
more stable and better designed than the physical system catalogs
to justify recommending that application designers should rely on
the views instead of the
Try selecting from pg_views to see if it exists, then if it does, drop it.
Chris
[EMAIL PROTECTED] wrote:
Hi,
I need a clarification on the below:
I need to check for the existence of a user-defined view named 'audit_vw',
and if it exists, then I need to delete it. Please help me to solve the
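Following the pg_views suggestion, a minimal sketch (on later versions one could also simply use DROP VIEW IF EXISTS):

```sql
-- Check whether the view exists...
SELECT 1 FROM pg_views WHERE viewname = 'audit_vw';
-- ...and if a row came back, drop it:
DROP VIEW audit_vw;
```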
As Dave already pointed out, serious admin tools will avoid views. We
have to deal with version specific issues anyway.
I don't see why phpPgAdmin would avoid using the views, unless some
serious randomness happened that we had to support. The unimaginable
craziness of currently trying to
Hmmm ... we argued about this. I was in favor of hiding the OIDs because OIDs
are not consistent after a database reload and names are. I can see your
point though; what do other people think?
Well phpPgAdmin is unable to use the pg_tables view, for instance,
because we have no way of
2. Almost all of the information that cannot fit will be useful to
other database systems as well, and should be suggested to the ANSI/ISO
committee. Since INFORMATION_SCHEMA is a very new idea (only two
adopters that I know of so far) I expect it will need to grow and
PostgreSQL could be one of
Is there a good reason that pl/pgsql is not installed in databases by
default?
I think it should be. pl/pgsql is widely used, and having it installed
by default would be one less hurdle for newbies to overcome when
learning PostgreSQL. It would also make it easier to distribute
applications
Quite, but in the meantime, a good benchmark should stress the system
enough to cause crashes, lockups or at least incorrect results if a
bug is introduced in the shared memory or semaphore code, and will
definitely reveal any slowdowns introduced by new code, so my question
is: where can I find a
Yup, and *everyone* with commit access has access to *everything* ... I
could introduce a 1 bit change to one of the kernel sources and there is
a chance that nobody would ever notice it ... and this includes (or, at
least, the last time I did any work) port committers ...
Using cvsacls could
Using cvsacls could deal with that particular problem. Take the PHP
project's 1500 committers, and how they can only modify particular files.
cvsacls? got a URL for that that I can read?
http://sourceforge.net/docman/display_doc.php?docid=772&group_id=1#top
Chris
Hi Evgen,
I just keep pinging this patch thread every once in a while to make sure
it doesn't get forgotten :)
How is the syncing with 8.1 CVS coming along?
Chris
Evgen Potemkin wrote:
Hi hackers!
I have done initial implementation of SQL99 WITH clause (attached).
It's now only for v7.3.4 and
Are there any regression tests or unit tests beyond 'make check', or
possibly benchmarks which not only measure performance but also verify
that the results are correct? I have patches which I want to test
under high load from multiple concurrent clients, so 'make check'
isn't enough. Google has
Well, if process A loses the connection to the client,
then the
transaction will be rolled back and other processes will
be able to
continue.
Never. The process just waits until it is killed or cancelled, for
example with an unplugged network cable, a crashed client machine,
or in case of a lost network
There is a fairly lengthy discussion going on right now on the bizgres
mailing list about this topic; if you're interested in helping out you
might want to join that list.
What's the point of keeping such backend development discussion separate
from the -hackers list? It's always been a mistake
And finally, we have a few companies working on features that they
eventually want merged back into the PostgreSQL codebase. That is a
very tricky process and usually goes badly unless the company seeks
community involvement from the start, including user interface,
implementation, and coding
I wonder if there's any use for an allow_large_objects = true/false GUC
parameter?
It'd be nice to be able to switch it off as part of site policy so that
the security holes in it aren't able to be exposed, plus you can
guarantee as the site admin that pg_dumpall will produce a complete
dump.
Security holes? Explain yourself please.
No ownership, and no permissions...
As for the latter point, ISTM the todo item is fix pg_dumpall more
than eliminate large objects. Certainly the fix isn't easy, but
that isn't an argument to cut and run.
I did have a plan to do this for 8.1, but so far
Hi,
Can you put a foreign key constraint on an array column that says that
each element of the array must match a primary key?
If not, is this a TODO perhaps?
Chris
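Absent such a constraint, one workaround is a trigger that checks every array element against the referenced primary key. A hypothetical sketch (table and column names are made up; unnest() arrived in later versions, so on the releases discussed here a plpgsql loop over array_lower()..array_upper() would be needed instead):

```sql
CREATE FUNCTION check_tag_ids() RETURNS trigger AS $$
BEGIN
    -- reject the row if any element of the array has no match in tags
    IF EXISTS (
        SELECT 1 FROM unnest(NEW.tag_ids) AS t(id)
        WHERE NOT EXISTS (SELECT 1 FROM tags WHERE tags.id = t.id)
    ) THEN
        RAISE EXCEPTION 'tag_ids contains a value not present in tags';
    END IF;
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER posts_tag_ids_fk
    BEFORE INSERT OR UPDATE ON posts
    FOR EACH ROW EXECUTE PROCEDURE check_tag_ids();
```

Unlike a real foreign key, this does not catch deletes on the referenced table; a companion trigger there would be needed for full enforcement.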
Should that sequence really stick around as an integer, numeric and text
field???
What are you unhappy about exactly? We expended a fair amount of sweat
to make it behave just like that ...
It's confused the odd IRC user (pgsql newbie). Seems like it breaks the
'serial type' illusion...
I
Christopher Kings-Lynne wrote:
I was thinking of writing a command line tool like 'pgtune' that looks at
the stats views and will generate SQL code for, or do automatically the
following:
* Dropping indices that are never used
* Creating appropriate indices to avoid large, expensive sequential scans
Christopher Kings-Lynne wrote:
Luckily, PG 8 is available for this. Do you have a short example?
No, and I think it should be in the manual as an example.
You will need to enter a loop that uses exception handling to detect
unique_violation.
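A sketch of that retry loop, the canonical insert-or-update pattern; the table "db" and its columns are hypothetical:

```sql
CREATE FUNCTION merge_db(k INT, data TEXT) RETURNS VOID AS $$
BEGIN
    LOOP
        -- first try to update the key
        UPDATE db SET b = data WHERE a = k;
        IF found THEN
            RETURN;
        END IF;
        -- not there, so try to insert the key; if someone else inserts
        -- the same key concurrently, we get a unique-key failure and
        -- loop back to retry the update
        BEGIN
            INSERT INTO db (a, b) VALUES (k, data);
            RETURN;
        EXCEPTION WHEN unique_violation THEN
            -- do nothing, and loop to try the UPDATE again
        END;
    END LOOP;
END;
$$ LANGUAGE plpgsql;
```

The exception block requires the savepoint machinery introduced in 8.0, which is why this only became possible there.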
Pursuant to an IRC discussion to which Dennis
Should that sequence really stick around as an integer, numeric and text
field???
test=# create table test (a serial);
NOTICE: CREATE TABLE will create implicit sequence test_a_seq for
serial column test.a
From a "people who call me" perspective, I am never asked about
inheritance. Most people don't even know it is there.
The requests I get are:
Just wondering, does anybody ask you about the excessive locking (and
deadlocking) on foreign keys? The business about being able to drop
users and
Hi guys,
After working on PHP for a few weeks, I see that what they do with their
interactive docs is have any comments posted get emailed to the docs
list. Do we do this?
That way, good comments are immediately integrated into the manual.
Chris
---(end of
I did 'gmake distclean' and rebuild and I still get the attached failures.
Chris
parallel group (13 tests): text name char varchar boolean oid int8 int2 float4
int4 float8 bit numeric
boolean ... ok
char ... ok
name ... ok
varchar
Do you build in a separate directory? I do and I do have problems when
the grammars (main or plpgsql) get updated -- not sure why the derived
files from bison and flex don't get rebuilt. I just delete them by
hand. (The build directory I just rm -fr as a whole).
Yours doesn't seem like a
It seems that's not much of a danger -- the interactive Postgres documentation
hardly gets any comments at all in the first place. It would be a big
improvement if there were some way to encourage many more comments.
Only link to the version with comments.
Chris
So some changing-oid operations like vacuum full, reindex, etc will not
affect these system catalogs?
Is it possible to cluster system tables?
Is PostgreSQL supposed to enforce a LATIN1/ISO-8859-1 encoding if that's
the database encoding?
AFAIK, there are no illegal characters in 8859-1, except \0 which we
do reject.
Hmmm...
It turns out I was confused by the developer who reported this issue.
Basically they have a requirement that
Given all the problems with unwanted recoding I've seen, I think such an
encoding should be the default instead of unchecked-8-bits SQL_ASCII :-(
I agree, but that would be a nightmare of backwards compatibility :D
Chris
Is PostgreSQL supposed to enforce a LATIN1/ISO-8859-1 encoding if that's
the database encoding?
Because people using this database can happily insert any old non-LATIN1
junk into the database, then when I export as XML, all XML validation
fails because the encoding is not correct.
If this is
Hi,
1. Fire up psql.
2. Press Ctrl-4
3. Crash, core dump...
Chris
1. Fire up psql.
2. Press Ctrl-4
3. Crash, core dump...
SIGQUIT is supposed to do that.
Seems to be a libreadline thing:
#0 0x2827fd60 in sigprocmask () from /usr/lib/libc.so.4
#1 0x281f5adb in _rl_savestring () from /usr/lib/libreadline.so.4
#2 0xbfbfffac in ?? ()
#3 0x281f2254 in
postgres[30059]: [97-1] LOG: statement: INSERT INTO group_data (this_group_id,
item_text, link_path) VALUES ($1, $2, $3)
I really need to know the *real* arguments... How can I get them? Is it a bug?
The bug was that prepared statements didn't work properly in the past. That is
the statement
I am thinking we should allow exit by falling off the end of the
function when (a) it has output parameter(s), or (b) it is declared
RETURNS void. Comments?
I agree - makes sense.
Chris
I don't think that's a bug. You may not intend ever to cluster on that
index again, and if you try it will tell you about the problem.
Except it breaks the 'cluster everything' case:
test=# cluster;
ERROR: cannot cluster when index access method does not handle null values
HINT: You may be
Hey guys,
The 'Unicode characters above 0x10000' issue keeps rearing its ugly head
in the IRC channel. I propose that it be fixed, even backported...
This is John Hansen's most recent patch to fix it:
http://archives.postgresql.org/pgsql-patches/2004-11/msg00259.php
And from what I can tell it
pgfoundry.org and www.pgfoundry.org currently redirect to hub.org...
Chris
Hmmm, seems to have been fixed now...
Christopher Kings-Lynne wrote:
pgfoundry.org and www.pgfoundry.org currently redirect to hub.org...
Chris
But half of the postgresql functions are in the grammar anyway -
they're not even listed.
Should we look at adding stub functions into pg_proc for \df display
somehow?
Hi Teodor,
What exactly did you fix here?
Chris
Teodor Sigaev wrote:
I found several unpleasant blots in the comparison functions and committed
the changes to 7.4, 8.0 and head. Please check (it just needs a recompile
of the .so file)
Christopher Kings-Lynne wrote:
It's cached. This select should run only one time
d) Bringing PL/Java into core will force consistent documentation
and, I imagine, a chapter of its own in the main docs. I'm happy
to write most of it, but English is not my native language. Whatever
I put into print will always benefit from a review.
There is nothing stopping a chapter being
to 7.4, 8.0 and head. Please check (it just needs a recompile of the .so
file)
Christopher Kings-Lynne wrote:
It's cached. This select should run only one time per connection for
each used dictionary. If it's not, then it's a bug. I'll check it.
It probably is then - although I do use a persistent connection
You can drop a NOT NULL on a column, even if that column is part of an
index that is clustered, where the index does not index NULLs.
Also, I don't think that ALTER TABLE blah CLUSTER ON foo; actually warns
about clustering an index that does not index NULLs. However, CLUSTER foo ON
blah; does.
Chris
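A sketch of that sequence, assuming an rtree index (rtree did not index NULLs at the time; whether that was the access method actually involved is a guess, and the names here are made up):

```sql
CREATE TABLE blah (a box NOT NULL);
CREATE INDEX foo ON blah USING rtree (a);
CLUSTER foo ON blah;                            -- allowed while a is NOT NULL
ALTER TABLE blah ALTER COLUMN a DROP NOT NULL;  -- also allowed, no warning
CLUSTER;  -- fails: the index access method does not handle null values
```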
It's cached. This select should run only one time per connection for
each used dictionary. If it's not, then it's a bug. I'll check it.
It probably is then - although I do use a persistent connection pool,
but I wouldn't have thought that'd use more than a new connection every
once in a while?
Sorry, was in a rush before. I still don't have time to fix this for
8.0.2, so that's why I rushed out the report. Here is a full description...
You can drop a NOT NULL on a column, even if that column is part of an
index that is clustered, where the index does not index NULLs.
First, install
I don't think that's a bug. You may not intend ever to cluster on that
index again, and if you try it will tell you about the problem.
Except it breaks the 'cluster everything' case:
test=# cluster;
ERROR: cannot cluster when index access method does not handle null values
HINT: You may be able
There is an updated survey of open source developers:
http://flosspols.org/survey/survey_part.php?groupid=sd
Chris
Uh, who exactly agreed to that? I know when I do \df it's generally
to check out built-in functions not my own. I don't see this as an
improvement.
I only ever use \df to look at my own functions...
I'd prefer if no system functions were listed :)
Chris
I use \df to see what functions are available. I want to see them all.
But half of the postgresql functions are in the grammar anyway -
they're not even listed.
Chris
But half of the postgresql functions are in the grammar anyway -
they're not even listed.
Should we look at adding stub functions into pg_proc for \df display
somehow?
I'm not suggesting that at all. I was just pointing out that \df isn't
a useful view of 'what functions does postgresql have
I see this in my PQA analyzed PostgreSQL log:
Slowest queries
select dict_init, dict_initoption, dict_lexize from pg_ts_dict where oid
= $1
It's my number one slowest query apparently!
Can that lookup perhaps be cached in some way?
I notice that there is no unique index on the oid
One small objection is that we'd lose the ability to separately display
the time spent building the hash table in EXPLAIN ANALYZE output. It's
probably not super important, but might be a reason to keep two plan
nodes in the tree.
Would a separate hash node help for these kinds of queries in the
Fix two bugs in change_owner_recurse_to_sequences: it was grabbing an
overly strong lock on pg_depend, and it wasn't closing the rel when done.
The latter bug was masked by the ResourceOwner code, which is something
that should be changed.
I assume that this behaviour makes change owner on a table
On this page:
http://www.postgresql.org/docs/8.0/interactive/creating-cluster.html
It has this sequence:
root# mkdir /usr/local/pgsql/data
root# chown postgres /usr/local/pgsql/data
root# su postgres
postgres$ initdb -D /usr/local/pgsql/data
However, initdb will fail in this case, since the
Sorry, on further investigation this seems to work for everyone except
that guy - weird.
Chris
Christopher Kings-Lynne wrote:
On this page:
http://www.postgresql.org/docs/8.0/interactive/creating-cluster.html
It has this sequence:
root# mkdir /usr/local/pgsql/data
root# chown postgres /usr/local
When you turn on \x mode for query output in psql, it wrecks the output
of \d table, etc.
Should we change it so that the \d is unaffected by \x? What about for
other \d commands?
Well, they asked for \x so why is it wrong for us to \x the \d output
like we do now?
Because I have the feeling
Hey guys,
I really need answer to this one, for the PHP code I just committed :P
Chris
Christopher Kings-Lynne wrote:
Hi,
If I use PQgetCopyData, PQputCopyData and PQputCopyEnd against a v2
protocol backend, will it work?
I see no mention of it in the docs...
Chris
If you want to be my friend forever, then fix CLUSTER so that it uses
sharerowexclusive as well :D
Chris
Neil Conway wrote:
Neil Conway wrote:
AndrewSN pointed out on IRC that ALTER TABLE ... ADD FOREIGN KEY and
CREATE TRIGGER both acquire AccessExclusiveLocks on the table they are
adding
We have been telling people to use newer pg_dump's on older servers, but
we only support reloading into the current PostgreSQL version, so I see
no reason not to update it to the current syntax.
We added the new syntax in 7.3.
Added to TODO:
o Update pg_dump to use the newer COPY syntax
I think
Oh, if we do that, do we disallow connecting to older servers?
Not at all, since the logic would be like this:
if we have new copy functions
and we have protocol version function
and protocol version = 3
then use new copy functions
else
use old copy functions
That would be even
If I use PQgetCopyData, PQputCopyData and PQputCopyEnd against a v2
protocol backend, will it work?
I see no mention of it in the docs...
OK, my testing proves that they work just fine against an older server,
so no problem.
Chris
If you want to be my friend forever, then fix CLUSTER so that it uses
sharerowexclusive as well :D
I don't think it's as easy as that, because you have to move tuples
around in the cluster operation. Same sort of issue as vacuum full I would
suggest.
Cluster doesn't move rows...
I didn't say it
Huh, cluster already does that.
It does and it doesn't. Pretty much the first thing it does is muck
with the old table's filenode IIRC, meaning that immediately the old
table will no longer work.
Chris
From src/backend/tcop/postgres.c:
    appendStringInfo(str,
                     "!\t%ld/%ld [%ld/%ld] filesystem blocks in/out\n",
                     r.ru_inblock - Save_r.ru_inblock,
                     /* they only drink coffee at dec */
                     r.ru_oublock - Save_r.ru_oublock,
When you turn on \x mode for query output in psql, it wrecks the output
of \d table, etc.
Should we change it so that the \d is unaffected by \x? What about for
other \d commands?
Chris
ANSI SQL allows at most one OUT parameter for a function (which can be
used instead of having the function return a value via the usual means).
OK, so that answers my question above: a single OUT parameter should be
equated to an ordinary return value, not a RECORD, so as to emulate this
aspect of
Should psql and pg_dump be upgraded to use the new v3 protocol copy
functions if they are available, as they are currently using the
deprecated API.
Chris
Hi,
If I use PQgetCopyData, PQputCopyData and PQputCopyEnd against a v2
protocol backend, will it work?
I see no mention of it in the docs...
Chris
Even with Magnus' explanation that we're talking hardware, and not OS,
risk issues, I still think that the default should be the least risky,
with the other options being well explained from both a risk/performance
standpoint, so that it's a conscious decision on the admin's side ...
Any 'risk
Will PQexecParams automatically escape bytea data as it goes in, or must
one run it through PQescapeBytea first?
Neither. The data does not need escaping (assuming you pass it as a
parameter, of course.)
Even binary data? ie. You could upload a binary string straight into
PQexecParams with no
Uh, but that's what the BSD license allows --- relicensing as any other
license, including commercial.
The point remains that Chris, by himself, does not hold the copyright on
the PG docs and therefore cannot assign it to anyone.
ISTM the PHP guys are essentially saying that they will only take
Using libpq PQconnect function, what is the syntax for the 'options'
entry in the conninfo?
I think it's stuck straight into the backend command line, so whatever
you read in the 'postgres' reference page applies.
Oh, I thought it was for setting GUCs at connect time. Is that
possible? I
This may be totally irrelevant:
Our current load distributors, like pgpool, have no way of knowing the
side effects of backend functions. It would be interesting if the
client could send each potential query to the master saying, execute
this query if there are side effects, otherwise do no
I really don't intend to do that, and it does seem to happen a lot. I am
the first to admit I lack tact, but often times I view the decisions made
as rather arbitrary and lacking a larger perspective, but that is a rant I
don't want to get right now.
Perhaps it's your lack of a real name and
Perhaps it's your lack of a real name and complete anonymity (hence
invulnerability) that gets to people...
Is it fixed?
Yeah, hi Mark :)
Chris
Will PQexecParams automatically escape bytea data as it goes in, or must
one run it through PQescapeBytea first?
Chris
I'm currently adding support for the v3 protocol in PHP pgsql extension.
I'm wondering if anyone minds if I lift documentation wholesale from
the PostgreSQL docs for the PHP docs for these functions. For instance,
the fieldcodes allowed for PQresultErrorField, docs on
PQtransactionStatus,
Hi,
Using libpq PQconnect function, what is the syntax for the 'options'
entry in the conninfo? I think the docs should be updated to give an
example..
http://www.postgresql.org/docs/8.0/interactive/libpq.html#LIBPQ-CONNECT
Thanks,
Chris
Is there any compression or what?
Yes, there is:
http://www.postgresql.org/docs/8.0/interactive/storage-toast.html
Thanks. Is there any way to increase the limit above which the TOAST
strategy is selected? By default it is BLCKSZ/4, about 2000 bytes.
Dunno, but you can alter the column and go 'set
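Presumably the truncated advice refers to ALTER TABLE ... SET STORAGE, which changes a column's TOAST strategy (the threshold itself is a compile-time constant). Table and column names here are hypothetical:

```sql
ALTER TABLE docs ALTER COLUMN body SET STORAGE EXTERNAL; -- out-of-line, uncompressed
ALTER TABLE docs ALTER COLUMN body SET STORAGE MAIN;     -- prefer inline, compressed
```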
BUT after clustering triples according to an index on att1:
select relname, relpages from pg_class;
 relname | relpages
---------+----------
 triples |      142
142 pages * 8 KB per buffer: 142 * 8 * 1024 = 1,163,264 bytes
Is there any
Hi Evgen,
How's the syncing with HEAD going?
Cheers,
Chris
Evgen Potemkin wrote:
Ok, I've started porting it to 8.0.1 and will fix this also.
By the way, do you know of any test suite for such queries? To make some
regression tests.
Regards, Evgen
I tested your patch, and it's good work. I would all
Do you have a way to revert to the old installation to check whether
the checks fail again? It might be useful to track down exactly
what happened. It seems wrong that a currently-installed version
should have adverse effects on a just-built version's regression
tests.
No :)
Do you have a way to revert to the old installation to check whether
the checks fail again? It might be useful to track down exactly
what happened. It seems wrong that a currently-installed version
should have adverse effects on a just-built version's regression
tests.
We've seen that happen
Comments? Can anyone confirm whether DB2 or other databases allow
ungrouped column references with HAVING?
Oracle does not allow such references. It issues ORA-00979: not a
GROUP BY expression when you try to hand it such a reference.
MS SQL Server does not allow such references either, yielding
The current _pg_keypositions function generates the numbers from 1 to 32
using a massive union, shouldn't it just use generate_series instead (to
make it faster/simpler)?
Chris
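For illustration, the 1..32 list that the union builds can be produced with one call (generate_series is available from 8.0; the alias is arbitrary):

```sql
SELECT s.n FROM generate_series(1, 32) AS s(n);
```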
Only for very small values of current ...
Argh! Forgot it was a 7.4 server :) Oops.