On Sun, 2007-01-21 at 13:28 -0500, Tom Lane wrote:
Jaime Casanova [EMAIL PROTECTED] writes:
On 1/21/07, Simon Riggs [EMAIL PROTECTED] wrote:
- continue on error i.e. COMMIT can/might succeed - though there are
still cases where it cannot, such as a serializable exception.
and what
On Sun, 2007-01-21 at 14:26 -0600, Jim C. Nasby wrote:
On Sun, Jan 21, 2007 at 11:39:45AM +, Heikki Linnakangas wrote:
Russell Smith wrote:
Strange idea that I haven't researched: given that VACUUM can't be run in a
transaction, is it possible at a certain point to quit the current
On Sat, 2007-01-20 at 18:08, Merlin Moncure wrote:
[snip]
To be honest, I'm not a huge fan of psql tricks (error recovery being
another example) but this could provide a solution. In your opinion,
how would you use \if to query the transaction state?
Wouldn't it make sense to introduce
Hi there,
Is it possible to stop all user access to postgres, but still give access to
admin?
Just temporarily, not a security setup.
Something like, stop all users but allow user x and y.
thx
[EMAIL PROTECTED] wrote:
Hi there,
Is it possible to stop all user access to postgres, but still give
access to admin?
Just temporarily, not a security setup.
Something like, stop all users but allow user x and y.
You could restart in single user mode, or alter pg_hba.conf to allow the
Am Montag, 22. Januar 2007 10:32 schrieb [EMAIL PROTECTED]:
Is it possible to stop all user access to postgres, but still give access
to admin?
Make the appropriate adjustments to pg_hba.conf.
--
Peter Eisentraut
http://developer.postgresql.org/~petere/
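The pg_hba.conf approach could look something like this sketch (role names and addresses are illustrative; the first matching line wins, and the server needs a reload, e.g. pg_ctl reload, before changes take effect):

```
# TYPE  DATABASE  USER      CIDR-ADDRESS  METHOD
local   all       postgres                trust    # admin keeps access
host    all       admin     0.0.0.0/0     md5      # and so does this role
host    all       all       0.0.0.0/0     reject   # everyone else is refused
```

Reversing the change afterwards is just restoring the previous file and reloading again, which fits the "just temporarily" requirement.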
Hello,
There is a file 'compat_informix-dec_test-OpenBSD3.8.broken.stdout'
under the path 'src/interfaces/ecpg/test/expected/' in the recent
ftp snapshot (postgresql-snapshot.tar.gz).
All of the other files are placed under 'postgresql-snapshot/...'.
Is this intentional or a mistake?
Regards,
Russell Smith wrote:
2. Index cleanup is the most expensive part of vacuum. So doing a
partial vacuum actually means more I/O as you have to do index cleanup
more often.
I don't think that's usually the case. Index(es) are typically only a
fraction of the size of the table, and since 8.2 we
On Mon, Jan 22, 2007 at 08:00:26PM +0900, ITAGAKI Takahiro wrote:
There is a file 'compat_informix-dec_test-OpenBSD3.8.broken.stdout'
under the path 'src/interfaces/ecpg/test/expected/' in the recent
ftp snapshot (postgresql-snapshot.tar.gz).
All of the other files are placed under
On 1/22/07, Csaba Nagy [EMAIL PROTECTED] wrote:
On Sat, 2007-01-20 at 18:08, Merlin Moncure wrote:
[snip]
To be honest, I'm not a huge fan of psql tricks (error recovery being
another example) but this could provide a solution. In your opinion,
how would you use \if to query the transaction
I've been looking at the way we do vacuums.
The fundamental performance issue is that a vacuum generates
nheapblocks+nindexblocks+ndirtyblocks I/Os. Vacuum cost delay helps to
spread the cost like part payment, but the total is the same. In an I/O
bound system, the extra I/O directly leads to
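Heikki's I/O accounting can be written out as a tiny calculation (the block counts below are invented, for illustration only):

```python
# Total I/O for one vacuum, per the formula quoted above: every heap block
# is read, every index block is read, and every dirtied block is written
# back. Vacuum cost delay spreads these I/Os over time, but the sum is fixed.
def vacuum_ios(nheapblocks, nindexblocks, ndirtyblocks):
    return nheapblocks + nindexblocks + ndirtyblocks

# e.g. a 100k-block heap with 15k index blocks, dirtying 5k blocks:
total = vacuum_ios(100_000, 15_000, 5_000)
print(total)  # 120000
```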
On 1/21/07, Tom Lane [EMAIL PROTECTED] wrote:
Jaime Casanova [EMAIL PROTECTED] writes:
On 1/21/07, Simon Riggs [EMAIL PROTECTED] wrote:
- continue on error i.e. COMMIT can/might succeed - though there are
still cases where it cannot, such as a serializable exception.
and what should be the
On Mon, 2007-01-22 at 09:25 -0500, Merlin Moncure wrote:
On 1/21/07, Tom Lane [EMAIL PROTECTED] wrote:
Jaime Casanova [EMAIL PROTECTED] writes:
On 1/21/07, Simon Riggs [EMAIL PROTECTED] wrote:
- continue on error i.e. COMMIT can/might succeed - though there are
still cases where it
On 1/22/07, Simon Riggs [EMAIL PROTECTED] wrote:
Could you post an example, just so we're all clear what the problems
are? I thought I understood what you are requesting; I may not.
ok,
The short version is I would like the ability to run some sql commands
and recover the transaction if an
There is currently a performance tip to run ANALYZE after a pg_dump
script has been restored.
ISTM that pg_dump should default to have ANALYZE statements for a table,
when both table definition and data are dumped. If only data (-a) or
only table definition (-s) is dumped the default should be to
Merlin Moncure [EMAIL PROTECTED] writes:
On 1/22/07, Simon Riggs [EMAIL PROTECTED] wrote:
Could you post an example, just so we're all clear what the problems
are? I thought I understood what you are requesting; I may not.
ok,
The short version is I would like the ability to run some sql
On Mon, Jan 22, 2007 at 10:40:37AM -0500, Merlin Moncure wrote:
The short version is I would like the ability to run some sql commands
snip
any error updating foo or bar will blow up the whole thing. Maybe
this is desirable, but it is often nice to be able to do some error
handling here.
Simon Riggs [EMAIL PROTECTED] writes:
ISTM that pg_dump should default to have ANALYZE statements for a table,
when both table definition and data are dumped. If only data (-a) or
only table definition (-s) is dumped the default should be to *not* add
an ANALYZE statement.
Having pg_dump emit
Simon Riggs wrote:
There is currently a performance tip to run ANALYZE after a pg_dump
script has been restored.
ISTM that pg_dump should default to have ANALYZE statements for a table,
when both table definition and data are dumped. If only data (-a) or
only table definition (-s) is dumped the
Simon Riggs [EMAIL PROTECTED] writes:
There doesn't seem to be any reason to skip the ANALYZE, but I'll
implement it as an option.
-z on | off
--analyze=on | off
Only an aesthetic comment:
Short options don't usually take on/off arguments, I would suggest making the
default be to
On Mon, 2007-01-22 at 10:46 -0500, Tom Lane wrote:
Merlin Moncure [EMAIL PROTECTED] writes:
On 1/22/07, Simon Riggs [EMAIL PROTECTED] wrote:
Could you post an example, just so we're all clear what the problems
are? I thought I understood what you are requesting; I may not.
ok,
The
Heikki Linnakangas wrote:
Russell Smith wrote:
2. Index cleanup is the most expensive part of vacuum. So doing a
partial vacuum actually means more I/O as you have to do index cleanup
more often.
I don't think that's usually the case. Index(es) are typically only a
fraction of the
Bruce Momjian wrote:
Heikki Linnakangas wrote:
Russell Smith wrote:
2. Index cleanup is the most expensive part of vacuum. So doing a
partial vacuum actually means more I/O as you have to do index cleanup
more often.
I don't think that's usually the case. Index(es) are typically only a
Simon Riggs [EMAIL PROTECTED] writes:
BEGIN;
stmt1;
stmt2; -- error
stmt3;
COMMIT;
results in stmt3 completing successfully even though stmt1 and stmt2 do
not == broken script.
stmt1 would still be completed successfully.
--
Gregory Stark
EnterpriseDB
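For reference, this is roughly what the existing psql feature being debated in this thread does (a sketch; table t is hypothetical and this needs a live server):

```sql
\set ON_ERROR_ROLLBACK on
-- psql now issues an implicit SAVEPOINT before each statement inside a
-- transaction block, releasing it on success and rolling back to it on error.
BEGIN;
INSERT INTO t VALUES (1);    -- stmt1: succeeds
INSERT INTO t VALUES ('x');  -- stmt2: fails, rolled back to its savepoint
INSERT INTO t VALUES (3);    -- stmt3: still runs
COMMIT;                      -- commits stmt1 and stmt3
```

What is being asked for beyond this is a way to scope the recovery across a group of statements rather than one savepoint per statement.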
On Mon, 2007-01-22 at 10:49 -0500, Tom Lane wrote:
Simon Riggs [EMAIL PROTECTED] writes:
ISTM that pg_dump should default to have ANALYZE statements for a table,
when both table definition and data are dumped. If only data (-a) or
only table definition (-s) is dumped the default should be
On 1/22/07, Martijn van Oosterhout kleptog@svana.org wrote:
we're talking about psql, maybe all we need to do is create the
commands in psql:
\begin_nest
\commit_nest
\rollback_nest
That would work if we could rollback conditionally on failure (like
on_error_rollback but with definable
On Mon, 2007-01-22 at 16:11 +, Gregory Stark wrote:
Simon Riggs [EMAIL PROTECTED] writes:
BEGIN;
stmt1;
stmt2; -- error
stmt3;
COMMIT;
results in stmt3 completing successfully even though stmt1 and stmt2 do
not == broken script.
stmt1 would still be completed
On 1/22/07, Tom Lane [EMAIL PROTECTED] wrote:
I'm getting tired of repeating this, but: neither of you have said
anything that doesn't appear to me to be handled by ON_ERROR_ROLLBACK.
What exactly is lacking in that feature?
* the ability to span the savepoint across multiple statements.
* the
On Mon, Jan 22, 2007 at 11:21:12AM -0500, Merlin Moncure wrote:
\begin_nest
\commit_nest
\rollback_nest
That would work if we could rollback conditionally on failure (like
on_error_rollback but with definable beginning and ending points). I
Sorry, rollback conditionally on failure isn't
On 1/22/07, Tom Lane [EMAIL PROTECTED] wrote:
The short version is I would like the ability to run some sql commands
and recover the transaction if an error occurs.
I'm getting tired of repeating this, but: neither of you have said
anything that doesn't appear to me to be handled by
Heikki Linnakangas wrote:
Bruce Momjian wrote:
Heikki Linnakangas wrote:
Russell Smith wrote:
2. Index cleanup is the most expensive part of vacuum. So doing a
partial vacuum actually means more I/O as you have to do index cleanup
more often.
I don't think that's usually the case.
On Mon, 2007-01-22 at 12:18 -0500, Bruce Momjian wrote:
Heikki Linnakangas wrote:
In any case, for the statement "Index cleanup is the most expensive part
of vacuum" to be true, your indexes would have to take up 2x as much
space as the heap, since the heap is scanned twice. I'm sure
Bruce Momjian [EMAIL PROTECTED] writes:
I agree index cleanup isn't 50% of vacuum. I was trying to figure
out how small, and it seems about 15% of the total table, which means if
we have bitmap vacuum, we can conceivably reduce vacuum load by perhaps
80%, assuming 5% of the table is
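Filling in the figures quoted above with assumed values (illustrative only; the sentence is cut off, so the 5% is taken here as the dirty fraction), the "perhaps 80%" reduction checks out roughly:

```python
# Illustrative numbers only: heap scanned twice per vacuum, index cleanup
# assumed ~15% of a full pass, 5% of the heap dirty.
heap = 1.0                 # heap size, in "table units"
index_cleanup = 0.15       # index cleanup cost relative to the heap
dirty_fraction = 0.05

full_vacuum = 2 * heap + index_cleanup              # both heap passes + indexes
bitmap_vacuum = 2 * dirty_fraction + index_cleanup  # touch only dirty blocks
reduction = 1 - bitmap_vacuum / full_vacuum
print(round(reduction, 2))  # 0.88, in the ballpark of the claimed 80%
```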
Yep, agreed on the random I/O issue. The larger question is if you have
a huge table, do you care to reclaim 3% of the table size, rather than
just vacuum it when it gets to 10% dirty? I realize the vacuum is going
to take a lot of time, but vacuuming to reclaim 3% three times seems like
it is
Bruce Momjian [EMAIL PROTECTED] writes:
Yep, agreed on the random I/O issue. The larger question is if you have
a huge table, do you care to reclaim 3% of the table size, rather than
just vacuum it when it gets to 10% dirty? I realize the vacuum is going
to take a lot of time, but
On Mon, 2007-01-22 at 13:27 -0500, Bruce Momjian wrote:
Yep, agreed on the random I/O issue. The larger question is if you have
a huge table, do you care to reclaim 3% of the table size, rather than
just vacuum it when it gets to 10% dirty? I realize the vacuum is going
to take a lot of
On 1/22/07, Martijn van Oosterhout kleptog@svana.org wrote:
I don't understand this either. Everything you can do with nested
transactions you can also do with savepoints, so I'm really not
understanding what the limitations are?
Actually, looking at the savepoint documentation, it looks like
On 1/22/07, Dawid Kuroczko [EMAIL PROTECTED] wrote:
On 1/22/07, Tom Lane [EMAIL PROTECTED] wrote:
The short version is I would like the ability to run some sql commands
and recover the transaction if an error occurs.
I'm getting tired of repeating this, but: neither of you have said
Bruce Momjian wrote:
Yep, agreed on the random I/O issue. The larger question is if you have
a huge table, do you care to reclaim 3% of the table size, rather than
just vacuum it when it gets to 10% dirty? I realize the vacuum is going
to take a lot of time, but vacuuming to reclaim 3%
Merlin Moncure wrote:
On 1/22/07, Dawid Kuroczko [EMAIL PROTECTED] wrote:
one for each INSERT+UPDATE block. This way eiher both of them succeed
or fail, within one transaction.
i think so...Martijn said it best: you can 'rollback' to, but you
can't 'commit' to. The 'commit to' would be
Bruce Momjian wrote:
Yep, agreed on the random I/O issue. The larger question is if you have
a huge table, do you care to reclaim 3% of the table size, rather than
just vacuum it when it gets to 10% dirty? I realize the vacuum is going
to take a lot of time, but vacuuming to reclaim 3% three
Gregory Stark wrote:
Bruce Momjian [EMAIL PROTECTED] writes:
I agree index cleanup isn't 50% of vacuum. I was trying to figure
out how small, and it seems about 15% of the total table, which means if
we have bitmap vacuum, we can conceivably reduce vacuum load by perhaps
80%, assuming 5%
On 1/22/07, Alvaro Herrera [EMAIL PROTECTED] wrote:
i think so...Martijn said it best: you can 'rollback' to, but you
can't 'commit' to. The 'commit to' would be the arguably much more
useful way of disposing of a savepoint. But that should be taken up
with the SQL standards committee :(.
You
Hi,
This is how I think autovacuum should change with an eye towards being
able to run multiple vacuums simultaneously:
1. There will be two kinds of processes, autovacuum launcher and
autovacuum worker.
2. The launcher will be in charge of scheduling and will tell workers
what to do
3. The
Kenneth Marshall wrote:
On Mon, Jan 22, 2007 at 06:42:09PM +, Simon Riggs wrote:
Hold that thought! Read Heikki's Piggyback VACUUM idea on new thread...
There may be other functions that could leverage a similar sort of
infrastructure. For example, a long DB mining query could be
Alvaro Herrera wrote:
Bruce Momjian wrote:
Yep, agreed on the random I/O issue. The larger question is if you have
a huge table, do you care to reclaim 3% of the table size, rather than
just vacuum it when it gets to 10% dirty? I realize the vacuum is going
to take a lot of time, but
Gregory Stark wrote:
Actually no. A while back I did experiments to see how fast reading a file
sequentially was compared to reading the same file sequentially but skipping
x% of the blocks randomly. The results were surprising (to me) and depressing.
The breakeven point was about 7%. [...]
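A toy model of that experiment (names invented; this counts seeks rather than timing real I/O): each contiguous run of blocks actually read costs one seek to start it, so randomly skipping a small percentage of blocks saves little data while fragmenting the read into many runs.

```python
def count_seeks(kept_blocks):
    """Number of contiguous runs in an ascending list of block numbers;
    each run is one sequential stretch, i.e. one seek to start it."""
    seeks, prev = 0, None
    for b in kept_blocks:
        if prev is None or b != prev + 1:
            seeks += 1
        prev = b
    return seeks

print(count_seeks(range(10)))                    # full sequential read: 1 seek
print(count_seeks([0, 1, 2, 3, 4, 6, 7, 8, 9]))  # skip one block: 2 seeks
```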
-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Gregory Stark
Sent: Monday, 22 January 2007 19:41
To: Bruce Momjian
Cc: Heikki Linnakangas; Russell Smith; Darcy Buskermolen;
Simon Riggs; Alvaro Herrera; Matthew T. O'Connor; Pavan
Deolasee; Christopher
Alvaro Herrera wrote:
This is how I think autovacuum should change with an eye towards being
able to run multiple vacuums simultaneously:
[snip details]
Does this raise some red flags? It seems straightforward enough to me;
I'll submit a patch implementing this, so that scheduling will
Or so... :)
Thought I would do a poll of what is happening in the world for 8.3. I have:
Alvaro Herrera: Autovacuum improvements (maintenance window etc..)
Gavin Sherry: Bitmap Indexes (on disk), possible basic Window functions
Jonah Harris: WITH/Recursive Queries?
Andrei Kovalesvki: Some Win32
On 1/22/07, Joshua D. Drake [EMAIL PROTECTED] wrote:
Jonah Harris: WITH/Recursive Queries?
Yup, just talked with Bruce about this last week. Working on the design now.
--
Jonah H. Harris, Software Architect | phone: 732.331.1324
EnterpriseDB Corporation| fax: 732.331.1301
33 Wood
On Mon, Jan 22, 2007 at 06:42:09PM +, Simon Riggs wrote:
On Mon, 2007-01-22 at 13:27 -0500, Bruce Momjian wrote:
Yep, agreed on the random I/O issue. The larger question is if you have
a huge table, do you care to reclaim 3% of the table size, rather than
just vacuum it when it gets to
On Mon, 2007-01-22 at 13:41 +, Heikki Linnakangas wrote:
Any thoughts before I start experimenting?
Probably only to detail the various use cases we are discussing.
My thoughts on various use cases are:
- small table with frequent update/delete, heap and indexes all/mostly
cached
e.g.
On Mon, Jan 22, 2007 at 07:24:20PM +, Heikki Linnakangas wrote:
Kenneth Marshall wrote:
On Mon, Jan 22, 2007 at 06:42:09PM +, Simon Riggs wrote:
Hold that thought! Read Heikki's Piggyback VACUUM idea on new thread...
There may be other functions that could leverage a similar sort of
Tom Lane wrote:
Bruce Momjian [EMAIL PROTECTED] writes:
I presume the reason for that is that the first parameter can be
qualified:
select pg_get_serial_sequence('public.FOO', 'Ff1');
Would someone explain why qualification makes us lowercase the first
parameter by default? I don't
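A hedged illustration of the folding behavior under discussion (table and column names invented): the first argument is parsed like an SQL identifier, so it is down-cased unless double-quoted within the string, while the second argument is taken literally:

```sql
SELECT pg_get_serial_sequence('public.FOO', 'Ff1');    -- looks up public.foo
SELECT pg_get_serial_sequence('public."FOO"', 'Ff1');  -- looks up public."FOO"
```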
Hello,
I'm trying to run pg_dump programmatically, and I'm wondering how I can send
it a password.
I'm running it with a system() call in a child process, and sending the
password in from the parent process on stdin, but somehow pg_dump always
finds out how to ask my terminal for a password.
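One way to avoid the terminal prompt (a sketch; paths and credentials are illustrative): pg_dump, like other libpq clients, reads passwords from the file named by PGPASSFILE (default ~/.pgpass), one hostname:port:database:username:password entry per line.

```shell
# create a throwaway password file and point libpq at it
PGPASSFILE=$(mktemp)
export PGPASSFILE
echo 'localhost:5432:mydb:myuser:secret' > "$PGPASSFILE"
chmod 600 "$PGPASSFILE"    # libpq ignores the file unless permissions are 0600
# pg_dump mydb > mydb.sql  # would now run without prompting for a password
cat "$PGPASSFILE"
```

Setting the PGPASSWORD environment variable before the system() call also works, though on some platforms environment variables are visible to other users.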
On Mon, 22 Jan 2007, Joshua D. Drake wrote:
Or so... :)
Thought I would do a poll of what is happening in the world for 8.3. I have:
Alvaro Herrera: Autovacuum improvements (maintenance window etc..)
Gavin Sherry: Bitmap Indexes (on disk), possible basic Window functions
Jonah Harris:
shaunc wrote:
Hello,
I'm trying to run pg_dump programmatically, and I'm wondering how I can send
it a password.
I'm running it with a system() call in a child process, and sending the
password in from the parent process on stdin, but somehow pg_dump always
finds out how to ask my terminal for
On Mon, Jan 22, 2007 at 02:51:47PM +, Heikki Linnakangas wrote:
I've been looking at the way we do vacuums.
The fundamental performance issue is that a vacuum generates
nheapblocks+nindexblocks+ndirtyblocks I/Os. Vacuum cost delay helps to
spread the cost like part payment, but the
On Mon, Jan 22, 2007 at 12:17:39PM -0800, Ron Mayer wrote:
Gregory Stark wrote:
Actually no. A while back I did experiments to see how fast reading a file
sequentially was compared to reading the same file sequentially but skipping
x% of the blocks randomly. The results were surprising
On Mon, Jan 22, 2007 at 03:40:17PM +, Simon Riggs wrote:
This would add a table-specific ANALYZE statement following each table's
actions.
It'd probably be best to put it before any index creating activities,
since there's a better chance of everything from the table being in
shared
bruce wrote:
Tom Lane wrote:
Alvaro Herrera [EMAIL PROTECTED] writes:
Bruce Momjian wrote:
OK, maybe /doc or src/tools. A more radical approach would be to put
the list in our documentation, or have initdb install it.
Why not put it in techdocs or some such?
I think we've
On Mon, Jan 22, 2007 at 04:24:28PM -0300, Alvaro Herrera wrote:
4. Launcher will be a continuously-running process, akin to bgwriter;
connected to shared memory
So would it use up a database connection?
5. Workers will be direct postmaster children; so postmaster will get
SIGCHLD when
On Jan 22, 2007, at 11:16 AM, Richard Huxton wrote:
Bruce Momjian wrote:
Yep, agreed on the random I/O issue. The larger question is if
you have
a huge table, do you care to reclaim 3% of the table size, rather
than
just vacuum it when it gets to 10% dirty? I realize the vacuum is
Thought I would do a poll of what is happening in the world for 8.3. I have:
Alvaro Herrera: Autovacuum improvements (maintenance window etc..)
Gavin Sherry: Bitmap Indexes (on disk), possible basic Window functions
Jonah Harris: WITH/Recursive Queries?
Andrei Kovalesvki: Some Win32 work
On Mon, Jan 15, 2007 at 10:13:16AM -0500, Andrew Dunstan wrote:
Neil Conway wrote:
On Thu, 2007-01-11 at 14:36 -0500, Neil Conway wrote:
I don't think they need to be integrated any time soon, but if we were
to design pg_dump and pg_dumpall from scratch, it seems more logical to
use a
elein wrote:
On Mon, Jan 15, 2007 at 10:13:16AM -0500, Andrew Dunstan wrote:
Neil Conway wrote:
On Thu, 2007-01-11 at 14:36 -0500, Neil Conway wrote:
I don't think they need to be integrated any time soon, but if we were
to design pg_dump and pg_dumpall from scratch, it seems more logical
Michael Meskes [EMAIL PROTECTED] wrote:
There is a file 'compat_informix-dec_test-OpenBSD3.8.broken.stdout'
under the path 'src/interfaces/ecpg/test/expected/' in the recent
ftp snapshot (postgresql-snapshot.tar.gz).
Would you please care to explain? I do not understand what you are
Would it be useful to have an SPI function that returns the OID and
namespace of the function being executed?
The reason I bring this up is due to a discussion on the PostGIS lists
about making the installation able to work in any namespace from one
generic SQL script.
The problem they have is
On Mon, 2007-01-22 at 14:16 -0800, Joshua D. Drake wrote:
I am sure there are more, the ones with question marks are unknowns but
heard of in the ether somewhere. Any additions or confirmations?
I'd still like to make an attempt at my Synchronized Scanning patch.
If freeze is 10 weeks away,
Joshua D. Drake [EMAIL PROTECTED] wrote:
Thought I would do a poll of what is happening in the world for 8.3. I have:
Alvaro Herrera: Autovacuum improvements (maintenance window etc..)
Gavin Sherry: Bitmap Indexes (on disk), possible basic Window functions
Jonah Harris: WITH/Recursive
* Joshua D. Drake ([EMAIL PROTECTED]) wrote:
Thought I would do a poll of what is happening in the world for 8.3. I have:
It seems unlikely that I'm going to have time at the rate things are
going but I was hoping to take a whack at default permissions/ownership
by schema. Kind of a umask-type
* Joshua D. Drake ([EMAIL PROTECTED]) wrote:
Thought I would do a poll of what is happening in the world for 8.3. I have:
Another thing which was mentioned previously which I'd really like to
see happen (and was discussed on the list...) is replacing the Kerberos
support with GSSAPI support and
Greetings,
* Stephen Frost ([EMAIL PROTECTED]) wrote:
It seems unlikely that I'm going to have time at the rate things are
going but I was hoping to take a whack at default permissions/ownership
by schema. Kind of a umask-type thing but for schemas instead of roles
(though I've thought about
Heikki Linnakangas [EMAIL PROTECTED] wrote:
Vacuum is done in 3 phases:
1. Scan heap
2. Vacuum index
3. Vacuum heap
A variation of the scheme would be to keep scanning pages that are in
cache, until the tid list reaches a predefined size, instead of keeping
track of which pages have
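That variation can be sketched in miniature (all names invented; a real vacuum tracks tids per heap page and bounds the list by maintenance_work_mem):

```python
# Collect dead tuple ids while scanning the heap; each time the tid list
# fills, run an index-vacuum pass and start the list over.
def vacuum_passes(dead_tids_per_page, tid_list_limit):
    """Return how many index-vacuum passes a heap scan would trigger."""
    passes, tids = 0, 0
    for ndead in dead_tids_per_page:
        tids += ndead
        if tids >= tid_list_limit:
            passes += 1
            tids = 0
    if tids:
        passes += 1          # final pass for the leftover tids
    return passes

# 10 pages with 100 dead tuples each, list capped at 300 tids:
print(vacuum_passes([100] * 10, 300))  # 4
```

The trade-off the thread is circling is visible here: a smaller tid list means more index passes, which is exactly the extra index-cleanup I/O objection raised earlier.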
We are trying to develop the updateable cursors functionality into
Postgresql. I have given below details of the design and also issues we are
facing. Looking forward to the advice on how to proceed with these issues.
Rgds,
Arul Shaji
1. Introduction
--
This is a combined
FAST PostgreSQL wrote:
We are trying to develop the updateable cursors functionality into
Postgresql. I have given below details of the design and also issues we are
facing. Looking forward to the advice on how to proceed with these issues.
Rgds,
Arul Shaji
Would this be something that
Jim C. Nasby wrote:
On Mon, Jan 22, 2007 at 04:24:28PM -0300, Alvaro Herrera wrote:
4. Launcher will be a continuously-running process, akin to bgwriter;
connected to shared memory
So would it use up a database connection?
No. It's connected to shared memory and has access to pgstats,
On Tue, 23 Jan 2007 15:48, Joshua D. Drake wrote:
FAST PostgreSQL wrote:
We are trying to develop the updateable cursors functionality into
Postgresql. I have given below details of the design and also issues we
are facing. Looking forward to the advice on how to proceed with these
FAST PostgreSQL wrote:
On Tue, 23 Jan 2007 15:48, Joshua D. Drake wrote:
FAST PostgreSQL wrote:
We are trying to develop the updateable cursors functionality into
Postgresql. I have given below details of the design and also issues we
are facing. Looking forward to the advice on how to
Stephen Frost [EMAIL PROTECTED] writes:
Following up on my reply to Joshua, what I'd like to propose is, for
comments and suggestions:
ALTER SCHEMA name [ [ WITH ] [ DEFAULT ] option [ ... ] ]
where option can be:
{ GRANT { { SELECT | INSERT | UPDATE | DELETE | RULE |
Joshua D. Drake wrote:
Great! I will put it on my 'Remember to bug Arul' list :)
Hey Joshua,
could you put this stuff here:
http://developer.postgresql.org/index.php/Todo:WishlistFor83
I will try to find some time during this week (likely on the weekend) to
also try and figure out if these
Jim C. Nasby [EMAIL PROTECTED] writes:
On Mon, Jan 22, 2007 at 03:40:17PM +, Simon Riggs wrote:
This would add a table-specific ANALYZE statement following each table's
actions.
It'd probably be best to put it before any index creating activities,
No, because then you'd fail to
Lukas Kahwe Smith wrote:
Joshua D. Drake wrote:
Great! I will put it on my 'Remember to bug Arul' list :)
Hey Joshua,
could you put this stuff here:
http://developer.postgresql.org/index.php/Todo:WishlistFor83
Sure if you bother to unlock the page for me ;)
I will try to find some
On Mon, Jan 22, 2007 at 05:51:53PM +, Gregory Stark wrote:
Actually no. A while back I did experiments to see how fast reading a file
sequentially was compared to reading the same file sequentially but skipping
x% of the blocks randomly. The results were surprising (to me) and depressing.