Hi,
I already started a discussion about this. It is possible to view the
logical I/Os if we enable log_statement_stats in the conf file and apply the
following patch. But putting it in EXPLAIN ANALYZE makes more sense to me.
*** postgresql-8.3beta1/src/backend/storage/buffer/bufmgr.c
NikhilS <[EMAIL PROTECTED]> writes:
> Any errors which occur before doing the heap_insert should not, in my
> opinion, require any recovery.
A sufficient (though far from all-encompassing) rejoinder to that is
"triggers and CHECK constraints can do anything".
> The overhead of having a subtransac
Gregory Stark <[EMAIL PROTECTED]> writes:
>> On Dec 14, 2007 6:42 PM, Tom Lane <[EMAIL PROTECTED]> wrote:
>>> How do people feel about applying this to 8.3, rather than holding it?
> I think it would have been better to apply before beta. We would have found
> out if users were going to complain a
Hi,
>
> Another approach would be to distinguish between errors that require a
> subtransaction to recover to a consistent state, and less serious errors
> that don't have this requirement (e.g. invalid input to a data type
> input function). If all the errors that we want to tolerate during a
> b
"Jonah H. Harris" <[EMAIL PROTECTED]> writes:
> On Dec 14, 2007 6:42 PM, Tom Lane <[EMAIL PROTECTED]> wrote:
>> How do people feel about applying this to 8.3, rather than holding it?
>> One possible objection is that we're past string freeze, but I noted
>> Peter doing some message editorializing
Josh Berkus <[EMAIL PROTECTED]> writes:
> There's no way we can do a transactionless load, then? I'm thinking of the
> load-into-new-partition which is a single pass/fail operation. Would
> ignoring individual row errors for this case still cause these kinds of
> problems?
Given that COPY
On Friday 2007-12-14 16:22, Tom Lane wrote:
> Neil Conway <[EMAIL PROTECTED]> writes:
> > By modifying COPY: COPY IGNORE ERRORS or some such would instruct COPY
> > to drop (and log) rows that contain malformed data. That is, rows with
> > too many or too few columns, rows that result in constraint
On Fri, 2007-12-14 at 15:47 +, Gregory Stark wrote:
> I've wanted for a long time to have EXPLAIN ANALYZE output per-node I/O usage.
> This would be especially useful if we could distinguish hardware versus
> logical I/O though. And I always thought that would be very hard.
>
> My thought in t
Tom,
> I think such an approach is doomed to hopeless unreliability. There is
> no concept of an error that doesn't require a transaction abort in the
> system now, and that doesn't seem to me like something that can be
> successfully bolted on after the fact. Also, there's a lot of
> bookkeepin
Neil Conway <[EMAIL PROTECTED]> writes:
> One approach would be to essentially implement the pg_bulkloader
> approach inside the backend. That is, begin by doing a subtransaction
> for every k rows (with k = 1000, say). If you get any errors, then
> either repeat the process with k/2 until you loca
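The batch-splitting idea above can be sketched in Python. This is a toy model, not pg_bulkloader's actual code: `insert` stands in for a subtransaction that either commits a whole batch or rolls it all back, and the function name and error type are illustrative.

```python
def load_with_bisection(rows, insert, k=1000):
    """Load rows in batches of k; when a batch fails, bisect it until the
    offending rows are isolated, so all the good rows still get loaded."""
    bad = []

    def load_batch(batch):
        try:
            insert(batch)  # one subtransaction: all-or-nothing
        except ValueError:
            if len(batch) == 1:
                bad.append(batch[0])  # isolated a bad row; skip and log it
            else:
                mid = len(batch) // 2
                load_batch(batch[:mid])
                load_batch(batch[mid:])

    for i in range(0, len(rows), k):
        load_batch(rows[i:i + k])
    return bad
```

The cost is roughly one extra subtransaction per failure times log2(k), which is why the thread worries about subtransaction overhead when errors are frequent.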
On Fri, 2007-12-14 at 18:22 -0500, Tom Lane wrote:
> If we could somehow only do a subtransaction per failure, things would
> be much better, but I don't see how.
One approach would be to essentially implement the pg_bulkloader
approach inside the backend. That is, begin by doing a subtransaction
On Dec 14, 2007 6:42 PM, Tom Lane <[EMAIL PROTECTED]> wrote:
> How do people feel about applying this to 8.3, rather than holding it?
> One possible objection is that we're past string freeze, but I noted
> Peter doing some message editorializing as recently as today, so it
> would seem a slushy fr
Simon Riggs <[EMAIL PROTECTED]> writes:
> I even found an existing, unused error message called
> ERRCODE_INVALID_LIMIT_VALUE
That's a bad idea I think. That code is defined by SQL99. I can't find
anyplace that they specify what it should be raised for, but we can be
pretty confident that it's n
Neil Conway <[EMAIL PROTECTED]> writes:
> By modifying COPY: COPY IGNORE ERRORS or some such would instruct COPY
> to drop (and log) rows that contain malformed data. That is, rows with
> too many or too few columns, rows that result in constraint violations,
> and rows containing columns where the
Neil Conway wrote:
On Fri, 2007-12-14 at 14:48 +0200, Hannu Krosing wrote:
How did you do it?
Did you enhance the COPY command or was it something completely new?
By modifying COPY: COPY IGNORE ERRORS or some such would instruct COPY
to drop (and log) rows that contain malformed data
On Fri, 2007-12-14 at 14:48 +0200, Hannu Krosing wrote:
> How did you do it?
>
> Did you enhance the COPY command or was it something completely new?
By modifying COPY: COPY IGNORE ERRORS or some such would instruct COPY
to drop (and log) rows that contain malformed data. That is, rows with
too ma
On Thu, Dec 13, 2007 at 11:31:17PM -0500, Merlin Moncure wrote:
>
> for historical record, this comment (subject not directly related to
> the OP) was probably this:
> http://www.mail-archive.com/pgsql-hackers@postgresql.org/msg62562.html
Bingo. Thanks!
A
On Fri, Dec 14, 2007 at 09:02:04AM +, Gregory Stark wrote:
>
> Oh, and incidentally the problem with WARNING is that this is DML which could
> potentially be executing hundreds or thousands of times per minute. A WARNING
> is effectively an ERROR.
Good point. Also, the sort of case where you
----- Forwarded message from Adriaan van Os <[EMAIL PROTECTED]> -----
From: Adriaan van Os <[EMAIL PROTECTED]>
To: Alvaro Herrera <[EMAIL PROTECTED]>
Date: Fri, 14 Dec 2007 17:00:19 +0100
Subject: Getting multiple values from a sequence generator
Message-ID: <[EMAIL PROTECTED]>
Following up on a
I've wanted for a long time to have EXPLAIN ANALYZE output per-node I/O usage.
This would be especially useful if we could distinguish hardware versus
logical I/O though. And I always thought that would be very hard.
My thought in the past was that we could do it on Solaris by having
Postgres
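The logical-versus-hardware distinction above can be illustrated with a toy buffer cache (class and field names are mine, not the backend's): every page access is a logical I/O, but only a cache miss also costs a physical read.

```python
class ToyBufferCache:
    """Minimal model of shared buffers for counting per-node I/O."""

    def __init__(self):
        self.pages = set()
        self.logical = 0   # counts every page access
        self.physical = 0  # counts only cache misses

    def read(self, page_id):
        self.logical += 1
        if page_id not in self.pages:
            self.physical += 1   # would be an actual read from storage
            self.pages.add(page_id)
```

Even the "physical" reads here may be absorbed by the OS page cache in practice, which is one reason measuring true hardware I/O per plan node is hard.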
Magnus Hagander wrote:
On Fri, Dec 14, 2007 at 03:39:14PM +, Dave Page wrote:
Andrew Dunstan wrote:
Writing and calling a temp .bat file might be yucky - having to keep two
environment files is a lot more yucky, IMNSHO, and we shouldn't make
people do it.
+1
Ok, I g
Magnus Hagander wrote:
> On Thu, Dec 13, 2007 at 09:55:33AM -0300, Alvaro Herrera wrote:
> >
> > Many of these are nonsensical -- we know this is not a device, nor
> > network access. Still there is more than one possibility, and I don't
> > know which ones should be really acceptable in this con
On Fri, Dec 14, 2007 at 03:39:14PM +, Dave Page wrote:
> Andrew Dunstan wrote:
> > Writing and calling a temp .bat file might be yucky - having to keep two
> > environment files is a lot more yucky, IMNSHO, and we shouldn't make
> > people do it.
>
> +1
Ok, I guess I'm outvoted ;-) I don't ca
Andrew Dunstan wrote:
> Writing and calling a temp .bat file might be yucky - having to keep two
> environment files is a lot more yucky, IMNSHO, and we shouldn't make
> people do it.
+1
/D
On Fri, 14 Dec 2007, Zeugswetter Andreas ADI SD wrote:
I don't follow. The problem is not writes but reads. And if the reads
are random enough no cache controller will help.
The specific example Tom was running was, in his words, "100% disk write
bound". I was commenting on why I thought tha
Magnus Hagander wrote:
Well, I honestly think we can live with it for one cycle. As soon as 8.4
opens I'll get to work converting these .bat files to pure one line
wrappers.
Ok. We obviously don't agree on what to do here, so let's open it up for
somebody
else to comment on
The docs contain the following example of an archive_command for use
with standalone hot backup:
test -f /var/lib/pgsql/backup_in_progress && cp -i %p
/var/lib/pgsql/archive/%f
Unfortunately, as I found out when I tried it, this command returns a
non-zero exit status (on Linux/bash) when the backup
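A sketch of a fix: the command must exit 0 when no backup is in progress, since to the archiver any non-zero status means the WAL file was not archived. The paths and flag-file convention follow the docs example above; the function wrapper is mine, and `-i` is omitted since there is no tty to answer a prompt.

```shell
# archive_one WALPATH WALNAME -- candidate archive_command body.
# PostgreSQL would invoke it with %p and %f substituted for $1 and $2.
FLAG=${FLAG:-/var/lib/pgsql/backup_in_progress}
ARCHIVE=${ARCHIVE:-/var/lib/pgsql/archive}

archive_one() {
    if test -f "$FLAG"; then
        cp "$1" "$ARCHIVE/$2"   # really archiving: propagate cp's status
    else
        return 0                # no backup in progress: success, not failure
    fi
}
```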
On Fri, 2007-12-14 at 14:41 +, Simon Riggs wrote:
> On Thu, 2007-12-13 at 22:23 -0800, Neil Conway wrote:
> so here's a patch.
minor correction
--
Simon Riggs
2ndQuadrant http://www.2ndQuadrant.com
Index: src/backend/executor/nodeLimit.c
On Thu, Dec 13, 2007 at 09:55:33AM -0300, Alvaro Herrera wrote:
> Alvaro Herrera wrote:
>
> > So _anything_ could be EINVAL. Including the several cases that
> > _dosmaperr treat as EACCES. So I'm afraid that for this experiment to
> > be successful, we would have to remove not only the EINVAL c
On Thu, 2007-12-13 at 22:23 -0800, Neil Conway wrote:
> On Thu, 2007-12-13 at 22:06 -0500, Tom Lane wrote:
> > I guess that on purely philosophical grounds, it's not an unreasonable
> > behavior. For example, "LIMIT n" means "output at most n tuples",
> > not "output exactly n tuples". So when it
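The behavior nodeLimit.c is described as having, silently clamping a negative LIMIT to zero instead of raising an error, amounts to this (a Python paraphrase for illustration, not the C code):

```python
def apply_limit(rows, limit):
    """'LIMIT n' means 'output at most n tuples': a negative count is
    silently treated as zero, mirroring nodeLimit.c's current behavior."""
    if limit < 0:
        limit = 0
    return rows[:limit]
```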
Hannu Krosing wrote:
until N fubbers used
..whatever a fubber is :-)
Nice typo!
Markus
Hello Hannu,
Hannu Krosing wrote:
(For parallelized queries, superuser privileges might appear wrong, but
I'm arguing that parallelizing the rights checking isn't worth the
trouble, so the initiating worker backend should do that and only
delegate safe jobs to helper backends. Or is that a ser
On a fine day, Tue, 2007-12-11 at 15:41, Neil Conway wrote:
> On Tue, 2007-12-11 at 10:53 -0800, Josh Berkus wrote:
> > Just so you don't lose sight of it, one of the biggest VLDB features we're
> > missing is fault-tolerant bulk load.
>
> I actually had to cook up a version of this for T
On a fine day, Thu, 2007-12-13 at 20:25, Heikki Linnakangas wrote:
...
> Hmm. That assumes that nothing else than the WAL replay will read
> pages into shared buffers. I guess that's true at the moment, but it
> doesn't seem impossible that something like Florian's read-only queries
> on
On a fine day, Fri, 2007-12-14 at 10:39, Markus Schiltknecht wrote:
> Hi,
>
> (For parallelized queries, superuser privileges might appear wrong, but
> I'm arguing that parallelizing the rights checking isn't worth the
> trouble, so the initiating worker backend should do that and only
On Fri, 2007-12-14 at 10:51 +0100, Zeugswetter Andreas ADI SD wrote:
> The problem is not writes but reads.
That's what I see.
--
Simon Riggs
2ndQuadrant http://www.2ndQuadrant.com
[Synopsis: VACUUM FULL ANALYZE goes out of memory on a very large
pg_catalog.pg_largeobject table.]
Simon Riggs wrote:
Can you run ANALYZE and then VACUUM VERBOSE, both on just
pg_largeobject, please? It will be useful to know whether they succeed
ANALYZE:
INFO: analyzing "pg_catalog.pg_larg
> > Note that even though the processor is 99% in wait state the drive is
> > only handling about 3 MB/s. That translates into a seek time of 2.2ms
> > which is actually pretty fast... But note that if this were a raid array
> > Postgres wouldn't be getting any better results. A Raid array wou
Hi,
Alvaro Herrera wrote:
Simon Riggs wrote:
ISTM it's just autovacuum launcher + Hot Standby mixed.
I don't think you need a launcher at all. Just get the postmaster to
start a configurable number of wal-replay processes (currently the
number is hardcoded to 1).
I also see similarity to w
On Thu, 2007-12-13 at 22:06 -0500, Tom Lane wrote:
> Hmm ... I don't recall much either. The code in nodeLimit.c just
> silently replaces a negative input value by zero. It'd certainly be
> possible to make it throw an error instead, but what the downsides of
> that might be aren't clear.
>
> I
"Tom Lane" <[EMAIL PROTECTED]> writes:
> "Jonah H. Harris" <[EMAIL PROTECTED]> writes:
>> Don't we have any similar usability cases in the system like this,
>> where negatives are not allowed only for the sake of it being an
>> insane setting? I'm tired, but I thought we did.
>
> Yeah, probably.