Heikki Linnakangas wrote:
> Peter Schuller wrote:
>> to have a slow background process (similar to normal non-full vacuums
> ...
> I think it's doable, if you take a copy of the tuple, and set the ctid
> pointer on the old one like an UPDATE, and wait until the old tuple is
> no longer visible to
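The tuple-move idea Heikki describes can already be approximated by hand today with a no-op UPDATE. A hedged sketch, assuming a hypothetical table `mytab` with columns `some_col` and `id`:

```sql
-- Force new row versions for the rows you want relocated (an arbitrary
-- subset here); each rewritten tuple is placed in free space tracked by
-- the FSM, typically earlier in the file.
UPDATE mytab SET some_col = some_col WHERE id BETWEEN 100000 AND 200000;
-- Once the old versions are no longer visible to any open transaction,
-- a plain (non-full) VACUUM can reclaim them and truncate empty pages
-- at the end of the table, without the exclusive lock VACUUM FULL takes.
VACUUM mytab;
```

This is exactly the "copy the tuple, wait for the old one to die" cycle, just driven manually instead of by a background process.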
Arctic Toucan <[EMAIL PROTECTED]> writes:
> -- Is there something magical about the hash aggregate estimate of 200 rows?
Yeah, it's the default :-(
> Is this a bug, or some subtlety of the Postgres query planner?
It's an, um, known deficiency --- the planner hasn't got any idea how to
construct
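For reference, 200 is the planner's fallback guess for the number of distinct values when it has no usable statistics. When the grouping is over a plain column (for expressions or function results, 8.2 has no statistics at all), raising that column's statistics target often repairs the estimate; table and column names here are placeholders:

```sql
-- Collect more detailed statistics for the column being aggregated.
ALTER TABLE fact ALTER COLUMN account_id SET STATISTICS 1000;
ANALYZE fact;
-- Re-check the estimated group count afterwards:
EXPLAIN SELECT account_id, count(*) FROM fact GROUP BY account_id;
```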
Peter Schuller wrote:
> I have only looked very very briefly at the PG code so I don't know
> how far fetched it is, but my thought was that it should be possible
> to have a slow background process (similar to normal non-full vacuums
> now) that would, instead of registering dead tuples in the FS
Hello,
I was wondering whether any thought has previously been given to
having a non-blocking "vacuum full", in the sense of space reclamation
and table compaction.
The motivation is that it is useful to be able to assume that
operations that span a table will *roughly* scale linearly with the
In keeping with some of the recent threads regarding the planner...
I have a fair-sized data warehouse in which I am trying to perform an
aggregation, but am getting OOM errors in Postgres (8.2.4).
I believe the reason for the OOM is that Postgres is attempting to do a hash
aggregation, but it has
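Two session-level workarounds (hedged, since the full query isn't shown) while the group-count estimate is wrong; 8.2's hash aggregate cannot spill to disk, so the planner has to be steered away from it:

```sql
-- Option 1: disable hash aggregation for this session, forcing a
-- sort-based GroupAggregate, which can spill to disk safely.
SET enable_hashagg = off;
-- Option 2: lower work_mem so the planner concludes the hash table
-- would not fit in memory and picks the sort-based plan on its own.
SET work_mem = '32MB';
```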
> > Hmm - why is it doing that?
>
> I'm betting that the OP's people.uid column is not an integer. Existing
> PG releases can't use hashed subplans for cross-data-type comparisons
> (8.3 will be a bit smarter).
*light bulb* Ahhh, that's it. So, I guess the solution is either
to cast the column
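The two fixes mentioned, sketched with hypothetical names (a `people.uid` column of type int2 compared against bigint values coming from a sequence):

```sql
-- Cast the column so both sides of the IN comparison share a type and
-- the subplan can be hashed:
SELECT * FROM people WHERE uid::int8 IN (SELECT nextval_result FROM candidates);
-- ...or fix the schema once and for all:
ALTER TABLE people ALTER COLUMN uid TYPE integer;
```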
On top of what Vivek said, you need to update your pg install. 7.4.2
had a few data eating bugs if I remember correctly. 7.4 branch is up
to 7.4.18, and those are a lot of bug fixes (2+ years) you're missing.
If one of those bugs eats your data, don't expect any sympathy.
-
On Sep 28, 2007, at 10:28 AM, Radhika S wrote:
20775 ?  S  0:00 postgres: abc myDB [local] idle in transaction
20776 ?  S  0:00 postgres: abc myDB [local] idle
17509 ?  S  0:06 postgres: abc myDB [local] VACUUM waiting
24656 ?  S  0:00
Tom Lane wrote:
Richard Huxton <[EMAIL PROTECTED]> writes:
Hmm - why is it doing that?
I'm betting that the OP's people.uid column is not an integer. Existing
PG releases can't use hashed subplans for cross-data-type comparisons
(8.3 will be a bit smarter).
Looked like an int2 to me (width=
Hi -
This has been happening more recently. Our database hangs after a VACUUM
and is unresponsive when we come in the next morning.
The vacuum job runs at 03:00 am daily.
The command is : /usr/local/pgsql/bin/vacuumdb --full -d DbName
Also, what exactly does "VACUUM waiting" mean? Is there a rea
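"VACUUM waiting" in the ps listing means the backend is blocked waiting for a lock: `vacuumdb --full` needs an ACCESS EXCLUSIVE lock on each table, and the "idle in transaction" session shown above can hold a conflicting lock indefinitely. A query along these lines against the standard pg_locks view shows who is waiting and on what:

```sql
-- List lock holders and waiters alongside the relation involved;
-- granted = false rows are the blocked requests (e.g. the VACUUM).
SELECT l.pid, l.mode, l.granted, c.relname
FROM pg_locks l
LEFT JOIN pg_class c ON c.oid = l.relation
ORDER BY l.granted, l.pid;
```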
Richard Huxton <[EMAIL PROTECTED]> writes:
> Hmm - why is it doing that?
I'm betting that the OP's people.uid column is not an integer. Existing
PG releases can't use hashed subplans for cross-data-type comparisons
(8.3 will be a bit smarter).
regards, tom lane
-
On 9/27/07, Kevin Kempter <[EMAIL PROTECTED]> wrote:
> Hi All;
>
> I'm preparing to fire up WAL archiving on 8 production servers. We will follow
> up with implementing a warm standby scenario.
>
> Does anyone have any thoughts per how to maximize performance, yet minimize
> the potential for data
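A minimal hedged sketch of the 8.2-era settings involved; the archive path is a placeholder, and a real archive_command should copy to storage that survives loss of the primary:

```
# postgresql.conf (8.2); /backup/wal is a placeholder path
archive_command = 'test ! -f /backup/wal/%f && cp %p /backup/wal/%f'
archive_timeout = 60        # force a segment switch at least once a minute,
                            # bounding the standby's data-loss window
checkpoint_segments = 32    # space checkpoints further apart to reduce
                            # WAL-related I/O spikes
```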
> Just an idea, but with the 8.3 concurrent scan support would it be
> possible to hang a more in-depth analyze over existing sequential
> scans. Then it would be a lower cost to have higher resolution in
> the statistics because the I/O component would be hidden.
The biggest problem with that is
On Thu, 2007-09-27 at 11:07 -0700, Ron Mayer wrote:
> Csaba Nagy wrote:
> >
> > Well, my problem was actually solved by raising the statistics target,
>
> Would it do more benefit than harm if postgres increased the
> default_statistics_target?
>
> I see a fair number of people (myself included)
Peter Koczan wrote:
Hello,
I have a weird performance issue with a query I'm testing. Basically,
I'm trying to port a function that generates user uids, and since
postgres offers a sequence generator function, I figure I'd take
advantage of that. Basically, I generate our uid range, filter out
t
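The approach described can be sketched like this (all names and the uid range are hypothetical):

```sql
-- A sequence bounded to the site's uid range, handing out one uid per call.
CREATE SEQUENCE uid_seq MINVALUE 10000 MAXVALUE 65534 NO CYCLE;
SELECT nextval('uid_seq');  -- returns a bigint; mind the cross-type
                            -- comparison issue discussed above
```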
15 matches