Mark Hills writes:
> I hadn't realised that sub-queries restrict the planner so much, although
> at some point I've picked up a habit of avoiding them, presumably for this
> reason.
> If you have time to explain, I'd be interested in a suggestion for any
> change to the planner that could make
On Sun, 26 Feb 2012, Tom Lane wrote:
> Mark Hills writes:
> > What is it that prevents the index condition from being used in earlier
> > parts of the query? Only where a single condition is present is it used
> > below the final join.
>
> "WHERE job.jid IN (1234)" is simplified to "WHERE job.
You might also consider increasing the wal_buffers value if it's still the
default (64KB).
BR
Yup, there is. The parameter checkpoint_segments does not require a restart
of the server, just a reload :)
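A minimal sketch of such a change (the values are only illustrative, not recommendations; checkpoint_segments is picked up on reload, but wal_buffers itself still needs a full restart):

-- in postgresql.conf:
--   wal_buffers = 16MB
--   checkpoint_segments = 32
-- then trigger a reload from psql:
SELECT pg_reload_conf();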
On Fri, Feb 24, 2012 at 12:54 AM, Alessandro Gagliardi wrote:
> On Thu, Feb 23, 2012 at 1:11 PM, Peter van Hardenberg wrote:
>
>> My hunch is still that your issue is lock contention.
>>
>>
On Mon, Feb 27, 2012 at 2:13 PM, Samuel Gendler wrote:
>
>
> On Mon, Feb 27, 2012 at 6:59 AM, Reuven M. Lerner wrote:
>>
>>
>> So for now, we'll just try to DELETE faster than we INSERT, and combined
>> with autovacuum, I'm hoping that this crisis will be averted. That said,
>> the current state of affairs with these machines is pretty fragile, and I
>> think that
On Mon, Feb 27, 2012 at 6:59 AM, Reuven M. Lerner wrote:
>
> So for now, we'll just try to DELETE faster than we INSERT, and combined
> with autovacuum, I'm hoping that this crisis will be averted. That said,
> the current state of affairs with these machines is pretty fragile, and I
> think that
On Mon, Feb 27, 2012 at 12:01 PM, Shaun Thomas wrote:
>
> Like I said, this will stop his tables from growing further so long as he
> keeps his maintenance functions running regularly from now on, but the
> existing rows he's trying to delete will never go away until he runs a
> CLUSTER or some ot
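A sketch of the kind of table rewrite being referred to (table and index names are hypothetical); both forms rewrite the table and return the reclaimed space to the operating system, unlike plain VACUUM:

CLUSTER big_table USING big_table_pkey;
-- or, without relying on an index ordering:
VACUUM FULL big_table;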
Cherio writes:
> This statement takes over 6 times longer than a similar statement against
> the same table except the join is done by an integer field, e.g.
Could we see EXPLAIN ANALYZE data for both cases?
How are you representing the UUIDs, exactly (ie what's the column data
type)?
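For reference, a sketch of how both pieces of information could be gathered (the statement is the one from the thread; since EXPLAIN ANALYZE really executes the UPDATE, it is wrapped in a transaction and rolled back):

BEGIN;
EXPLAIN (ANALYZE, BUFFERS)
UPDATE dst
SET data_field = src.data_field
FROM src
WHERE dst.uuid_field = src.uuid_field;
ROLLBACK;

-- and the declared column types:
SELECT table_name, column_name, data_type
FROM information_schema.columns
WHERE column_name = 'uuid_field';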
On 02/27/2012 08:59 AM, Reuven M. Lerner wrote:
From what I understand, the issue isn't one of current disk space,
but rather of how quickly the disk space is being used up.
Noted. Just keep in mind that dead rows are not free. In the case of
sequential scans, the rows still have to be read from disk
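A quick way to see how much dead-row baggage each table is carrying (this uses only the standard statistics views, nothing extra):

SELECT relname, n_live_tup, n_dead_tup
FROM pg_stat_user_tables
ORDER BY n_dead_tup DESC
LIMIT 10;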
On 02/27/2012 08:53 AM, Andrew Dunstan wrote:
If he has autovacuum on he could well be just fine with his proposed
strategy. Or he could have tables partitioned by time and do the delete
by just dropping partitions. There are numerous ways he could get this to
work.
He isn't using partitions, though.
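For completeness, a sketch of what deleting by dropping partitions looks like with inheritance-based partitioning (all names are hypothetical):

-- removing a month of data is a quick metadata operation, not a row-by-row DELETE:
DROP TABLE measurements_2011_12;
-- or detach the child from its parent first if the data should be kept around:
ALTER TABLE measurements_2011_12 NO INHERIT measurements;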
Hi, Shaun. You wrote:
In the end, it was agreed that we could execute the deletes over
time, deleting items in the background, or in parallel with the
application's work. After all, if the disk is filling up at the rate
of 2 GB/day, then so long as we delete 4 GB/day (which is pretty easy
to do
On 02/27/2012 09:45 AM, Shaun Thomas wrote:
On 02/27/2012 02:08 AM, Reuven M. Lerner wrote:
In the end, it was agreed that we could execute the deletes over
time, deleting items in the background, or in parallel with the
application's work. After all, if the disk is filling up at the rate
of
On Fri, Feb 24, 2012 at 4:46 PM, Cherio wrote:
> We are experiencing an unusual slowdown when using a UUID field in a JOIN
> while updating a table. The SQL looks like this:
>
> UPDATE dst
> SET data_field = src.data_field
> FROM src
> WHERE dst.uuid_field = src.uuid_field;
>
> This statement takes over 6
On 02/27/2012 02:08 AM, Reuven M. Lerner wrote:
In the end, it was agreed that we could execute the deletes over
time, deleting items in the background, or in parallel with the
application's work. After all, if the disk is filling up at the rate
of 2 GB/day, then so long as we delete 4 GB/day (which is pretty easy to do
I have a question.
Your data is growing by 1 GB per day.
Can we use another disk or partition to keep archiving data?
I mean, does PostgreSQL support a layering system for archived data?
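One built-in mechanism for spreading data across disks is tablespaces; a sketch (the path, table, and tablespace names are hypothetical):

CREATE TABLESPACE archive_space LOCATION '/mnt/disk2/pg_archive';
ALTER TABLE old_events SET TABLESPACE archive_space;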
On Sunday, 26 February 2012 at 01:16:08, Stefan Keller wrote:
> Hi,
>
> 2011/10/24 Stephen Frost wrote
>
> > Now, we've also been discussing ways to have PG automatically
> > re-populate shared buffers and possibly OS cache based on what was in
> > memory at the time of the last shut-down, but I
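Until something like that exists, a crude manual warm-up after a restart is simply to read the hot tables once (the table name is hypothetical); a full sequential read pulls the data back through the OS cache:

SELECT count(*) FROM hot_table;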
We are experiencing an unusual slowdown when using a UUID field in a JOIN
while updating a table. The SQL looks like this:
UPDATE dst
SET data_field = src.data_field
FROM src
WHERE dst.uuid_field = src.uuid_field;
This statement takes over 6 times longer than a similar statement against
the same table except the join is done by an integer field, e.g.
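The integer-keyed comparison would presumably look something like the following (the column name is hypothetical, since the original example is cut off):

-- hypothetical integer variant of the statement above
UPDATE dst
SET data_field = src.data_field
FROM src
WHERE dst.int_field = src.int_field;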
There are many approaches to running PostgreSQL in memory.
The quick and easy way is to make a slave pgsql run on a persistent RAM
filesystem, where the slave is part of a master/slave replication cluster.
The fstab entry and the script that make the RAM filesystem persistent are below:
Setup:
First, create a mountpoint for the disk:
Hi, everyone. I wanted to thank you again for your help on the huge
delete problem that I was experiencing.
After a lot of trial and error, we finally came to the conclusion that
deleting this much data in the time frame that they need, on
underpowered hardware that is shared with an application