n is totally empty, but it also contains all the indexes, which means 0 B of
table size and 140 kB of index size.
Does anyone have an idea why, in this case, the vacuum/analyze takes almost as
long on the parent table as on the biggest child table? (The other child
tables are smaller than f_tranaction_1, and their vacuum/analyze time is
much shorter.)
wkr,
Bert
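[For readers hitting the same symptom, one way to double-check the per-partition sizes and see where the maintenance time actually goes is a sketch like the following; the table names are taken or guessed from this thread, and the pattern in the LIKE clause is purely illustrative:]

```sql
-- Sizes of the parent vs. each child partition:
SELECT relname,
       pg_size_pretty(pg_relation_size(oid)) AS table_size,
       pg_size_pretty(pg_indexes_size(oid))  AS index_size
FROM pg_class
WHERE relname LIKE 'f_tranaction%'   -- pattern is a guess from the thread
  AND relkind = 'r';

-- Timing each table separately shows which relation eats the time:
VACUUM (ANALYZE, VERBOSE) f_tranaction_1;
```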
Yours,
Laurenz Albe
On Wed, Jan 23, 2013 at 1:38 PM, Albe Laurenz wrote:
> Bert wrote:
> > I wrote a script to make sure all tables are vacuumed and analyzed every
> evening. This works very
> > well.
>
> Autovacuum doesn't do the job for you?
>
> That would save
ments, that's the main reason we want to control it
with postgres too.
I am still wondering why the parent still needs to be analyzed if we
vacuum/analyze the children separately.
but thank you for giving me some clarification.
cheers,
Bert
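[A possible answer, hedged: on the PostgreSQL versions of this era, ANALYZE run on an inheritance parent gathers a second set of statistics covering the whole inheritance tree, and autovacuum never triggers that pass because the empty parent itself accrues no row changes. That would explain why a scripted ANALYZE of the parent is still useful even when the children are analyzed separately. The table name below is a guess from the thread:]

```sql
-- Collects both parent-only and whole-inheritance-tree statistics;
-- autovacuum will not do this for an empty parent on its own:
ANALYZE VERBOSE f_transaction;
```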
On Wed, Jan 23, 2013 at 3:40 PM, Rural Hunter wrote:
>
e knowledge too! :)
cheers,
Bert
On Wed, Jan 23, 2013 at 4:45 PM, Prashanth Ranjalkar <
prashant.ranjal...@gmail.com> wrote:
> Hi Bert,
>
> Vacuum analyze can be a time-consuming operation when it
> operates on a partitioned table in a parent and child relationship
weaking and tuning than the average DWH, because we
are handling both very complex queries and star schemas, and still a high
insert / update load, pretty much all the time.
But that's where the challenge is, I guess.
cheers,
Bert
--
Bert Desmet
0477/305361
)
Does anyone have an idea what happened exactly?
wkr,
Bert
Hi Tom,
Thanks for the tip! It was indeed the OOM killer.
Is it wise to disable the OOM killer? Or will the server really go down
without postgres being able to do anything about it?
Currently I have already lowered the shared_memory value a bit.
cheers,
Bert
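[For context, the PostgreSQL manual's advice on dedicated Linux servers is to disable memory overcommit rather than shrink everything, so the kernel refuses over-large allocations up front instead of OOM-killing a backend later. A minimal sketch of checking and lowering the shared memory setting (the value shown is purely illustrative; on these versions the change goes in postgresql.conf and needs a restart):]

```sql
-- Current value; a very large shared_buffers plus big work_mem is what
-- makes the postgres processes look huge to the OOM killer:
SHOW shared_buffers;

-- postgresql.conf (illustrative):
--   shared_buffers = 4GB    # leave headroom for work_mem and the OS cache
```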
On Tue, Apr 2, 2013 at 8:06 PM, Tom Lane
Hi all,
I have set vm.overcommit_memory to 1.
It's a pretty much dedicated machine anyway, except for some postgres
maintenance scripts I run in python / bash from the server.
We'll see what it gives.
cheers,
Bert
On Wed, Apr 3, 2013 at 8:45 AM, Bert wrote:
> Hi Tom,
>
't need a lot of connections. But I want to
process a lot of data fast.
cheers,
Bert
On Wed, Apr 3, 2013 at 10:10 AM, Bert wrote:
> Hi all,
>
> I have set vm.overcommit_memory to 1.
>
> It's a pretty much dedicated machine anyway, except for some postgres
> maintain
Aha, OK. This was a setting pg_tune suggested. But I can understand how that
is a bad idea.
wkr,
Bert
On Thu, Apr 4, 2013 at 8:17 AM, Tom Lane wrote:
> Bert writes:
> > These are my memory settings:
> > work_mem = 4GB
>
> > How is it possible that one connection (quer
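[The likely explanation for the truncated question above: work_mem is not a per-server cap but is allocated per sort or hash step, per running query. With work_mem = 4GB, ten sessions each running a query with four sort nodes can legitimately ask for 160 GB. A common pattern is a modest cluster-wide default plus per-session overrides for the heavy DWH queries; all values below are illustrative:]

```sql
-- postgresql.conf (illustrative default):
--   work_mem = 64MB

-- Raise it only for this session's big reporting query:
SET work_mem = '1GB';
```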
Hello Yuval,
I would suggest buying a third server if you want to do synchronous
replication.
However, if your network is broken, I would fix the network first. :)
cheers,
Bert
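[Background on why a third server helps: with a single synchronous standby, any standby or network outage blocks every commit on the primary until it returns. A second standby gives the primary somewhere else to confirm against. A minimal sketch of the relevant settings on the primary (PostgreSQL 9.1+; the standby name is illustrative):]

```sql
-- postgresql.conf on the primary (illustrative):
--   synchronous_standby_names = 'standby_b'

-- A session can fall back to local durability for non-critical work:
SET synchronous_commit = local;
```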
On Thu, Apr 18, 2013 at 6:13 PM, Scott Ribe wrote:
> On Apr 18, 2013, at 9:40 AM, Sofer, Yuval wrote:
>
Hello,
I was just wondering: Is it possible to restore a specific table from an
online backup?
Or is it only possible if we first restore the backup, replay all the logs,
and then take the table files?
wkr,
Bert Desmet
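[The short answer, hedged: with a plain base backup plus WAL there is no supported way to pull out one table's files, because they are only consistent together with the rest of the cluster. The usual route is a scratch-instance recovery followed by a single-table dump; paths, ports, and names below are purely illustrative:]

```sql
-- 1. Restore the base backup into a scratch data directory.
-- 2. Replay WAL, optionally stopping before the mistake:
--      recovery_target_time = '2013-05-21 09:00:00'
-- 3. Start the scratch cluster on a spare port and dump just the table:
--      pg_dump -p 5433 -t myschema.mytable mydb | psql -d mydb
```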
already did this 2 times), but this time it took over
8 hours, while the table did not grow significantly.
Does anyone have an idea what could have caused the vacuum to run this slowly?
wkr,
Bert
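[A few things worth checking when a vacuum suddenly slows down without table growth, sketched below: a large backlog of dead tuples (e.g. after a mass update), cost-based delay throttling, or a maintenance_work_mem too small to hold the dead-tuple list, which forces repeated index scan passes:]

```sql
-- Dead-tuple counts and last (auto)vacuum times per table:
SELECT relname, n_dead_tup, last_vacuum, last_autovacuum
FROM pg_stat_user_tables
ORDER BY n_dead_tup DESC
LIMIT 10;

SHOW vacuum_cost_delay;      -- non-zero values throttle vacuum I/O
SHOW maintenance_work_mem;   -- too small forces multiple index passes
```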