You could create a statement-level delete trigger on the relevant table,
then get it to snapshot all the contents of the pg_stat_activity view
(which will show all running queries) into some sort of log table.
If you look at the plpgsql documentation (part of the PostgreSQL manual
for your release)
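Roughly along these lines (an untested sketch; my_table and the log table
name are just placeholders):

    -- log table with the same columns as pg_stat_activity, plus a timestamp
    CREATE TABLE delete_activity_log AS
        SELECT now() AS logged_at, * FROM pg_stat_activity WHERE false;

    CREATE OR REPLACE FUNCTION log_activity_on_delete() RETURNS trigger AS $$
    BEGIN
        -- snapshot every running query at the moment the DELETE fires
        INSERT INTO delete_activity_log
            SELECT now(), * FROM pg_stat_activity;
        RETURN NULL;  -- ignored for an AFTER ... FOR EACH STATEMENT trigger
    END;
    $$ LANGUAGE plpgsql;

    CREATE TRIGGER log_deletes
        AFTER DELETE ON my_table
        FOR EACH STATEMENT
        EXECUTE PROCEDURE log_activity_on_delete();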
On 31/03/2010 08:41, Gnanakumar wrote:
> Hi,
>
>
>
> Even though I've enabled statistics collector in our server, it is not
> collecting statistics, and because of this autovacuum is also not running as
> expected.
>
>
>
> PostgreSQL version 8.2
>
>
>
> Parameters enabled related to
Tino,
I did try mirroring the disk using the 'dd' command, and it took me a long time.
In the end, postgres did not start up.
I am sure I could probably get postgres working, but I am not sure about the
integrity of the database.
Thank you very much for the input and offer
Renato
Renato Oliveira
Systems Admin
Greg,
Thank you very much for your input.
I agree with you and I do understand where you are coming from.
I do agree that, in order to transition without noticeable downtime, the
application would need to be built for that.
Which one works best: Bucardo, Slony or Londiste?
I have researched Sl
On Wed, Mar 31, 2010 at 1:43 AM, Renato Oliveira
wrote:
> Greg,
>
> Thank you very much for your input.
> I agree with you and I do understand where you are coming from.
>
> I do agree that in order to transition without a noticeable downtime the
> application would need to be built for that.
>
>
Hi Iñigo,
Thank you for your input, really appreciated.
I just had a thought: if I back up the full database with 'pg_dump', then restore it to my
new machine running PostgreSQL 8.4, which one of these programs would work best to do
the migration:
Slony, Bucardo or Londiste?
I would like to say that we did have
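The dump/restore part of that plan would be roughly as follows (host and
database names made up, and as I understand it the 8.4 pg_dump should be run
against the old server):

    # run from the new machine, using its 8.4 binaries
    pg_dump -Fc -h oldhost -U postgres -f mydb.dump mydb
    createdb mydb
    pg_restore -U postgres -d mydb mydb.dump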
Greg,
I am going to gather the figures about our database and I will email to the
list, if I am allowed to.
Number of tables, number of transactions per day etc.
Nothing on our DB servers is optimized hardware-wise; the DB is on the same
volume as the logs and the OS.
I know we have an IO problem
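For the figures themselves I am planning to run something along these lines
(untested):

    -- number of user tables
    SELECT count(*) FROM pg_stat_user_tables;

    -- total transactions since the statistics were last reset; sampling this
    -- twice, roughly a day apart, and subtracting gives transactions per day
    SELECT datname, xact_commit + xact_rollback AS transactions
    FROM pg_stat_database
    WHERE datname = current_database();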
On Tue, Mar 30, 2010 at 10:07:54PM -0400, Dai, Tino wrote:
> I'm not a dba. I'm a sysadmin by training. Is there some way to
> mirror the disks at the OS level? And then move it to the new
> machine. Just a thought, I don't know the exact steps. But if you
> are interested,
I'm also seeing a weird thing. If I do 'show stats_start_collector;' inside
psql, it says 'off'.
I've also restarted my server many times after making all the changes.
As per the 8.2 doc,
http://www.postgresql.org/docs/8.2/interactive/runtime-config-statistics.html,
stats_start_collector is on by default
On 31/03/2010 10:58, Gnanakumar wrote:
> I’m also seeing a weird thing. If I do ‘show stats_start_collector;’ inside
> psql, it says ‘off’.
>
> I’ve also restarted my server many times after making all the changes.
>
> As per 8.2 doc,
> http://www.postgresql.org/docs/8.2/interactive/runtime-c
OK. I'll try that way.
But, as per the 8.2 doc, by default it is ON. So if I comment it out with just '#'
in front of the parameter and restart PostgreSQL, this change will not be
reflected, which means it should work with the default value ON?
-Original Message-
From: Guillaume Lelarge [mailto:guilla.
On 31/03/2010 11:14, Gnanakumar wrote:
> OK. I'll try that way.
>
> But, as per the 8.2 doc, by default it is ON. So if I comment it out with just '#'
> in front of the parameter and restart PostgreSQL, this change will not be
> reflected, which means it should work with the default value ON?
Yes, you're right
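In postgresql.conf that would look something like this (a hypothetical
excerpt; a commented-out line simply falls back to the built-in default,
which on 8.2 is 'on' for stats_start_collector):

    #stats_start_collector = on   # commented out, so the default (on) applies
    stats_row_level = on          # autovacuum also needs this one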
Hi!
I'm searching for any good OLAP-like benchmark: I need some benchmark
with complex selects run on a 10-30 GB dataset, something like this:
http://www.percona.com/docs/wiki/benchmark:wikistat:start, but this is
only a draft and would only be released for MySQL.
I've found DBT-3 benchmark, but it l
Thanks to all who helped; I've eventually solved it. I set up a cron job to
monitor the table every minute to help me narrow down the time frame when
the event happened - the thought of scanning 20+ GB of log files for the 6-hour
period in which it happened again didn't fill me with joy. I also added
log_sta
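The once-a-minute check needs nothing fancier than something like this
(table, database and log-table names here are made up):

    -- log table the cron job writes into
    CREATE TABLE row_count_log (
        sampled_at  timestamptz,
        row_count   bigint
    );

    -- crontab entry, run every minute:
    -- * * * * * psql -d mydb -c "INSERT INTO row_count_log SELECT now(), count(*) FROM the_table"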
After restarting PostgreSQL, I can see the following parameters are set to
ON:
mydb=# show stats_start_collector;
 stats_start_collector
-----------------------
 on
(1 row)
mydb=# show stats_start_collector;
 stats_start_collector
-----------------------
 on
(1 row)
mydb=# show stats_block_level;
Rodger Donaldson wrote:
On Tue, March 30, 2010 06:09, Greg Smith wrote:
You answered your own question here. Ramiro is looking for suggestions
for how to scale up to >500 connections at once, and it's not that
likely virtualization can fill any useful role in that context.
That rather
Vladimir Rusinov wrote:
I'm searching for any good OLAP-like benchmark: I need some benchmark
with complex selects run on 10-30G dataset, something like this:
http://www.percona.com/docs/wiki/benchmark:wikistat:start, but this is
only a draft and would only be released for mysql.
I already i
On 31/03/2010 12:51, Gnanakumar wrote:
> After restarting PostgreSQL, I can see the following parameters are set to
> ON:
> mydb=# show stats_start_collector;
> stats_start_collector
> ---
> on
> (1 row)
>
> mydb=# show stats_start_collector;
> stats_start_collector
> ---
On Wed, 2010-03-31 at 09:22 +0100, Renato Oliveira wrote:
> Greg,
>
> I am going to gather the figures about our database and I will email to the
> list, if I am allowed to.
> Number of tables, number of transactions per day etc.
Absolutely allowed, and encouraged.
Also, do you have long runni
Quoting Greg Smith:
I've linked to everything I'm aware of at
http://wiki.postgresql.org/wiki/Category:Benchmarking and the TPC-H
page has probably the most relevant information for OLAP.
PostgreSQL doesn't do particularly well on OLAP benchmarks yet due
to how queries are limited by
'show stats_row_level' is also showing ON.
mydb=# show stats_row_level;
 stats_row_level
-----------------
 on
(1 row)
I also ran "select * from pg_stat_user_tables"; all columns are showing
either 0 or NULL values for all the tables in my database. I don't find any
values other than this for the en
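For reference, a narrower form of that check (column names as per 8.2) would
be something like:

    SELECT relname, seq_scan, n_tup_ins, n_tup_upd, n_tup_del,
           last_vacuum, last_autovacuum
    FROM pg_stat_user_tables
    ORDER BY relname;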
Suppose I have a 'stupid' database with just one table, like this:
CREATE TABLE messages (
    message_id uuid NOT NULL PRIMARY KEY,
    message_time_created timestamp with time zone NOT NULL,
    message_phone_number character varying NOT NULL,
    message_state type_some_state_enum
"Gnanakumar" writes:
>> But still, if I query after performing heavy DML operations, I'm always
>> getting zero, even though there are INSERT/UPDATE/DELETE.
Is the stats collector actually running according to "ps"? If not, look
in the postmaster log to see why it failed (problems creating the
c
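A quick way to check is something along the lines of:

    $ ps ax | grep 'stats collector'

If the collector is running, there should be an entry looking something like
'postgres: stats collector process'.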
I'm also seeing a weird thing. If I do 'show stats_start_collector;' inside
psql, it says 'off'.
I've also restarted my server many times after making all the changes.
As per the 8.2 doc,
http://www.postgresql.org/docs/8.2/interactive/runtime-config-statistics.html,
stats_start_collector is on by default
Renato Oliveira wrote:
I am going to gather the figures about our database and I will email to the
list, if I am allowed to.
Number of tables, number of transactions per day etc.
A quick snapshot from "vmstat 1" during a busy time is also very helpful.
--
Greg Smith 2ndQuadrant US Baltim
> -Original Message-
> From: Mario Splivalo [mailto:mario.spliv...@megafon.hr]
> Sent: Wednesday, March 31, 2010 10:20 AM
> To: pgsql-admin@postgresql.org
> Subject: Live sort-of-'warehousing' database how-to?
>
> Suppose I have 'stupid' database with just one tables, like this:
>
> CR
All,
How does one tell how much space is available (probably by table) after a
"vacuum verbose analyze" has completed? As I understand it, the emptied
space will still only be available to the table that originally housed it.
We are running PostgreSQL 8.3 in various Solaris and Linux environments
"Gnanakumar" writes:
> I'm also seeing a weird thing. If I do 'show stats_start_collector;' inside
> psql, it says 'off'.
Yeah, this is the expected state if the collector fails to start for
some reason. Look into the postmaster log for a startup-time message
indicating why; but again, I'll bet
"Naomi Walker" wrote:
> How does one tell how much space is available (probably by table),
> after a "vacuum verbose analyze" has completed.
Personally, I mostly pay attention to these lines:
N pages contain useful free space.
(where N is a large number).
We grep for large number
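For a single table, that is just something like:

    VACUUM VERBOSE ANALYZE my_table;
    -- in the INFO/DETAIL output, look for the line of the form:
    --   NNNN pages contain useful free space.

(my_table being whichever table you are interested in).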