[Greg Smith - Fri at 12:53:55AM -0400]
> Munin is a very interesting solution to this class of problem. They've
> managed to streamline the whole data collection process by layering clever
> Perl hacks three deep. It's like the anti-SNMP--just build the simplest
> possible interface that will
On Thu, 3 May 2007, Alexander Staubo wrote:
I have a bunch of plugin scripts for Munin that collect PostgreSQL
statistics. I have been considering tarring them up as a proper release
at some point.
Excellent plan. Pop out a tar file, trade good ideas with Tobias, have
some other people play
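For illustration only -- this is not Alexander's or Tobias's actual code -- a minimal Munin plugin along these lines, sketched in Python with an assumed connection string and field name, needs nothing more than a "config" mode and a value mode:

#!/usr/bin/env python
# Minimal sketch of a Munin plugin graphing the number of PostgreSQL
# backends from pg_stat_activity.  The DSN and the "backends" field name
# are illustrative assumptions, not taken from the plugins discussed above.
import sys
import psycopg2

DSN = "dbname=postgres user=postgres"   # assumed connection string

def emit_config():
    # Munin invokes the plugin with the single argument "config" to learn
    # how the graph should be labelled.
    print("graph_title PostgreSQL backends")
    print("graph_vlabel connections")
    print("graph_category postgres")
    print("backends.label backends")

def emit_values():
    conn = psycopg2.connect(DSN)
    cur = conn.cursor()
    cur.execute("SELECT count(*) FROM pg_stat_activity")
    print("backends.value %d" % cur.fetchone()[0])
    conn.close()

if __name__ == "__main__":
    if len(sys.argv) > 1 and sys.argv[1] == "config":
        emit_config()
    else:
        emit_values()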
On Thu, 3 May 2007, Josh Berkus wrote:
So any attempt to determine "how fast" a CPU is, even on a 1-5 scale,
requires matching against a database of regexes which would have to be
kept updated.
This comment, along with the subsequent commentary today going far astray
into CPU measurement lan
On 5/3/07, Fei Liu <[EMAIL PROTECTED]> wrote:
Hello, Andreas, I too am having exactly the same issue as you do.
Comparing my partitioned and plain table performance, I've found that
the plain tables perform about 25% faster than the partitioned tables. Using
'explain select ...', I see that constraint
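As a sketch of how one would typically check whether partition pruning is happening (the DSN, table and column names here are invented, not Fei Liu's schema), constraint_exclusion is off by default in 8.2 and has to be enabled before EXPLAIN will show only the matching children:

# Sketch: confirming that constraint exclusion prunes partitions.
# The DSN, parent table and column names are invented for illustration.
import psycopg2

conn = psycopg2.connect("dbname=measurements")   # assumed DSN
cur = conn.cursor()
# constraint_exclusion defaults to off in 8.2; without it the planner
# scans every child table under the partitioned parent.
cur.execute("SET constraint_exclusion = on")
cur.execute("""
    EXPLAIN SELECT * FROM measurement             -- assumed parent table
    WHERE ts >= '2007-04-01' AND ts < '2007-05-01'
""")
for (line,) in cur.fetchall():
    print(line)   # only the matching child partitions should appear here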
On Thu, 3 May 2007, Carlos Moreno wrote:
> error like this or even a hundred times this!! Most of the time
> you wouldn't, and definitely if the user is careful it would not
> happen --- but it *could* happen!!! (and when I say could, I
> really mean: trust me, I have actually seen it
That would be a valid argument if the extra precision came at a
considerable cost (well, or at whatever cost, considerable or not).
the cost I am seeing is the cost of portability (getting similarly
accurate info from all the different operating systems)
Fair enough --- as I mentioned, I w
On Thu, 3 May 2007, Carlos Moreno wrote:
I don't think it's that hard to get system time to a reasonable level
(if this config tuner needs to run for a min or two to generate
numbers that's acceptable, it's only run once)
but I don't think that the results are really that critical.
Still --- this does not provide a valid argument
On Thu, 3 May 2007, Carlos Moreno wrote:
> been just being naive) --- I can't remember the exact name, but I remember
> using (on some Linux flavor) an API call that fills a struct with data on the
> resource usage for the process, including CPU time; I assume measured with
> precision (that is, immune to issues of other applications runni
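The call being described sounds like getrusage(2); as a hedged sketch (the workload and names below are invented), the same struct is reachable from Python's standard resource module:

# Sketch: per-process CPU time via getrusage(2), exposed in Python by the
# standard "resource" module.  The busy_work() workload is a stand-in for
# whatever benchmark a config tuner would actually run.
import resource

def busy_work():
    total = 0
    for i in range(10**6):
        total += i * i
    return total

start = resource.getrusage(resource.RUSAGE_SELF)
busy_work()
end = resource.getrusage(resource.RUSAGE_SELF)

# ru_utime/ru_stime count only this process's CPU time, so other
# applications running on the box do not inflate the numbers the way
# wall-clock timing would.
print("user CPU:   %.4f s" % (end.ru_utime - start.ru_utime))
print("system CPU: %.4f s" % (end.ru_stime - start.ru_stime))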
On Thu, 3 May 2007, Carlos Moreno wrote:
> CPUs, 32/64bit, or clock speeds. So any attempt to determine "how fast"
> a CPU is, even on a 1-5 scale, requires matching against a database of
> regexes which would have to be kept updated.
>
> And let's not even get started on Windows.
I think the only sane way to try and find the cpu speed is
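The preview cuts off there, but on Linux the usual raw source is /proc/cpuinfo; a small sketch of pulling the model name and clock speed out of it also shows why the regex-database objection above applies, since the strings vary by vendor and kernel:

# Sketch: reading CPU model and nominal clock speed from /proc/cpuinfo.
# Mapping these strings onto any "how fast" scale still needs a maintained
# table of patterns, which is exactly the objection raised above.
def cpu_info(path="/proc/cpuinfo"):
    info = {}
    with open(path) as f:
        for line in f:
            if ":" not in line:
                continue
            key, _, value = line.partition(":")
            key, value = key.strip(), value.strip()
            if key in ("model name", "cpu MHz") and key not in info:
                info[key] = value
    return info

if __name__ == "__main__":
    for key, value in cpu_info().items():
        print("%s: %s" % (key, value))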
On Thu, 3 May 2007, Josh Berkus wrote:
Greg,
I'm not fooled--secretly you and your co-workers laugh at how easy this
is on Solaris and are perfectly happy with how difficult it is on Linux,
right?
Don't I wish. There's issues with getting CPU info on Solaris, too, if you
get off of Sun Hard
> --- Original Message ---
> From: Josh Berkus <[EMAIL PROTECTED]>
> To: pgsql-performance@postgresql.org
> Sent: 03/05/07, 20:21:55
> Subject: Re: [PERFORM] Feature Request --- was: PostgreSQL Performance Tuning
>
>
> And let's not even get started on Windows.
WMI is your friend.
/D
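As a sketch of the WMI route (Windows only, and it assumes the third-party Python "wmi" package rather than anything from this thread), the Win32_Processor class exposes the CPU name, clock speed and core count without any regex matching:

# Sketch: CPU details via WMI on Windows.  Requires the third-party "wmi"
# package; shown purely to illustrate the suggestion above.
import wmi

c = wmi.WMI()
for cpu in c.Win32_Processor():
    print("Name:          %s" % cpu.Name)
    print("MaxClockSpeed: %s MHz" % cpu.MaxClockSpeed)
    print("NumberOfCores: %s" % cpu.NumberOfCores)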
Greg,
> I'm not fooled--secretly you and your co-workers laugh at how easy this
> is on Solaris and are perfectly happy with how difficult it is on Linux,
> right?
Don't I wish. There's issues with getting CPU info on Solaris, too, if you
get off of Sun Hardware to generic white boxes. The bas
The more I think about this thread, the more I'm convinced of 2 things:
1= Suggesting initial config values is a fundamentally different
exercise than tuning a running DBMS.
This can be handled reasonably well by HW and OS snooping. OTOH,
detailed fine tuning of a running DBMS does not appear
On Thu, 2007-05-03 at 10:45 -0400, Greg Smith wrote:
> Today's survey is: just what are *you* doing to collect up the
> information about your system made available by the various pg_stat views?
> I have this hacked together script that dumps them into a file, imports
> them into another databa
[Alexander Staubo - Thu at 04:52:55PM +0200]
> I have been considering tarring them up as a proper release at some
> point. Anyone interested?
Yes.
Eventually I have my own collection as well:
db_activity - counts the number of (all, slow, very slow, stuck "idle in
transaction") queries in prog
On Thu, May 03, 2007 at 10:45:48AM -0400, Greg Smith wrote:
> Today's survey is: just what are *you* doing to collect up the
> information about your system made available by the various pg_stat views?
> I have this hacked together script that dumps them into a file, imports
> them into another
Andreas Haumer wrote:
Hi!
I'm currently experimenting with PostgreSQL 8.2.4 and table
partitioning in order to improve the performance of an
application I'm working on.
My application is about managing measurement values (lots of!)
I have one table
On 5/3/07, Greg Smith <[EMAIL PROTECTED]> wrote:
Today's survey is: just what are *you* doing to collect up the
information about your system made available by the various pg_stat views?
I have this hacked together script that dumps them into a file, imports
them into another database, and then
Today's survey is: just what are *you* doing to collect up the
information about your system made available by the various pg_stat views?
I have this hacked together script that dumps them into a file, imports
them into another database, and then queries against some of the more
interesting da
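As an illustration of the kind of collection script the survey is asking about (a generic sketch, not the script described above; both DSNs and the history table are assumptions), one snapshot pass might look like:

#!/usr/bin/env python
# Sketch: copy a few pg_stat_user_tables counters from a monitored database
# into a pre-created history table in a separate stats database, so deltas
# between snapshots can be queried later.
import psycopg2

src = psycopg2.connect("dbname=production")     # database being monitored
dst = psycopg2.connect("dbname=statsarchive")   # where snapshots accumulate

scur = src.cursor()
scur.execute("""
    SELECT relname, seq_scan, idx_scan, n_tup_ins, n_tup_upd, n_tup_del
    FROM pg_stat_user_tables
""")
rows = scur.fetchall()

# table_stats_history is assumed to already exist, with a captured_at
# timestamptz column defaulting to now() plus columns matching the SELECT.
dcur = dst.cursor()
dcur.executemany("""
    INSERT INTO table_stats_history
        (relname, seq_scan, idx_scan, n_tup_ins, n_tup_upd, n_tup_del)
    VALUES (%s, %s, %s, %s, %s, %s)
""", rows)
dst.commit()
src.close()
dst.close()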
Brian Herlihy <[EMAIL PROTECTED]> writes:
> The issue: the second query results in a lower cost estimate. I am wondering
> why the second query plan was not chosen for the first query.
8.1 is incapable of pushing indexable join conditions down below an Append.
Try 8.2.
"Brian Herlihy" <[EMAIL PROTECTED]> writes:
> There is a unique index mapping domains to domain_ids.
...
> The issue: the second query results in a lower cost estimate. I am wondering
> why the second query plan was not chosen for the first query.
Well the unique index you mentioned is critical t
Well, the traditional DBMS way of dealing with this sort of
summarization when the tables involved do not fit into RAM is to
create a "roll up" table or tables for the time period commonly
summarized over.
Since it looks like you've got a table with a row per hour, create
another that has a r
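A sketch of that roll-up step (table and column names here are invented, not taken from the thread): fold the per-hour rows into a per-day summary table so reports over long ranges never have to touch the big base table:

# Sketch: build one day's worth of roll-up rows from an hourly table.
# measurement_hourly / measurement_daily and their columns are assumptions.
import psycopg2

conn = psycopg2.connect("dbname=measurements")   # assumed DSN
cur = conn.cursor()
cur.execute("""
    INSERT INTO measurement_daily (day, sensor_id, n_samples, avg_value,
                                   min_value, max_value)
    SELECT date_trunc('day', ts), sensor_id,
           count(*), avg(value), min(value), max(value)
    FROM measurement_hourly
    WHERE ts >= %s AND ts < %s        -- roll up one day at a time
    GROUP BY 1, 2
""", ("2007-05-02", "2007-05-03"))
conn.commit()
conn.close()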