On Tue, 2004-11-30 at 19:32, Bruce Momjian wrote:
David Parker wrote:
I've been using log_min_duration_statement = 0 to get durations on all
SQL statements for the purposes of performance tuning, because this logs
the duration on the same line as the statement. My reading of this TODO
is
Simon Riggs wrote:
On Tue, 2004-11-30 at 19:32, Bruce Momjian wrote:
David Parker wrote:
I've been using log_min_duration_statement = 0 to get durations on all
SQL statements for the purposes of performance tuning, because this logs
the duration on the same line as the statement. My
is handy, however
- DAP
-Original Message-
From: Bruce Momjian [mailto:[EMAIL PROTECTED]]
Sent: Wednesday, December 01, 2004 11:18 AM
To: Simon Riggs
Cc: David Parker; [EMAIL PROTECTED]
Subject: Re: [HACKERS] Increasing the length of
Simon Riggs wrote:
On Tue, 2004-11-30 at 19:32
David Parker wrote:
How difficult would it be to make the combination
log_statement = all
log_duration = true
just put the duration on the same line as the statement? Then
log_min_duration_statement could be used to
do the desired log-at-threshold behavior, which certainly seems
On Wed, 2004-12-01 at 11:47, Bruce Momjian wrote:
David Parker wrote:
How difficult would it be to make the combination
log_statement = all
log_duration = true
just put the duration on the same line as the statement? Then
log_min_duration_statement could be used to
do the
On Wednesday 01 December 2004 03:38, Simon Riggs wrote:
On Tue, 2004-11-30 at 19:32, Bruce Momjian wrote:
David Parker wrote:
I've been using log_min_duration_statement = 0 to get durations on
all SQL statements for the purposes of performance tuning, because this
logs the duration on
Robert Treat wrote:
On Wed, 2004-12-01 at 11:47, Bruce Momjian wrote:
David Parker wrote:
How difficult would it be to make the combination
log_statement = all
log_duration = true
just put the duration on the same line as the statement? Then
Great idea. Added to TODO:
* Make log_min_duration_statement output when the duration is reached rather
than when the statement completes
This prints long queries while they are running, making troubleshooting
easier. Also, it eliminates the need for log_statement because it
would
but no total duration?
- DAP
-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED]] On Behalf Of Bruce Momjian
Sent: Tuesday, November 30, 2004 1:20 PM
To: Simon Riggs
Cc: Andrew Sullivan; [EMAIL PROTECTED]
Subject: Re: [HACKERS] Increasing the length of
Great idea. Added to TODO
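For reference, the logging settings being debated would look like this in postgresql.conf (a sketch; the threshold value is illustrative, and in 8.0 log_min_duration_statement is given in milliseconds):

```ini
# Log every statement with its run time on one line
# (0 = log all statements; -1 disables the feature).
log_min_duration_statement = 0

# The combination discussed above: statements and durations
# are both logged, but on separate lines.
log_statement = 'all'
log_duration = on
```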
David Parker wrote:
I've been using log_min_duration_statement = 0 to get durations on all
SQL statements for the purposes of performance tuning, because this logs
the duration on the same line as the statement. My reading of this TODO
is that now log_min_duration_statement = 0 would give me
Could we come up with a compromise then? I guess maybe another setting
that says log any query when it hits more than x amount of time. (I'd
also argue you should get a NOTICE or WARNING when this exceeds the
query timeout time).
A perhaps more friendly alternative would be a way to query to get
Jim C. Nasby wrote:
Could we come up with a compromise then? I guess maybe another setting
that says log any query when it hits more than x amount of time. (I'd
also argue you should get a NOTICE or WARNING when this exceeds the
query timeout time).
A perhaps more friendly alternative
On Wed, 2004-11-10 at 22:51, Andrew Sullivan wrote:
On Wed, Nov 10, 2004 at 09:52:17PM +, Simon Riggs wrote:
On Wed, 2004-11-10 at 21:48, Richard Huxton wrote:
Isn't that:
log_min_duration_statement (integer)
That gets written when a statement completes, not during
On Wed, 2004-11-10 at 17:57, Andrew Sullivan wrote:
On Wed, Nov 10, 2004 at 05:51:01PM -0500, Andrew Sullivan wrote:
log_statement_after_min_duration (integer) -- which did what Simon
wants.
Uh, well, not what Simon wants, of course, but which gave us a useful
capability anyway. I
On Thu, Nov 11, 2004 at 10:20:43AM -0500, Robert Treat wrote:
I believe the geeky non-helpful answer is to attach to the process with
gdb and do p debug_query_string which I believe will show you said long
running query.
Yes, this will work, I've used it. But of course, you don't
actually
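The gdb recipe above, spelled out as a session sketch (the PID and printed value are placeholders; this assumes the backend binary was built with symbols so that debug_query_string is visible):

```
$ gdb -p <backend-pid>
(gdb) print debug_query_string
$1 = 0x... "<the currently running query>"
(gdb) detach
(gdb) quit
```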
On 11/8/2004 5:32 PM, Tom Lane wrote:
Another relevant question is why you are expecting to get this
information through pgstats and not by looking in the postmaster log.
The pgstats were originally designed to give hints for tuning. That's
why they cover cache hits vs. misses per table and
On Wed, 2004-11-10 at 20:25, Jan Wieck wrote:
On 11/8/2004 5:32 PM, Tom Lane wrote:
Another relevant question is why you are expecting to get this
information through pgstats and not by looking in the postmaster log.
The pgstats were originally designed to give hints for tuning. That's
Simon Riggs wrote:
On Mon, 2004-11-08 at 22:32, Tom Lane wrote:
Another relevant question is why you are expecting to get this
information through pgstats and not by looking in the postmaster log.
This is only available if you log all queries, which isn't normally done
while you are in
Hi Richard
On Wed, 2004-11-10 at 21:48, Richard Huxton wrote:
Simon Riggs wrote:
On Mon, 2004-11-08 at 22:32, Tom Lane wrote:
Another relevant question is why you are expecting to get this
information through pgstats and not by looking in the postmaster log.
This is only available
On Wed, Nov 10, 2004 at 09:52:17PM +, Simon Riggs wrote:
On Wed, 2004-11-10 at 21:48, Richard Huxton wrote:
Isn't that:
log_min_duration_statement (integer)
That gets written when a statement completes, not during execution.
I've been following this thread, and I was thinking the
On Wed, Nov 10, 2004 at 05:51:01PM -0500, Andrew Sullivan wrote:
log_statement_after_min_duration (integer) -- which did what Simon
wants.
Uh, well, not what Simon wants, of course, but which gave us a useful
capability anyway. I agree that the full-bore profiling for the DBA
would be awful
Simon Riggs [EMAIL PROTECTED] writes:
My intention was towards a data warehouse situation, and my comments are
only relevant in that context. Possibly 25+% of the user base use this
style of processing. In that case, I expect queries to run for minutes
or hours.
I come from the opposite
Josh Berkus wrote:
Tom,
Another relevant question is why you are expecting to get this
information through pgstats and not by looking in the postmaster log.
I don't know about you, but I don't have any tools that are designed to
cope nicely with looking at tables that have columns that might be
On Mon, 2004-11-08 at 22:32, Tom Lane wrote:
Another relevant question is why you are expecting to get this
information through pgstats and not by looking in the postmaster log.
This is only available if you log all queries, which isn't normally done
while you are in production. When you hit a
Tom Lane wrote:
Greg Stark [EMAIL PROTECTED] writes:
What do you think is broken about fragmented UDP packets?
Fragmentation happens at the IP protocol level, the kernel is responsible for
reassembly. There's nothing for the application level to handle.
And, by the same token, on platforms
Tom Lane [EMAIL PROTECTED] writes:
Greg Stark [EMAIL PROTECTED] writes:
What do you think is broken about fragmented UDP packets?
Fragmentation happens at the IP protocol level, the kernel is responsible
for
reassembly. There's nothing for the application level to handle.
And, by
Greg Stark [EMAIL PROTECTED] writes:
The pgstat messages are indeed fixed size.
No, there's a fixed maximum size.
regards, tom lane
Tom Lane [EMAIL PROTECTED] writes:
Greg Stark [EMAIL PROTECTED] writes:
The pgstat messages are indeed fixed size.
No, there's a fixed maximum size.
Hm. *rereads source*
It's true, pgstat_report_activity only sends the actual size of the query, not
the full payload size.
The only problem
Greg Stark [EMAIL PROTECTED] writes:
The only problem I see in raising the size of PGSTAT_MSG_PAYLOAD is that it
also governs the size of PGSTAT_NUM_TABPURGE and PGSTAT_NUM_TABENTRIES.
There's no need to grow those arrays and risk losing them. But these message
sizes could just be left based
Greg Stark [EMAIL PROTECTED] writes:
That would have no downside and only benefits. The worst case is that a
machine that didn't handle UDP fragment reassembly would drop the packets that
postgres is currently dropping preemptively.
Huh? We're not dropping the query *entirely*, which is what
Tom Lane wrote:
It's really a
performance issue: do you want to pay the penalty associated with
reassembling messages that exceed the loopback MTU [...]
BTW, the loopback MTU here is quite large:
[EMAIL PROTECTED]:~$ /sbin/ifconfig lo | grep MTU
UP LOOPBACK RUNNING MTU:16436 Metric:1
Tom,
Another relevant question is why you are expecting to get this
information through pgstats and not by looking in the postmaster log.
I don't know about you, but I don't have any tools that are designed to
cope nicely with looking at tables that have columns that might be many
K wide.
Tom Lane [EMAIL PROTECTED] writes:
I'd vote in favour of relaxing the limit entirely, as Sean suggests.
The choice is not between limit and no limit, it is between
limit and broken.
What do you think is broken about fragmented UDP packets?
Once Upon a Time fragmented UDP packets
On Sun, 2004-11-07 at 20:59, Greg Stark wrote:
Tom Lane [EMAIL PROTECTED] writes:
I'd vote in favour of relaxing the limit entirely, as Sean suggests.
The choice is not between limit and no limit, it is between
limit and broken.
What do you think is broken about fragmented UDP
Simon Riggs [EMAIL PROTECTED] writes:
On Sun, 2004-11-07 at 20:59, Greg Stark wrote:
What do you think is broken about fragmented UDP packets?
...probably that pgstat.c doesn't handle them at all, so if they occur
then you've lost data. Until that is fixed, we have a limit.
Greg Stark [EMAIL PROTECTED] writes:
What do you think is broken about fragmented UDP packets?
Fragmentation happens at the IP protocol level, the kernel is responsible for
reassembly. There's nothing for the application level to handle.
And, by the same token, on platforms where it is broken
Sean Chittenden wrote:
Is there any reason the length of
pg_catalog.pg_stat_activity.current_query is capped at 255 characters?
Why can't it be a pointer to the currently running query?
Seems silly to me and is a PITA to try and use as a debugging tool only
to find out that the query in
Sean Chittenden [EMAIL PROTECTED] writes:
Is there any reason the length of
pg_catalog.pg_stat_activity.current_query is capped at 255 characters?
The reason for a limit is to avoid fragmentation of UDP messages.
I believe we've set it at 1K for 8.0, though, and if you are on
a platform with
Is there any reason the length of
pg_catalog.pg_stat_activity.current_query is capped at 255 characters?
The reason for a limit is to avoid fragmentation of UDP messages.
I believe we've set it at 1K for 8.0, though, and if you are on
a platform with a higher message size limit you could raise it
Sean Chittenden [EMAIL PROTECTED] writes:
I'm confused... UDP as in the UDP/IP? RPC caps UDP messages at 8K and
NFS over UDP often runs at 32K... where is UDP used in the backend?
pgstat messages travel over UDP/IP.
regards, tom lane
I'm confused... UDP as in the UDP/IP? RPC caps UDP messages at 8K and
NFS over UDP often runs at 32K... where is UDP used in the backend?
pgstat messages travel over UDP/IP.
Over the loopback interface, right? Then why worry about
fragmentation? This seems like premature
Sean Chittenden [EMAIL PROTECTED] writes:
Would you be open to increasing this further after the 8.0 release?
Nope.
I haven't heard of anyone complaining about dropped/fragmented pgstat
messages. :) -sc
That's because we keep 'em small enough to not fragment.
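The size arithmetic behind "small enough to not fragment" can be sketched as follows. The constants are illustrative assumptions taken from the thread: the loopback MTU reported by ifconfig above, a common Ethernet MTU, and IPv4/UDP header overhead; the actual limit lives in PGSTAT_MSG_PAYLOAD and on your interface.

```python
# Sketch: when does a single UDP datagram get fragmented at the IP layer?
LOOPBACK_MTU = 16436   # bytes, from `/sbin/ifconfig lo` in the thread
ETHERNET_MTU = 1500    # common non-loopback MTU
IP_UDP_HEADERS = 28    # 20-byte IPv4 header + 8-byte UDP header

def fragments(payload: int, mtu: int) -> bool:
    """True if a UDP datagram with this payload exceeds the MTU."""
    return payload + IP_UDP_HEADERS > mtu

print(fragments(1024, LOOPBACK_MTU))   # False: a 1K pgstat message fits easily
print(fragments(8192, ETHERNET_MTU))   # True: an 8K message would fragment here
print(fragments(8192, LOOPBACK_MTU))   # False: but not over loopback
```

This is why the 1K cap chosen for 8.0 is conservative even for non-loopback links, while the 8K sizes mentioned for RPC would fragment anywhere the MTU is 1500.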
On Sat, 2004-11-06 at 19:44, Tom Lane wrote:
Sean Chittenden [EMAIL PROTECTED] writes:
Is there any reason the length of
pg_catalog.pg_stat_activity.current_query is capped at 255 characters?
The reason for a limit is to avoid fragmentation of UDP messages.
I believe we've set it at 1K
Simon Riggs [EMAIL PROTECTED] writes:
Not having the whole query is painful. Raising it to 1K doesn't get
round the fact that it's the longer queries that tend to be the more
painful ones, and so they are the ones you want to trap in full and
EXPLAIN, so you can find out if they are *ever*
Sean Chittenden [EMAIL PROTECTED] writes:
Having a 1K query isn't uncommon on some of the stuff I work on, an 8K
query...
that's a tad different and would stick out like a sore thumb.
Just as a point of reference, I've been processing my logs to see how large my
queries work out to. They