> Something else worth considering is not using the normal
> catalog methods
> for storing information about temp tables, but hacking that together
> would probably be a rather large task.
But the timings suggest that it cannot be the catalogs in the worst
case he showed.
> 0.101 ms BEGIN
> 1.
On Wed, 2006-05-10 at 17:10 -0500, Jim C. Nasby wrote:
> On Thu, May 04, 2006 at 04:45:57PM +0200, Mario Splivalo wrote:
> Well, here's the problem...
>
> > -> Nested Loop (cost=0.00..176144.30 rows=57925 width=26)
> > (actual time=1074.984..992536.243 rows=57925 loops=1)
> >
On Wed, May 17, 2006 at 01:50:22PM -0400, Chris Mckenzie wrote:
> Hi.
>
> I'm trying to plan for a performance test session where a large database is
> subject to regular hits from my application while both regular and full
> database maintenance is being performed. The idea is to gain a better idea
On Wed, May 17, 2006 at 08:54:52AM -0700, Craig A. James wrote:
> Here's a "corner case" that might interest someone. It tripped up one of
> our programmers.
>
> We have a table with > 10 million rows. The ID column is indexed, the
> table has been vacuum/analyzed. Compare these two queries:
On Tue, May 16, 2006 at 07:20:12PM -0700, Craig A. James wrote:
> >Why I want to use offset and limit is for me to create a threaded
> >application so that they will not get the same results.
>
> In order to return rows 1 to 15000, it must select all rows from zero
> to 15000 and then discard
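The cost being described can be sketched with a small, self-contained experiment. This uses Python's sqlite3 as a stand-in engine and a hypothetical `tbl` table (both assumptions, not from the thread); the principle carries over to PostgreSQL. An OFFSET page and a keyset (`WHERE id > last_seen`) page return the same rows, but the keyset form lets the engine seek via the index instead of producing and discarding every skipped row:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tbl (id INTEGER PRIMARY KEY, payload TEXT)")
conn.executemany("INSERT INTO tbl (id, payload) VALUES (?, ?)",
                 [(i, f"row-{i}") for i in range(1, 50001)])

# OFFSET pagination: the engine still has to walk past the skipped rows.
page_offset = conn.execute(
    "SELECT id FROM tbl ORDER BY id LIMIT 100 OFFSET 15000").fetchall()

# Keyset pagination: seek straight to the last id seen via the primary key.
last_seen = 15000
page_keyset = conn.execute(
    "SELECT id FROM tbl WHERE id > ? ORDER BY id LIMIT 100",
    (last_seen,)).fetchall()

# Both strategies return the same page of rows.
assert page_offset == page_keyset
```

This also suggests a way around the "threads must not get the same results" goal: hand each worker a disjoint id range instead of an OFFSET.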
Tom Lane wrote:
There is not anything in there that considers whether the table's
physical order is so nonrandom that the search will take much longer
than it would given uniform distribution. It might be possible to do
something with the correlation statistic in simple cases ...
In this case,
Bruno Wolff III <[EMAIL PROTECTED]> writes:
> I suspect it wasn't intended to be a full table scan, but rather a sequential
> scan until it found a matching row. If the data in the table is ordered by
> id, this strategy may not work out well, whereas if the data is randomly
> ordered, it would
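Bruno's point can be made concrete with a toy simulation (names and the 1M-row/1001-match sizes below are illustrative choices, not from the thread): a scan-until-first-match plan is cheap when matching rows are scattered through the heap, and catastrophic when the physical order pushes every match to the end.

```python
import random

def rows_scanned_until_match(ids, matches):
    """Walk the rows in physical storage order; stop at the first match (LIMIT 1)."""
    for n, i in enumerate(ids, start=1):
        if matches(i):
            return n
    return len(ids)

N = 1_000_000
threshold = N - 1000                       # WHERE id >= threshold: 1001 matches
pred = lambda i: i >= threshold

ordered = list(range(1, N + 1))            # heap happens to be sorted by id
random.seed(0)
shuffled = ordered[:]
random.shuffle(shuffled)                   # heap in random physical order

cost_ordered = rows_scanned_until_match(ordered, pred)    # matches sit at the end
cost_shuffled = rows_scanned_until_match(shuffled, pred)  # a match turns up early
```

With the heap sorted by id, the scan reads 999,000 rows before the first match; with a random heap it typically finds one within the first few thousand.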
On Wed, 2006-05-17 at 08:54 -0700, Craig A. James wrote:
> Here's a "corner case" that might interest someone. It tripped up one of our
> programmers.
>
> We have a table with > 10 million rows. The ID column is indexed, the table
> has been vacuum/analyzed. Compare these two queries:
>
>
Title: Performance/Maintenance test result collection
Hi.
I'm trying to plan for a performance test session where a large database is subject to regular hits from my application while both regular and full database maintenance is being performed. The idea is to gain a better idea on the impact
Please don't reply to previous messages to start new threads. This makes it
harder to find stuff in the archives and may keep people from noticing your
message.
On Wed, May 17, 2006 at 08:54:52 -0700,
"Craig A. James" <[EMAIL PROTECTED]> wrote:
> Here's a "corner case" that might interest someone
On 17 May 2006, at 16:21, Ruben Rubio Rey wrote:
I have a web page that executes several SQLs.
So, I would like to know which one of those SQLs consumes more CPU.
For example,
I have SQL1 that is executed in 1.2 secs and a SQL2 that is
executed in 200 ms.
But SQL2 is executed 25 times and
Here's a "corner case" that might interest someone. It tripped up one of our
programmers.
We have a table with > 10 million rows. The ID column is indexed, the table
has been vacuum/analyzed. Compare these two queries:
select * from tbl where id >= 1000 limit 1;
select * from tbl wh
On Tue, May 16, 2006 at 07:08:51PM -0700, David Wheeler wrote:
> On May 16, 2006, at 18:29, Christopher Kings-Lynne wrote:
>
> >>Yes, but there are definitely programming cases where memoization/
> >>caching definitely helps. And it's easy to tell for a given
> >>function whether or not it real
Hi,
I have a web page that executes several SQLs.
So, I would like to know which one of those SQLs consumes more CPU.
For example,
I have SQL1 that is executed in 1.2 secs and a SQL2 that is executed in
200 ms.
But SQL2 is executed 25 times and SQL1 is executed 1 time, so really
SQL2 consumes
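Weighting each statement by its call count makes the comparison concrete (a trivial arithmetic sketch; the names and figures are the ones given above):

```python
# Per-call latency alone is misleading: total cost = latency x call count.
queries = {
    "SQL1": {"ms_per_call": 1200, "calls": 1},
    "SQL2": {"ms_per_call": 200, "calls": 25},
}

total_ms = {name: q["ms_per_call"] * q["calls"] for name, q in queries.items()}
# SQL2 accounts for 5000 ms against SQL1's 1200 ms, despite being
# six times faster per call.
```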
I have seen MemoryContextSwitchTo taking time before. However, I am not
sure why it would take so much CPU time?
Maybe that function does not work efficiently on Solaris?
Also I do not have much idea about slot_getattr.
Anybody else? (The other option is to use "collect -p $pid" experiments to
gather
We have a 4-core machine. However, these numbers were taken during a
benchmark, not a normal workload, so the output should show the system
working at full load ;)
So it's postgres doing a lot of work, and you already had a look at the
usrcall for that.
The benchmark just tries to do the que
Your usertime is way too high for a T2000...
If you have a 6-core machine with 24 threads, the iostat output reports
all 24 threads as busy.
The best way to debug this is to use
prstat -amL
(or, if you are dumping it to a file, prstat -amLc > prstat.txt)
and find the pids with high