Excellent!
You are right.
Thanks a lot.
Sabin
"Craig Ringer" wrote in message
news:4eb0a920.1010...@ringerc.id.au...
> On 11/01/2011 10:01 PM, Sabin Coanda wrote:
>> Hi there,
>>
>> I have the function:
>> CREATE OR REPLACE FUNCTION "Test"( ...
Hi there,
I have the function:
CREATE OR REPLACE FUNCTION "Test"( ... )
RETURNS SETOF record AS
$BODY$
BEGIN
RETURN QUERY
SELECT ...;
END;
$BODY$
LANGUAGE 'plpgsql' STABLE
The function call takes about 5 minutes to complete, but running its query
statement directly, after replacing the arguments w
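One common cause is that PL/pgSQL plans the query with parameter placeholders rather than with the literal values, so the plan inside the function can differ from the one you see after substituting the arguments by hand. A sketch of how to approximate the function's plan (the table, column, and parameter type below are hypothetical stand-ins for the real query):

```sql
-- Prepare the query with a parameter, roughly as PL/pgSQL does internally,
-- instead of substituting a literal value into the query text:
PREPARE fn_query(int4) AS
  SELECT * FROM "tbA" WHERE "PK_ID" > $1;

-- The plan of the prepared statement approximates what the function uses:
EXPLAIN EXECUTE fn_query(42);

DEALLOCATE fn_query;
```

If this plan differs from the one you get with a literal value, the parameterized plan is usually the slower of the two.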
Hi there,
I have a simple aggregate query: SELECT count("PK_ID") AS "b1" FROM "tbA"
WHERE "PK_ID" > "f1"( 'c1' ), which has the following execution plan:
"Aggregate (cost=2156915.42..2156915.43 rows=1 width=4)"
" -> Seq Scan on "tbA" (cost=0.00..2137634.36 rows=7712423 width=4)"
"Filt
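Whether the planner can consider an index scan on "PK_ID" here depends on the volatility of "f1": a VOLATILE function must be re-evaluated for every row, which forces a sequential scan, while a STABLE or IMMUTABLE one can be evaluated once per statement. A hedged sketch (the varchar signature is an assumption):

```sql
-- If "f1" returns the same value for the same argument within a statement,
-- declare it STABLE so the planner may evaluate it once and use an index:
ALTER FUNCTION "f1"(varchar) STABLE;

-- Compare against a hand-substituted literal to see the best achievable plan:
EXPLAIN SELECT count("PK_ID") FROM "tbA" WHERE "PK_ID" > 1234;
```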
I have just a function returning a cursor based on a single complex query.
When I check the execution plan of that query, it takes about 3 seconds. But
when it is used inside the function, it freezes.
This is the problem, and this is the reason I cannot imagine what is happening.
Also I tried to rec
Maybe other details about the source of the problem will help.
The problem occurred when I tried to optimize the specified function. It was
running in about 3 seconds, and I needed it to be faster. I made some changes
and ran the well-known "CREATE OR REPLACE FUNCTION ...". After that, my
function exec
Hi there,
I have a function which returns setof record based on a specific query.
I tried to check the execution plan of that query, so I wrote EXPLAIN ANALYZE
before my SELECT, ran it, and the result showed an actual time of about
5 seconds. But when I call my function after I
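EXPLAIN ANALYZE on the bare SELECT does not show what the function executes internally, because PL/pgSQL runs its own cached, parameterized plan. On servers that ship the auto_explain contrib module (PostgreSQL 8.4 and later), those inner plans can be logged; a sketch:

```sql
LOAD 'auto_explain';
SET auto_explain.log_min_duration = 0;       -- log the plan of every statement
SET auto_explain.log_nested_statements = on; -- include statements inside functions
SET auto_explain.log_analyze = on;           -- include actual row counts and times

-- Now call the function; the plans of its internal queries
-- appear in the server log.
```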
Hi there,
I use different functions returning setof record, and they are working well.
The problem is the performance when I use those functions in joins, for
instance:
SELECT *
FROM "Table1" t1
JOIN "Function1"( a1, a2, ... aN ) AS f1( ColA int4, ColB varchar,
I use postgresql-8.2-506.jdbc3.jar; maybe this helps.
Sabin
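On 8.2 the planner assumes a fixed row count (1000 by default) for every set-returning function, which can lead to poor join strategies. From PostgreSQL 8.3 on, the expected result size can be declared with a ROWS clause; a sketch with a hypothetical signature and body:

```sql
-- Hypothetical function expected to return about 50 rows per call:
CREATE OR REPLACE FUNCTION "Function1"(a1 int4)
RETURNS SETOF record AS
$BODY$
BEGIN
  RETURN QUERY SELECT t."ColA", t."ColB" FROM "Table2" t WHERE t."FK" = a1;
END;
$BODY$
LANGUAGE plpgsql STABLE
ROWS 50;  -- planner estimate instead of the default 1000
```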
--
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance
Hi Scott,
I think it would be nice to log the reasons why the planner chooses one
execution plan over another. This would avoid wasting time trying to deduce
the source of these decisions from the existing logs.
Is it possible?
TIA,
Sabin
Hi there,
I am still concerned about this problem, because there is a big difference
between the two cases, and I don't know how to identify the cause. Can
anybody help me, please?
TIA,
Sabin
>
> have you considered importing to a temporary 'holding' table with
> copy, then doing 'big' sql statements on it to check constraints, etc?
>
Yes, I considered it, but the problem is that the data is very tightly
related across different tables, and it is important to keep the import
order of each entit
> long running transactions can be evil. is there a reason why this has
> to run in a single transaction?
This single transaction is used to import new information into a database. I
need it because the database cannot be disconnected from the users, and the
whole new data set has to be applied consistently. T
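One pattern that keeps the final transaction short while preserving ordering and consistency: COPY each entity into a holding table first, validate in bulk, and only then apply everything, parents before children, in one brief transaction. All names below are hypothetical:

```sql
-- Stage 1 (outside the main transaction): bulk-load the raw data.
CREATE TEMP TABLE import_orders (LIKE "Orders" INCLUDING DEFAULTS);
COPY import_orders FROM '/tmp/orders.csv' WITH CSV;

-- Stage 2: validate in bulk, e.g. find rows whose parent row is missing.
SELECT i.*
FROM import_orders i
LEFT JOIN "Customers" c ON c."PK_ID" = i."FK_CustomerID"
WHERE c."PK_ID" IS NULL;

-- Stage 3: apply in one short transaction, respecting the entity order.
BEGIN;
INSERT INTO "Orders" SELECT * FROM import_orders;
COMMIT;
```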
news:[EMAIL PROTECTED]
> have you used VACUUM?
>
> --- On Fri, 7/18/08, Sabin Coanda <[EMAIL PROTECTED]> wrote:
>
>> From: Sabin Coanda <[EMAIL PROTECTED]>
>> Subject: [PERFORM] long transaction
>> To: pgsql-performance@postgresql.org
>> Date: Friday,
Hi there,
I have a script which includes 3 called functions within a single
transaction.
At the beginning, the functions run fast enough (about 60 ms each). Over
time, they begin to run slower and slower (at the end, about one call per
2 seconds).
I checked the functions that run slowly outside the s
Hi there,
I have an application accessing a postgres database, and I need to estimate
the following parameters:
- read / write ratio
- reads/second on typical load / peak load
- writes/second on typical load / peak load
Is there any available tool to achieve that?
TIA,
Sabin
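The cumulative statistics views can approximate these numbers without an external tool: sample the counters twice, a known interval apart, and divide the difference by the interval. The row-level counters below exist on 8.3 and later; older versions expose only block-level counters (blks_read, blks_hit):

```sql
-- Row-level read/write counters since the statistics were last reset;
-- sample twice and subtract to get per-second rates.
SELECT datname,
       tup_returned + tup_fetched               AS rows_read,
       tup_inserted + tup_updated + tup_deleted AS rows_written
FROM pg_stat_database
WHERE datname = current_database();
```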
Hi there,
I have a database with the lowest possible activity. I run VACUUM FULL and I
get the following log result:
Sorry for the previous incomplete post. I continue with the log:
NOTICE: relation "pg_shdepend" TID 11/1: DeleteTransactionInProgress 2657075 --- can't shrink relation
NOTICE: relation "pg_shdepend" TID 11/2: DeleteTransactionInProgress 2657075 --- can't shrink relation
.
NOTICE: relation
<[EMAIL PROTECTED]> wrote in message
news:[EMAIL PROTECTED]
> On Fri, 22 Jun 2007, Sabin Coanda wrote:
>
>> Instead of (or in addition to) configure dozens of settings, what do you
>> say about a feedback adjustable control based on the existing system
>> st
""Campbell, Lance"" <[EMAIL PROTECTED]> wrote in message news:[EMAIL PROTECTED]
Below is a link to the HTML JavaScript configuration page I am creating:
http://www.webservices.uiuc.edu/postgresql/
I had many suggestions. Based on the feedback I received, I put together the
i
Hi there,
Reading different references, I understand there is no need to vacuum a
table where only insert actions are performed. So I'm surprised to see a
table with just historical data, which is vacuumed by the nightly cron with
a simple VACUUM VERBOSE on about 1/3 of its indexes.
Take a look
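It is worth verifying that the table really sees only inserts, and remembering that even insert-only tables must eventually be vacuumed to freeze old transaction IDs. A quick check on 8.2+ (the table name is a placeholder):

```sql
-- Nonzero update/delete counts mean VACUUM has genuine work to do here.
SELECT relname, n_tup_ins, n_tup_upd, n_tup_del,
       last_vacuum, last_autovacuum
FROM pg_stat_user_tables
WHERE relname = 'history_table';  -- hypothetical name
```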
""Guillaume Smet"" <[EMAIL PROTECTED]> wrote in message
news:[EMAIL PROTECTED]
> Sabin,
>
> On 6/14/07, Sabin Coanda <[EMAIL PROTECTED]> wrote:
>> I'd like to understand completely the report generated by VACUUM VERBOSE.
>> Please tell m
Hi Guillaume,
Very interesting !
Merci beaucoup,
Sabin
---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
choose an index scan if your joining column's datatypes do not
match
Hi there,
I'd like to understand completely the report generated by VACUUM VERBOSE.
Please tell me where it is documented.
TIA,
Sabin
Hi Bill,
...
>
> However, you can get some measure of tracking my running VACUUM VERBOSE
> on a regular basis to see how well autovacuum is keeping up. There's
> no problem with running manual vacuum and autovacuum together, and you'll
> be able to gather _some_ information about how well autovac
Hi there,
Using the VACUUM command explicitly gives me the opportunity to fine-tune my
VACUUM scheduling parameters after I analyze the log generated by VACUUM
VERBOSE.
On the other hand, I'd like to use the auto-vacuum mechanism because of its
facilities. Unfortunately, after I made some initial e
Hi all,
A vacuum full command logs the message:
... LOG: transaction ID wrap limit is 1073822617, limited by database "A"
Some time ago, vacuum full logged:
... LOG: transaction ID wrap limit is 2147484148, limited by database "A"
What causes that difference in the limit? Should I set or
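Transaction IDs are 32-bit and compared in modulo-2^32 arithmetic; the reported wrap limit is roughly the database's datfrozenxid plus 2^31, so it moves (and can appear to jump up or down across the wraparound point) as VACUUM freezes old rows and advances datfrozenxid. There is nothing to set; what matters is how close each database is to the limit, which can be checked with:

```sql
-- age(datfrozenxid) counts how many XIDs have been consumed since the
-- database was last fully frozen; it should stay well below ~2 billion.
SELECT datname, datfrozenxid, age(datfrozenxid) AS xid_age
FROM pg_database
ORDER BY age(datfrozenxid) DESC;
```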