 void
 setup_cancel_handler(void)
 {
+    static bool done = false;
+
+    if (!done)
         SetConsoleCtrlHandler(consoleHandler, TRUE);
+    done = true;
 }
That works, I tried ctrl-c various ways including from within \i copy.
Problem solved!
Merlin
Merlin Moncure [EMAIL PROTECTED] writes:
That works, I tried ctrl-c various ways including from within \i copy.
Problem solved!
Good. I've applied the patch in both HEAD and 8.0 branches.
Since we're very nearly ready to wrap 8.1, would someone with access to
a Windows machine please
On 26 Sep 2002 at 12:28, Michael Paesold wrote:
But why is performance so much degrading? After 1 updates
on a row, the row seems to be unusable without vacuum! I hope
the currently discussed autovacuum daemon will help in such a
situation.
Let me know if it works. Use CVS BTW... I am
What could you recommend? Locking the table and selecting
max(invoice_id) wouldn't really be much faster, with max(invoice_id)
not using an index...
select invoice_id from table order by invoice_id desc limit 1;
should get you the maximum fast if you have a unique index on invoice_id.
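A minimal sketch of that suggestion, assuming a hypothetical invoices table (the real schema is not shown in the thread):

    -- Hypothetical table; PRIMARY KEY gives invoice_id the unique
    -- index the advice depends on.
    CREATE TABLE invoices (
        invoice_id integer PRIMARY KEY,
        amount     numeric
    );

    -- On 7.2, max(invoice_id) cannot use an index and scans the whole
    -- table; this form can walk the index backwards and stop at the
    -- first (i.e. highest) entry.
    SELECT invoice_id FROM invoices ORDER BY invoice_id DESC LIMIT 1;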
Michael Paesold [EMAIL PROTECTED] writes:
So inserting an invoice would also do an update on a single row
of the cs_sequence table, which caused the problems.
Now, with a normal sequence, it works like a charm.
17 sec. for 1 rows and 2-3 sec. for commit.
But why is performance so much
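The fix above, sketched as SQL (the thread only names the cs_sequence table; the column names here are guesses):

    -- Instead of UPDATEing a single-row counter table for each
    -- invoice, use a real sequence: nextval() takes no row locks
    -- and leaves no dead tuples behind.
    CREATE SEQUENCE invoice_id_seq;

    CREATE TABLE invoices (
        invoice_id integer DEFAULT nextval('invoice_id_seq') PRIMARY KEY,
        amount     numeric
    );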
Hi,
I am wondering about bad INSERT performance compared with the speed of
COPY. (I use 7.2.2 on RedHat 7.2)
I have a table with about 30 fields, some constraints, some indexes, some
foreign key constraints. I use COPY to import old data. Copying about
10562 rows takes about 19 seconds.
For
Michael Paesold [EMAIL PROTECTED] writes:
To insert another 10562 rows takes about 12 minutes now!!!
See
http://www.ca.postgresql.org/users-lounge/docs/7.2/postgres/populate.html
particularly the point about not committing each INSERT as a separate
transaction.
regards,
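The point from the populate docs, as a sketch (table and values are illustrative, not from the thread):

    -- One transaction for the whole load: one commit, one fsync,
    -- instead of one per row.
    BEGIN;
    INSERT INTO invoices (invoice_id, amount) VALUES (1, 100.00);
    INSERT INTO invoices (invoice_id, amount) VALUES (2, 250.00);
    -- ... and so on for the remaining rows ...
    COMMIT;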
Michael Paesold [EMAIL PROTECTED] writes:
To insert another 10562 rows takes about 12 minutes now!!!
As I said I wrote a function to insert the rows (PL/pgSQL). All values were
inserted inside a single function call; I always thought that a function call
would be executed inside a transaction
Update:
vacuum full; vacuum analyze;
select bench_invoice(1000); select bench_invoice(1000); ... (10 times)
It seems performance is degrading with every insert!
Here is the result (time in seconds in bench_invoice(), commit between
selects just under a second)
13, 24, 36, 47, 58, 70, 84,
Only vacuum will reset the insert times to the lowest possible!
What does the vacuum code do?? :-]
Please see the manual and the extensive discussions on this point in the
archives. This behaviour is well known -- though undesirable. It is an
effect of the multi-version concurrency control
Michael Paesold [EMAIL PROTECTED] writes:
Only vacuum will reset the insert times to the lowest possible!
What does the vacuum code do?? :-]
It removes dead tuples. Dead tuples can only arise from update or
delete operations ... so you have not been telling us the whole
truth. An insert-only
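What that looks like in practice, assuming the counter table holds a single integer column (its schema is not shown in the thread; last_value is a guessed name):

    -- Every UPDATE writes a new row version and leaves the old one
    -- behind as a dead tuple; after thousands of updates the one live
    -- counter row sits behind a long chain of dead versions, and every
    -- access to it gets slower.
    UPDATE cs_sequence SET last_value = last_value + 1;

    -- VACUUM reclaims the dead versions and restores the original speed.
    VACUUM cs_sequence;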