Hi list,
I have run into the following situation: a join between two tables
(one with ~900k rows, the other with ~1600k rows) takes about 20 seconds on
our production database. I have created two tables in our test database
with the same data, but with fewer fields. (I have omitted several
Hello!
The transaction model is discussed in several places:
http://www.packtpub.com/article/transaction-model-of-postgresql
The POSTGRES Data Model (1987)
http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.48.8578
The Design of POSTGRES (1986)
Christian Schröder wrote:
in our PostgreSQL 8.2.9 database I have these tables:
create table table1 (
    key1 char(12),
    key2 integer,
    primary key (key1, key2)
);
create table table2 (
    key1 char(12),
    key2 integer,
    key3 varchar(20),
    primary key (key1, key2, key3)
);
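The query itself is not shown in this preview, but as a sketch, the usual first step would be to compare the plans both databases choose for the two-key join, e.g. (the column list and join condition here are assumptions):

explain analyze
select *
from table1 t1
join table2 t2
  on t1.key1 = t2.key1
 and t1.key2 = t2.key2;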
Hi
Why not use pgAgent (http://pgadmin.org/docs/dev/pgagent.html)?
Regards,
Christophe Chauvet.
Tino Wildenhain wrote:
Hi,
searchelite wrote:
...
I can use pg_sleep, but I have thousands of rows to insert. Is
there any better way than using pg_sleep?
I wonder what is your complete
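If the intent is just to throttle a large load, one option is to insert in batches and sleep once per batch instead of once per row. A sketch (assumes a server new enough for DO blocks; the table and column names are made up):

do $$
begin
    for batch in 0..9 loop
        -- load one batch of 1000 rows
        insert into target_table (id)
        select i from generate_series(batch * 1000 + 1, (batch + 1) * 1000) i;
        -- pause once per batch, not once per row
        perform pg_sleep(1);
    end loop;
end;
$$;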
Craig Ringer wrote on 20.11.2008 12:12:
There's a daily snapshot available for download off the main dl tree.
Go to the download page and look for the link to browse the mirrors
directly. Pick one and look for the snapshot. You get to compile it
yourself, but it's pretty easy to do.
... on a
Christian,
I would suggest ensuring the results are pre-sorted (according to the column
to be merged).
anyone?
Martin
Hi list,
I have found the following problem: I have declared a domain datatype
with a check constraint. The check constraint uses a plpgsql function:
CREATE FUNCTION domain_ok(value integer) RETURNS boolean AS $$
BEGIN
    RETURN value > 0;
END;
$$ LANGUAGE plpgsql;
CREATE DOMAIN testdomain AS integer CHECK (domain_ok(VALUE));
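The rest of the message is cut off in this preview; as a purely illustrative way to exercise such a domain (the table name is made up):

-- values failing domain_ok() are rejected by the domain's check constraint
CREATE TABLE t (x testdomain);
INSERT INTO t VALUES (5);   -- passes the check
INSERT INTO t VALUES (-1);  -- fails with a check constraint violation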
Christian Schröder c...@deriva.de writes:
Where does this difference come from? Pure CPU performance? Do the additional
fields in the production database have an impact on the performance? Or am I
missing something?
Sure, more data takes more time to process.
Other factors which could affect it
Christian Schröder c...@deriva.de writes:
When I try to call the function I get the following error message:
test=# select * from testfunc();
ERROR: error from Perl function: SPI_connect failed: SPI_ERROR_CONNECT
at line 2.
Hmph ... looks like plperl is shy a few
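For reference, a minimal plperl function of the general shape involved, using the SPI interface (the original testfunc() body isn't shown in this preview, so this is only illustrative):

CREATE OR REPLACE FUNCTION testfunc() RETURNS integer AS $$
    # run a query through SPI and return the single column value
    my $rv = spi_exec_query('SELECT 1 AS x', 1);
    return $rv->{rows}[0]->{x};
$$ LANGUAGE plperl;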
Hi!
Does Postgres need to wait for this lock,
or is this something that can be fixed?
Create data:
create table test_table (
    id varchar primary key
);
insert into test_table
select i::text
from generate_series(1, 10000) i;
Session 1:
select * from test_table where id like
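The rest of the example is cut off here; presumably a second session drops an index while the first query's transaction is still open. A hedged reconstruction (the index name is made up):

-- hypothetical secondary index for the illustration
create index test_table_id_idx on test_table (id);

-- Session 1: holds a lock on test_table while the transaction stays open
begin;
select * from test_table where id like '1%';

-- Session 2: blocks until session 1 commits or rolls back, because
-- DROP INDEX needs an exclusive lock on the table
drop index test_table_id_idx;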
Rikard Pavelic rikard.pave...@zg.htnet.hr writes:
Does Postgres need to wait for this lock
yes --- drop index takes exclusive lock on the table.
or is this something that can be fixed?
no
regards, tom lane
Aaron Burnett wrote:
The first calls a function which essentially calls a handful of views.
psql -d DB1 -c 'select execute_function_foo();'
calls a handful of views?
What I am trying to get is the elapsed time logged for each individual query
or view that the function calls, as though I
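One way to get per-statement timing from inside a function (a sketch; assumes a server recent enough to ship the auto_explain contrib module): auto_explain can log statements executed inside functions when nested-statement logging is enabled.

LOAD 'auto_explain';
SET auto_explain.log_min_duration = 0;        -- log every statement
SET auto_explain.log_nested_statements = on;  -- include statements run inside functions
SELECT execute_function_foo();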
alvar...@alvarezp.ods.org wrote:
(2) per-user schemas
with CREATE TABLE (LIKE parent_table), since getting the data from all
users at once would also be difficult, and modifying the column
definition on the user tables would be fairly error-prone.
I'd think about this one, personally. You
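For illustration, the per-user-schema variant could look like this (the schema and table names are made up):

CREATE SCHEMA user_alice;
CREATE TABLE user_alice.data (LIKE public.parent_table INCLUDING DEFAULTS);

Each user then gets an identical table definition without the maintenance risk of hand-copying column definitions.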
Gregory Stark wrote:
Christian Schröder c...@deriva.de writes:
Where does this difference come from? Pure CPU performance? Do the additional
fields in the production database have an impact on the performance? Or am I
missing something?
Sure, more data takes more time to process.
Other factors