PostgreSQL 8.0.4, using PL/pgSQL
The basic function is set up as:
CREATE FUNCTION add_data(t_row mytable) RETURNS VOID AS $func$
DECLARE
newtable text;
thesql text;
BEGIN
-- Look up the target table name for this row's id.
SELECT thename INTO newtable FROM lookup WHERE lookup.id = t_row.id;
-- Build the dynamic INSERT; quote_ident() guards the table name.
-- The rest of the statement was cut off in the original post.
thesql := 'INSERT INTO ' || quote_ident(newtable) || ' VALUES (...)';
EXECUTE thesql;
RETURN;
END;
$func$ LANGUAGE plpgsql;
On Tue, 2005-10-11 at 09:41 +0200, Claus Guttesen wrote:
I have a postgresql 7.4.8-server with 4 GB ram.
snip
#effective_cache_size = 1000    # typically 8KB each
This is computed by sysctl -n vfs.hibufspace / 8192 (on FreeBSD). So I
changed it to:
effective_cache_size = 27462
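The arithmetic above can be reproduced anywhere; the hibufspace value below is an assumed sample (on FreeBSD you would take it from sysctl -n vfs.hibufspace), chosen so the result matches the figure quoted:

```shell
# vfs.hibufspace is reported in bytes; effective_cache_size is in 8 KB pages.
# On FreeBSD: hibufspace=$(sysctl -n vfs.hibufspace)
hibufspace=224968704   # assumed sample value in bytes
echo $(( hibufspace / 8192 ))
```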
On Tue, 2005-10-11 at 16:54 +0200, Claus Guttesen wrote:
On Tue, 2005-07-26 at 10:50 -0600, Dan Harris wrote:
I am working on a process that will be inserting tens of millions of rows
and need this to be as quick as possible.
The catch is that for each row I could potentially insert, I need to
look and see if the relationship is already there to
On Tue, 2005-07-19 at 16:28 -0400, Oliver Crosby wrote:
If it is possible, try:
1) wrapping many inserts into one transaction
(BEGIN; INSERT; INSERT; ... INSERT; COMMIT;). Since PostgreSQL will then
need to handle fewer transactions per second (each insert is otherwise its
own transaction), it may work faster.
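The batching suggestion above can be sketched as plain SQL; the table and column names here are hypothetical:

```sql
-- One transaction around many inserts: one commit (and one fsync)
-- instead of one per row.
BEGIN;
INSERT INTO mytable (id, thename) VALUES (1, 'alpha');
INSERT INTO mytable (id, thename) VALUES (2, 'beta');
-- ... thousands more rows ...
COMMIT;
```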
On Tue, 2005-07-19 at 17:04 -0400, Oliver Crosby wrote:
since triggers work with COPY, you could probably write a trigger that
looks for this condition and does the ID processing you need; you could
thereby enjoy the enormous speed gain resulting from COPY and maintain
your data
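The COPY-plus-trigger idea above can be sketched as follows; the table, sequence, trigger, and function names are hypothetical:

```sql
-- A BEFORE INSERT trigger fires for rows loaded via COPY as well as via
-- INSERT, so per-row ID processing can ride along with the fast bulk-load
-- path instead of requiring one INSERT per row.
CREATE FUNCTION fill_id() RETURNS trigger AS $$
BEGIN
    IF NEW.id IS NULL THEN
        NEW.id := nextval('mytable_id_seq');
    END IF;
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER fill_id_trig
    BEFORE INSERT ON mytable
    FOR EACH ROW EXECUTE PROCEDURE fill_id();

COPY mytable (thename) FROM '/path/to/data.txt';
```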
Stacy White presumably uttered the following on 06/01/05 23:42:
We're in the process of buying another Opteron server to run Postgres, and
based on the suggestions in this list I've asked our IT director to get an
LSI MegaRaid controller rather than one of the Adaptecs.
But when we tried to
On Tue, 2005-03-01 at 09:48 -0600, John Arbash Meinel wrote:
Sven Willenberger wrote:
Trying to determine the best overall approach for the following
scenario:
Each month our primary table accumulates some 30 million rows (which
could very well hit 60+ million rows per month by year's end)
and are there other options I may have missed?
Sven Willenberger
On Mon, 2004-12-13 at 17:43 -0500, Tom Lane wrote:
Sven Willenberger [EMAIL PROTECTED] writes:
explain analyze select storelocation,order_number from custacct where
referrer = 1365 and orderdate between '2004-12-07' and '2004-12-07
12:00:00' order by custacctid limit 10
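One workaround often suggested for this pattern is to make the planner apply the WHERE clause before the ORDER BY/LIMIT, rather than walking the custacctid index; a sketch using the names from the quoted query (OFFSET 0 keeps the subquery from being flattened away):

```sql
-- Filter first in a subquery, then sort and limit the (small) result,
-- instead of letting LIMIT drive a scan of the custacctid index.
SELECT storelocation, order_number
FROM (
    SELECT storelocation, order_number, custacctid
    FROM custacct
    WHERE referrer = 1365
      AND orderdate BETWEEN '2004-12-07' AND '2004-12-07 12:00:00'
    OFFSET 0
) AS sub
ORDER BY custacctid
LIMIT 10;
```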
Andrew McMillan wrote:
On Mon, 2004-12-13 at 01:13 -0500, Sven Willenberger wrote:
I have a question regarding a serious performance hit taken when using a
LIMIT clause. I am using version 7.4.6 on FreeBSD 4.10-STABLE with 2GB
of memory. The table in question contains some 25 million rows with a
bigserial primary key, orderdate index and a referrer index. The 2
select
(Originally asked in [General]; realized that it would probably be
better asked in [Perform].)
I am curious as to how much overhead building a dynamic query in a
trigger adds to the process. The example:
Have a list of subcontractors, each of which gets unique pricing. There
is a total of
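A dynamic query in a trigger generally means an EXECUTE over a string built per row, which is re-planned on every call; that re-planning is the overhead being asked about. A minimal sketch using the FOR ... IN EXECUTE form available in the 7.4/8.0 era (all table and column names are hypothetical):

```sql
CREATE FUNCTION price_lookup() RETURNS trigger AS $$
DECLARE
    r record;
BEGIN
    -- EXECUTE parses and plans the statement on every invocation,
    -- unlike a static query, which is planned once per session.
    FOR r IN EXECUTE 'SELECT price FROM pricing_'
            || NEW.subcontractor_id
            || ' WHERE item_id = ' || NEW.item_id LOOP
        NEW.price := r.price;
    END LOOP;
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;
```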
Josh Berkus wrote:
Jeff,
I'm curious about the problem's you're seeing with Dell servers since
we're about to buy some 750s, 2850s and 1850s.
The problems I've been dealing with have been on the *650s. They're the ones
you name.
FYI ... the 750s, 1850s and 2850s use Intel chipsets (E7520