Do all of the commands to swap tables in a single transaction. The table
gets locked briefly, but that should have far less impact than the UPDATE
command.
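The swap being suggested can be sketched as below; the table names (dataTable, dataTable_new) are placeholders, not names from the thread. One caveat on 7.4: views and foreign keys bound to the old table follow it through the rename, so dependent objects may need to be recreated against the new table.

```sql
-- Sketch of the rename-swap approach (table names are hypothetical).
-- Build the fresh data offline in dataTable_new, then swap inside
-- one short transaction so readers only ever see a brief lock.
BEGIN;
ALTER TABLE dataTable RENAME TO dataTable_old;
ALTER TABLE dataTable_new RENAME TO dataTable;
COMMIT;
-- Once nothing references it any more:
DROP TABLE dataTable_old;
```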
On Mon, 06 Sep 2004 01:28:04 +0200, Marinos J. Yannikos <[EMAIL PROTECTED]> wrote:
>
> That said, I'm not entirely sure how well postgres' client libraries
Ron St-Pierre wrote:
We have a web based application with data that is updated daily. The
biggest bottleneck occurs when we try to update
one of the tables. This table contains 58,000 rows and 62 columns, and
EVERY column is indexed.
Have you thought of / tried using 2 separate databases or tables
Ron St-Pierre <[EMAIL PROTECTED]> writes:
> Does anyone have some idea on how we can increase speed, either by
> changing the updates, designing the database
> differently, etc, etc? This is currently a big problem for us.
> Other notables:
> The UPDATE is run from within a function: FOR
Thanks for everyone's comments (Thomas, Steinar, Frank, Matt, William).
Right now I'm benchmarking the time it takes for each step
in the end-of-day update process, and then I am going to test a few things:
- dropping most indexes, and checking the full processing time to see if
there is any noticeable difference
Ron St-Pierre wrote:
Yes, I know that it's not a very good idea, however queries are allowed
against all of those columns. One option is to disable some or all of the
indexes when we update, run the update, and recreate the indexes,
however it may slow down user queries. Because there are so many indexes,
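A minimal sketch of the drop/rebuild cycle being weighed here, assuming hypothetical index and column names. DROP INDEX and CREATE INDEX are transactional in PostgreSQL, so the whole cycle can run in one transaction:

```sql
-- Index/column names are illustrative; repeat the DROP/CREATE pair
-- for whichever indexes slow the bulk UPDATE the most.
BEGIN;
DROP INDEX idx_datatable_field01;
UPDATE dataTable
   SET field01 = o.field01
  FROM otherTable o
 WHERE dataTable.id = o.id;
CREATE INDEX idx_datatable_field01 ON dataTable (field01);
COMMIT;
ANALYZE dataTable;  -- refresh planner statistics after the bulk change
```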
> >That looks like poor database normalization, really. Are you sure you
> >don't want to split this into multiple tables instead of having 62
> >columns?
> >
> No, it is properly normalized. The data in this table is stock
> fundamentals, stuff like 52 week high, ex-dividend date, etc, etc.
On Tue, Aug 31, 2004 at 11:35:38AM -0700, Ron St-Pierre wrote:
> We're doing it in multiple transactions within the function. Could we do
> something like this?:
>
>
> BEGIN
>   FOR rec IN SELECT field01, field02, ... FROM otherTable LOOP
>     RETURN NEXT rec;
>     UPDATE dataTable SET field01 = rec.field01 WHERE id = rec.id;
>   END LOOP;
> END;
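If the per-row loop itself is the cost, a single set-based statement may be worth benchmarking instead. The column names below are the placeholders from the quoted fragment, and the join on id assumes the INTEGER id column mentioned elsewhere in the thread:

```sql
-- One set-based UPDATE replacing the row-at-a-time loop
-- (field01/field02 and the id join column are illustrative).
UPDATE dataTable
   SET field01 = o.field01,
       field02 = o.field02
  FROM otherTable o
 WHERE dataTable.id = o.id;
```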
On Tue, 31 Aug 2004 11:11:02 -0700
Ron St-Pierre <[EMAIL PROTECTED]> wrote:
>Postgres 7.4.3
>debian stable
>2 GB RAM
>80 DB IDE drive (we can't change it)
>
>shared_buffers = 2048
>sort_mem = 1024
>max_fsm_pages = 4
>checkpoint_segments = 5
>random_page_cost
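The settings quoted above are quite low for a 2 GB machine. For comparison, an illustrative 7.4-era starting point (these values are assumptions for discussion, not tuned recommendations):

```
shared_buffers = 16384        # ~128 MB at 8 kB per buffer
sort_mem = 8192               # 8 MB per sort operation
max_fsm_pages = 200000        # room to track dead rows from big updates
checkpoint_segments = 10      # fewer, larger checkpoints during bulk loads
```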
Steinar H. Gunderson wrote:
On Tue, Aug 31, 2004 at 11:11:02AM -0700, Ron St-Pierre wrote:
We have a web based application with data that is updated daily. The
biggest bottleneck occurs when we try to update
one of the tables. This table contains 58,000 rows and 62 columns, and
EVERY column is indexed.
Thomas F. O'Connell wrote:
What is the datatype of the id column?
The id column is INTEGER.
Ron
---(end of broadcast)---
TIP 6: Have you searched our list archives?
http://archives.postgresql.org
On Tue, Aug 31, 2004 at 11:11:02AM -0700, Ron St-Pierre wrote:
> We have a web based application with data that is updated daily. The
> biggest bottleneck occurs when we try to update
> one of the tables. This table contains 58,000 rows and 62 columns, and
> EVERY column is indexed.
That is usually
What is the datatype of the id column?
-tfo
On Aug 31, 2004, at 1:11 PM, Ron St-Pierre wrote:
We have a web based application with data that is updated daily. The
biggest bottleneck occurs when we try to update
one of the tables. This table contains 58,000 rows and 62 columns, and
EVERY column is indexed.
We have a web based application with data that is updated daily. The
biggest bottleneck occurs when we try to update
one of the tables. This table contains 58,000 rows and 62 columns, and
EVERY column is indexed. Every column is
queryable (?) by the users through the web interface so we are
reluctant to remove any of the indexes.