Hello,
There's a slow UPDATE query in my logs (~60 s). However, when I run it
manually, it's really fast (<0.5 ms).
2016-11-23 18:13:51.962 GMT [742-25]: bpd_production bpd@web001(40916)
0 Passenger RubyApp: /var/www/bpd/current (production) LOG: duration:
59876.947 ms statement: UPDATE "co
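When a statement is fast interactively but logs a huge duration like this, the logged time usually includes lock waiting rather than execution. A minimal diagnostic sketch, assuming lock contention is the cause (the boolean "waiting" column exists through 9.5; 9.6 replaced it with wait_event):
SELECT pid, state, waiting, query
FROM pg_stat_activity
WHERE waiting;  -- sessions currently blocked on a lock
Setting log_lock_waits = on in postgresql.conf will also log any wait longer than deadlock_timeout.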
So fair enough, it does seem to be related to the lookup rather than
maintenance on the index. I was misguided in my initial assumption.
Spent quite a bit of time trying to come up with a self contained test, and
it seems like I can't make it choose the GiST index unless I remove the
regular btree
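For reference, one way to test planner index choice without permanently dropping the btree index is to drop it inside a transaction and roll back. A sketch with a hypothetical index name and predicate (note DROP INDEX holds an exclusive lock on the table until the ROLLBACK, so only do this on a test box):
BEGIN;
DROP INDEX price_generated_btree_idx;                 -- hypothetical name
EXPLAIN ANALYZE
SELECT * FROM price_generated WHERE product_id = 42;  -- hypothetical predicate
ROLLBACK;  -- the index comes back; nothing is committed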
Sorry, brain stopped working and I forgot to include the normal info.
Postgres version: 9.5.1
Hardware: 2 core, 4gb Digital Ocean virtual server
OS: Debian
explain analyze for an example update:
'Update on price_generated (cost=32.45..644.83 rows=1 width=157) (actual
time=29329.614..29329.614 ro
Hey all, been running into some performance issues with one of my tables,
and it seems to be centered around index maintenance.
I have a table to store aggregated prices that are derived from sale data
over a configurable period, and a function that runs periodically that
inserts new prices if nec
On Thu, Jan 21, 2010 at 11:14 AM, elias ghanem wrote:
> So the average execution time of the function is around 2.5 mins, meaning
> that the update query (+ the vacuum) takes 2.5 mins to execute. So is this
> normal behavior? (The same function in Oracle with the same environment
> (with our vac
Hi,
For the explain analyze here's the output:
"Seq Scan on in_sortie (cost=0.00..171140.19 rows=114449 width=84) (actual
time=15.074..28461.349 rows=99611 loops=1)"
" Output: type, site_id, fiche_produit_id, numero_commande, ligne_commande,
date_sortie, quantite_sortie, date_livraison_souh
elias ghanem wrote:
> Actually this query is inside a function and this function is called
> from a .sh file using the following syntax: psql -h $DB_HOST -p $DB_PORT
> -d $DB_NAME -U $DB_USER -c "SELECT testupdate()"
>
> (the function is called 100 times with a vacuum analyze after each call
> f
"elias ghanem" wrote:
> here's more details as you requested
You didn't include an EXPLAIN ANALYZE of the UPDATE statement.
> -The version of postgres is 8.4 (by the way select pg_version() is
> not working but let's concentrate on the query issue)
As far as I know, there is no pg_version() function.
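For the record, the built-in equivalents are:
SELECT version();      -- full version string
SHOW server_version;   -- version number only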
Hi,
Thanks for your help, here's more details as you requested:
-The version of postgres is 8.4 (by the way select pg_version() is not
working but let's concentrate on the query issue)
Here's the full definition of the table with its indices:
-- Table: in_sortie
-- DROP TABLE in_sortie;
"elias ghanem" wrote:
> I'm not sure this is the right place to ask my question
Yes it is. You gave a lot of good information, but we'd have a
better shot at diagnosing the issue with a bit more. Please read
the following and resubmit with as much of the requested information
as you can. No
Hi,
I'm not sure this is the right place to ask my question, so please if it is
not let me know where I can get an answer from.
I'm using PostgreSQL 8.4 on a Linux machine with 1.5 GB RAM, and I'm issuing
an update query with a where clause that updates approximately 100 000 rows
in a table contain
Greg Smith wrote:
In practice, the "ordered"
mode (the default for ext3) seems sufficient to prevent database
corruption. There is a substantial performance hit to running in full
journal mode like you're doing;
Where do you see which mode I am running in? I haven't specified any
modes in
On Thu, 4 Sep 2008, Thomas Finneid wrote:
I am unsure if I need a journal in the fs or if the db covers that problem.
There are some theoretical cases where the guarantees of ext3 seem a
little weak unless you've turned the full journal on even in a database
context (we just had a long thre
What about filesystem properties?
On Linux I am using:
ext3 (with journal) and auto,rw,async,noatime,nodiratime
on disks for data and journal
I am unsure if I need a journal in the fs or if the db covers that
problem. With regards to that, do I then need to set some linux setting
to forc
On Mon, 1 Sep 2008, Scott Carey wrote:
Thanks for the info on the patch to support it -- however the versions
posted there are rather old...
Over here, we're using an extremely old patched version of the JDBC
driver. That's the patch I sent to some mailing list a couple of years
ago. It works
On Tue, 2 Sep 2008, Thomas Finneid wrote:
You should try setting this to open_sync , that can be considerably faster
for some write-heavy situations. Make sure to test that thoroughly though,
there are occasional reports of issues with that setting under Linux
The production machine is Solaris
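For reference, the setting being discussed, checked from SQL and set in postgresql.conf (whether open_sync helps is platform-dependent, per the warning above):
SHOW wal_sync_method;
-- postgresql.conf:
-- wal_sync_method = open_sync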
Greg Smith wrote:
On Mon, 1 Sep 2008, Thomas Finneid wrote:
Thanks for all the info on the disk controller, I will have to look
through all that now :)
I note that nobody has talked about your postgresql.conf yet. I assume
you've turned autovacuum off because you're not ever deleting thing
On Mon, 1 Sep 2008, Thomas Finneid wrote:
It does have a SATA RAID controller, but does not have the battery pack, because
it's a devel machine and not a production machine; I thought it was not needed.
But if you are saying the battery pack enables a cache which enables faster
disk writes, I will con
On Mon, Sep 1, 2008 at 2:42 PM, Scott Marlowe <[EMAIL PROTECTED]> wrote:
> dialog box saying this is a bad idea. Now, if it would take you a day
> of downtime to get a dev database back in place and running after a
> power loss, then the bbu may be worth the $200 or so.
I just wanted to comment t
> On the performance impact of using COPY instead of INSERT: out of
> curiosity, were you comparing COPY against raw row-by-row inserts (slow) or
> JDBC batch inserts (faster) or multi-row inserts: INSERT INTO X (a,b,c)
> VALUES (1,2,3), (4,5,6), (7,8,9), (10,11,12)?
I
Scott Carey wrote:
For a development box, just enable write-back caching regardless of the
battery back up situation. As long as it's not your only copy of
Will have a look at it, the data is not important and can be reproduced
any time on any machine. The controller I have is an Areca ARC-12
"Scott Carey" <[EMAIL PROTECTED]> writes:
> On Raid Controllers and Dev machines:
>
> For a dev machine the battery backup is NOT needed.
>
> Battery back up makes a _production_ system faster: In production, data
> integrity is everything, and write-back caching is dangerous without a
> battery
On Raid Controllers and Dev machines:
For a dev machine the battery backup is NOT needed.
Battery backup makes a _production_ system faster: in production, data
integrity is everything, and write-back caching is dangerous without a
battery backup.
So:
Without BBU: Write-through cache = data
Craig Ringer wrote:
Just on a side note, your system is pretty strangely heavy on CPU
compared to its RAM and disk configuration. Unless your workload in Pg
is computationally intensive or you have something else hosted on the
same machine, those CPUs will probably sit mostly idle.
It's a devel
Scott Carey wrote:
Are you even getting COPY to work with JDBC? As far as I am aware, COPY
doesn't work with JDBC at the moment:
I used a patched JDBC driver, provided by someone on the list; I don't have
the reference at hand. It works perfectly and it's about 5 times faster,
for my job, than
Are you even getting COPY to work with JDBC? As far as I am aware, COPY
doesn't work with JDBC at the moment:
http://jdbc.postgresql.org/todo.html Listed in the todo page, under "PG
Extensions" is "Add support for COPY." I tried to use it with JDBC a
while ago and gave up after a couple limit
Scott Carey wrote:
You may want to investigate pg_bulkload.
http://pgbulkload.projects.postgresql.org/
One major enhancement over COPY is that it does an index merge, rather
than modify the index one row at a time.
This is a command line tool, right? I need a JDBC driver tool, is that
pos
You may want to investigate pg_bulkload.
http://pgbulkload.projects.postgresql.org/
One major enhancement over COPY is that it does an index merge, rather than
modify the index one row at a time.
http://pgfoundry.org/docman/view.php/1000261/456/20060709_pg_bulkload.pdf
On Sun, Aug 31, 2008 at
Hi
I am working on a table which stores up to 125K rows per second and I
find that the inserts are a little bit slow. The insert is in reality a
COPY of a chunk of rows, up to 125K. A COPY of 25K rows, without an
index, is fast enough, about 150ms. With the index, the insert takes
about 500ms
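A common pattern for index-dominated bulk loads is to load without the index and rebuild it in one pass afterwards; a sketch with hypothetical names (only practical if nothing needs the index between loads):
BEGIN;
DROP INDEX chunk_data_ts_idx;                      -- hypothetical index
COPY chunk_data FROM '/tmp/chunk.csv' WITH CSV;    -- hypothetical table/file
CREATE INDEX chunk_data_ts_idx ON chunk_data (ts); -- rebuilt in one pass
COMMIT;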
Hi all;
I have a query that does this:
update tab_x set (inactive_dt, last_update_dt) =
((select run_dt from current_run_date), (select run_dt from
current_run_date))
where
cust_id::text || loc_id::text in
(select cust_id::text || loc_id::text from summary_tab);
The current_run_date table h
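No complete answer survives in these previews, but the usual advice for this pattern: the concatenation cust_id::text || loc_id::text defeats any indexes on the underlying columns. A hedged rewrite using a row-value comparison instead (assuming current_run_date holds a single run_dt row, as the original subselects imply):
UPDATE tab_x
SET inactive_dt = c.run_dt,
    last_update_dt = c.run_dt
FROM current_run_date c
WHERE (tab_x.cust_id, tab_x.loc_id) IN
      (SELECT cust_id, loc_id FROM summary_tab);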
how about saying:
begin;
lock table versions_9d in EXCLUSIVE mode;
UPDATE versions_9d SET flag=2;
commit;
Ismo
> Then I tried adding an index to the table on the column date (int) that
> stores unix timestamps.
> TOTO=# CREATE INDEX versions_index ON versions_9d (date);
> (-60M) disk space goes down on index creation
> [EMAIL PROTECTED]:~$ time psql TOTO -c "UPDATE versions_9d SET flag=9"
> UPDATE 976009
> real
On Thu, Feb 22, 2007 at 07:11:42PM +0100, Gabriel Biberian wrote:
> Create a new DB and load a dump of the above database with 976009 rows,
> then I perform updates on the whole table. I recorded the time taken
> for each full update and the amount of extra disk space used. Each
> consecutive
Hello,
I experience significant performance issues with postgresql and updates.
I have a table which contains ~1M rows.
Layout:
TOTO=# \d versions_9d;
Table "public.versions_9d"
 Column | Type | Modifiers
--------+------+-----------
Following is one of the update queries and its explain plan, which takes about 6
minutes to execute. I am trying to find a way to execute it faster. The functions
used in the update statement are if-then-else tests that return one value or
the other.
===
Mark Lewis <[EMAIL PROTECTED]> writes:
> Out of curiosity, how hard would it be to modify the output of EXPLAIN
> ANALYZE when doing an insert/update to include the index update times
> and/or non-FK constraint checking times and/or the table row update
> times?
I don't think it'd help much --- in
Out of curiosity, how hard would it be to modify the output of EXPLAIN
ANALYZE when doing an insert/update to include the index update times
and/or non-FK constraint checking times and/or the table row update
times? Or any other numbers that might be useful in circumstances like
this. I'm wonderi
Tom Lane wrote:
I think the conclusion must be that there was just too much I/O to be
done to update all the rows. Have you done any tuning of shared_buffers
and so forth? I recall having seen cases where update performance went
bad as soon as the upper levels of a large index no longer fit
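For reference, checking the setting Tom mentions; the conventional starting point on a dedicated box is on the order of a quarter of RAM, adjusted per workload:
SHOW shared_buffers;
-- postgresql.conf: in old releases this is a count of 8 kB pages,
-- e.g. shared_buffers = 32000; newer ones accept units like '1GB'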
Arnaud Lesauvage <[EMAIL PROTECTED]> writes:
> Tom Lane wrote:
>> It seems the time must be going into this trigger function. What
>> does it do?
> A lot of things! Indeed, if it runs it will very badly hurt performance
> (table lookups, string manipulation, etc.)!
> But it should onl
Arnaud Lesauvage <[EMAIL PROTECTED]> writes:
> I must be missing something, so here is the full table description.
It looks pretty harmless, except for
> CREATE TRIGGER parse_log_trigger
> BEFORE INSERT
> ON statistiques.log
> FOR EACH ROW
> EXECUTE PROCEDURE statistiques.parse_lo
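One quick way to confirm the trigger dominates the runtime (a sketch, not from the thread; do this on a test copy, since rows inserted while it is disabled skip the parsing):
ALTER TABLE statistiques.log DISABLE TRIGGER parse_log_trigger;
-- re-run the slow statement and compare timings
ALTER TABLE statistiques.log ENABLE TRIGGER parse_log_trigger;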
But he's using 8.1.4-- in that version, an explain analyze would list
the time taken to go through triggers, so the fact that we don't see any
of those lines means that it can't be constraint checking, so wouldn't
it have to be the index update overhead?
-- Mark
On Wed, 2006-12-13 at 11:46 -0500,
asif ali wrote:
Arnaud,
Have you run "ANALYZE" on the table after creating the index?
Yes, I have!
Also make sure that "effective_cache_size" is set properly. A higher
value makes it more likely to use index scans.
It is set to 50000. I thought this would be enough, and maybe t
Arnaud Lesauvage <[EMAIL PROTECTED]> writes:
> Indeed, the new query does not perform that well:
> "Hash Join (cost=112.75..307504.97 rows=2024869 width=355) (actual
> time=53.995..246443.811 rows=2020061 loops=1)"
> ...
> "Total runtime: 2777844.892 ms"
> I removed all unnecessary indexes on
Arnaud,
Have you run "ANALYZE" on the table after creating the index?
Also make sure that "effective_cache_size" is set properly. A higher
value makes it more likely to use index scans.
Thanks
asif ali
Arnaud Lesauvage <[EMAIL PROTECTED]> wrote: Ragnar wrote:
> On Wed, 2006-12-13 at
Jens Schipkowski wrote:
On Wed, 13 Dec 2006 13:23:41 +0100, Arnaud Lesauvage <[EMAIL PROTECTED]>
Why is this query better than the other one? Because it runs the
"(field IN ('some','other') AND field2 = 'Y')" once and then executes
the join with the resulting set?
True. The Subselect in
Jens Schipkowski wrote:
the problem is a combination of badly formed SQL and maybe missing indexes.
try this:
UPDATE t1
SET booleanfield = foo.bar
FROM (SELECT uid,(field IN ('some','other') AND field2 = 'Y') AS bar FROM
t2) AS foo
WHERE t1.uid=foo.uid;
Hi Jens,
Why is this query better t
Hi,
the problem is a combination of badly formed SQL and maybe missing indexes.
try this:
UPDATE t1
SET booleanfield = foo.bar
FROM (SELECT uid,(field IN ('some','other') AND field2 = 'Y') AS bar FROM
t2) AS foo
WHERE t1.uid=foo.uid;
and index t1.uid, t2.uid, t2.field, t2.field2
regards,
Jens
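Spelled out, the closing index advice might look like this (names match the aliases in the query; whether field and field2 should share one composite index or get two separate ones is a judgment call):
CREATE INDEX t1_uid_idx ON t1 (uid);
CREATE INDEX t2_uid_idx ON t2 (uid);
CREATE INDEX t2_field_field2_idx ON t2 (field, field2);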
Ragnar wrote:
Do you think the problem is with the indexes?
I guess so. Are you sure about the index on t1.uid?
What are the column definitions for t1.uid and t2.uid?
Are they the same?
Man, no!!!
I just checked and indeed, there is no index on this column. I probably
dropped it recently.
Thank
Hi list!
I am running a query to update the boolean field of a table based on
another table's fields.
The query is (changed names for readability):
UPDATE t1
SET booleanfield = (t2.field1 IN ('some', 'other') AND t2.field2 = 'Y')
FROM t2
WHERE t1.uid = t2.uid
t2.uid is the PRIMARY KEY.
t2 only
Patrick Hatcher <[EMAIL PROTECTED]> writes:
> Pg 7.4.5
> Trying to do an update of fields on a 23M row database.
> Is it normal for this process to take 16 hrs and still be running?
Are there foreign keys pointing at the table being updated? If so,
failure to index the referencing columns could create
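Presumably the truncated point is the classic one: PostgreSQL indexes the referenced primary key automatically, but not the referencing side of a foreign key. A sketch with hypothetical names:
-- given: child.parent_id REFERENCES parent(id)
-- without this index, every UPDATE/DELETE touching parent's keys
-- scans child in full to verify the constraint:
CREATE INDEX child_parent_id_idx ON child (parent_id);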
Pg 7.4.5
RH 7.3
Quad Xeon 3Gz
12G ram
Trying to do an update of fields on a 23M row database.
Is it normal for this process to take 16 hrs and still be running? Both join
fields are indexed and I have removed any indexes on the updated columns.
Also both tables are vacuumed regularly.
I'm leery to can
Hilary Forbes <[EMAIL PROTECTED]> writes:
> I need to set the value of a field in table A to a value in table B depending
> on the existence of the record in table B. So what I have done is
> UPDATE tableA set a.val1=b.somefield FROM tableA a, tableB b WHERE
> a.key1=b.key1;
You've written an
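The reply is cut off, but the visible problem is that tableA appears both as the UPDATE target and again in the FROM list, which joins the table to itself (and "SET a.val1" is not valid anyway; the SET column must be unqualified). A hedged rewrite:
UPDATE tableA
SET val1 = b.somefield
FROM tableB b
WHERE tableA.key1 = b.key1;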
Hilary Forbes wrote:
I need to set the value of a field in table A to a value in table B depending
on the existence of the record in table B. So what I have done is
UPDATE tableA set a.val1=b.somefield FROM tableA a, tableB b WHERE
a.key1=b.key1;
Check the EXPLAIN carefully, are you sure t
Hello everyone
I must be doing something very wrong here so help please! I have two tables
tableA has 300,000 recs
tableB has 20,000 recs
I need to set the value of a field in table A to a value in table B depending
on the existence of the record in table B. So what I have done is
UPDATE tab
Patrick Hatcher <[EMAIL PROTECTED]> writes:
> Here's the table layout. It's the first time I noticed this, but there
> is a PK on the cus_nbr and an index. Does there really need to be both and
> could this be causing the issue? I thought that if a primary key was
> designated, it was automatically
At the time this was the only process running on the box so I set
sort_mem= 228000;
It's a 12G box.
Tom Lane wrote:
Patrick Hatcher <[EMAIL PROTECTED]> writes:
Hash Join (cost=1246688.42..4127248.31 rows=12702676 width=200)
Hash Cond: ("outer".cus_num = "inner".cus_nbr)
-> Seq Scan o
Patrick Hatcher <[EMAIL PROTECTED]> writes:
> Hash Join (cost=1246688.42..4127248.31 rows=12702676 width=200)
>   Hash Cond: ("outer".cus_num = "inner".cus_nbr)
>   -> Seq Scan on bcp_ddw_ck_cus b (cost=0.00..195690.76 rows=12702676
> width=16)
>   -> Hash (cost=874854.34..874854.34 rows=
Sorry, I went out of town for the weekend. The update did occur, but I
have no idea when it finished.
Here's the actual query and the explain Update:
cdm.bcp_ddw_ck_cus = 12.7 M
cdm.cdm_ddw_customer = 12.8M
explain
update cdm.cdm_ddw_customer
set indiv_fkey =
Patrick Hatcher <[EMAIL PROTECTED]> writes:
> I'm running an update statement on about 12 million records using the
> following query:
> Update table_A
> set F1 = b.new_data
> from table_B b
> where b.keyfield = table_A.keyfield
What does EXPLAIN show for this?
Do you have any foreign key refere
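Worth noting when answering Tom's question: EXPLAIN ANALYZE actually executes the UPDATE, so the usual trick is to wrap it in a transaction that gets rolled back. A sketch using the query above:
BEGIN;
EXPLAIN ANALYZE
UPDATE table_A
SET F1 = b.new_data
FROM table_B b
WHERE b.keyfield = table_A.keyfield;
ROLLBACK;  -- keep the plan and timings, discard the changes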
[Reposted from General section with updated information]
Pg 7.4.5
I'm running an update statement on about 12 million records using the
following query:
Update table_A
set F1 = b.new_data
from table_B b
where b.keyfield = table_A.keyfield
both keyfields are indexed, all other keys in table_A we
"Bendik R. Johansen" <[EMAIL PROTECTED]> writes:
> Yes, I tried dropping it but it did not make a difference.
> Could the table be corrupt or something?
You didn't directly answer the question about whether there were foreign
keys leading to this table. Checking foreign keys could be the problem,
Yes, I tried dropping it but it did not make a difference.
Could the table be corrupt or something?
Well, the important thing is that I now know that this is not typical
for PostgreSQL, so I will not have to rethink my whole project.
Thanks, so far.
On Apr 12, 2005, at 17:16, Tom Lane wrote:
"Ben
"Bendik R. Johansen" <[EMAIL PROTECTED]> writes:
> Below is the schema for the table I will be using. I tried dropping the
> index, but it did not help.
> Indexes:
> "records_pkey" PRIMARY KEY, btree (id)
> "records_category_idx" gist (category)
> "records_cid_idx" btree (cid)
>
Hello, thank you for the quick reply.
I am running version 8.0.1
Below is the schema for the table I will be using. I tried dropping the
index, but it did not help.
Table "no.records"
 Column | Type | Modifiers
"Bendik R.Johansen" <[EMAIL PROTECTED]> writes:
> I am having a bit of trouble updating a single integer column.
> My table has around 10 columns and 260 000 records.
> update no.records set uid = 2;
> (uid is an integer. It has a btree index)
> This update takes more than 20 minutes to execute.
Hello,
I am having a bit of trouble updating a single integer column.
My table has around 10 columns and 260 000 records.
update no.records set uid = 2;
(uid is an integer. It has a btree index)
This update takes more than 20 minutes to execute. Is this normal? This
will be totally unacceptable wh
Pg: 7.4.5
RH 7.3
8g Ram
200 g drive space
RAID0+1
Tables vacuumed on a nightly basis
The following process below takes 8 hours to run on 90k records and I'm not
sure where to begin to look for the bottleneck. This isn't the only updating
on this database that seems to take a long time to complete.
Hello!
I am a relative newcomer to the SQL and PostgreSQL world, so please forgive me
if this question is stupid.
I am experiencing strange behaviour where a simple UPDATE of one field is
very slow compared to an INSERT into a table with multiple indexes. I have
two tables - one with raw data records (about