Hello,
There's a slow UPDATE query in my logs (~60 s). However, when I run it
manually, it's really fast (< 0.5 ms).
2016-11-23 18:13:51.962 GMT [742-25]: bpd_production bpd@web001(40916)
0 Passenger RubyApp: /var/www/bpd/current (production) LOG: duration:
59876.947 ms statement: UPDATE
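When a statement is fast by hand but slow from the application, the time is often spent waiting on a lock rather than executing. A generic sketch for checking this while the slow UPDATE is running (not from the original thread; the wait_event columns exist in 9.6 and later):

```sql
-- Sessions currently blocked on a lock, and what they are running
SELECT pid, wait_event_type, wait_event, state,
       now() - query_start AS waited, query
FROM pg_stat_activity
WHERE wait_event_type = 'Lock';

-- Or log every wait longer than deadlock_timeout (default 1s)
ALTER SYSTEM SET log_lock_waits = on;
SELECT pg_reload_conf();
```

If the application run shows up here, the fix is in whatever holds the conflicting lock, not in the query itself.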
> On 14 Apr 2016, at 07:17, Adam Brusselback wrote:
>
> So fair enough, it does seem to be related to the lookup rather than
> maintenance on the index. I was misguided in my initial assumption.
>
> Spent quite a bit of time trying to come up with a self contained
So fair enough, it does seem to be related to the lookup rather than
maintenance on the index. I was misguided in my initial assumption.
Spent quite a bit of time trying to come up with a self-contained test, and
it seems like I can't make it choose the GiST index unless I remove the
regular
> On 13 Apr 2016, at 20:14, Adam Brusselback wrote:
>
> Sorry, brain stopped working and I forgot to include the normal info.
>
> Postgres version: 9.5.1
> Hardware: 2 core, 4gb Digital Ocean virtual server
> OS: Debian
>
> explain analyze for an example update:
>
Sorry, brain stopped working and I forgot to include the normal info.
Postgres version: 9.5.1
Hardware: 2 core, 4gb Digital Ocean virtual server
OS: Debian
explain analyze for an example update:
Update on price_generated (cost=32.45..644.83 rows=1 width=157) (actual
time=29329.614..29329.614
Hey all, been running into some performance issues with one of my tables,
and it seems to be centered around index maintenance.
I have a table to store aggregated prices that are derived from sale data
over a configurable period, and a function that runs periodically that
inserts new prices if
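The schema isn't shown in the preview; as a rough, illustrative sketch of this kind of aggregated-price table, assuming the GiST index comes from an exclusion constraint guarding against overlapping periods (an assumption; all names below are hypothetical):

```sql
CREATE EXTENSION IF NOT EXISTS btree_gist;  -- needed for integer WITH = in the constraint

CREATE TABLE price_generated (
    product_id   integer   NOT NULL,
    valid_during daterange NOT NULL,
    price        numeric   NOT NULL,
    -- backed by a GiST index: one price per product per period
    EXCLUDE USING gist (product_id WITH =, valid_during WITH &&)
);
```

Every INSERT or UPDATE must probe that GiST index to verify the constraint, which is where lookup cost on a large index would show up.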
Hi,
For the explain analyze here's the output:
Seq Scan on in_sortie (cost=0.00..171140.19 rows=114449 width=84) (actual
time=15.074..28461.349 rows=99611 loops=1)
Output: type, site_id, fiche_produit_id, numero_commande, ligne_commande,
date_sortie, quantite_sortie,
On Thu, Jan 21, 2010 at 11:14 AM, elias ghanem e.gha...@acteos.com wrote:
So the average execution time of the function is around 2.5 mins, meaning
that the update query (+ the vacuum) takes 2.5 mins to execute. So is this
normal behavior? (The same function in Oracle with the same
Hi,
I'm not sure this is the right place to ask my question, so please if it is
not let me know where I can get an answer from.
I'm using PostgreSQL 8.4 on a Linux machine with 1.5 GB RAM, and I'm issuing
an update query with a WHERE clause that updates approximately 100,000 rows
in a table
elias ghanem e.gha...@acteos.com wrote:
I'm not sure this is the right place to ask my question
Yes it is. You gave a lot of good information, but we'd have a
better shot at diagnosing the issue with a bit more. Please read
the following and resubmit with as much of the requested
Hi,
Thanks for your help, here's more details as you requested:
-The version of postgres is 8.4 (by the way select pg_version() is not
working but let's concentrate on the query issue)
Here's the full definition of the table with its indexes:
-- Table: in_sortie
-- DROP TABLE in_sortie;
elias ghanem e.gha...@acteos.com wrote:
here's more details as you requested
You didn't include an EXPLAIN ANALYZE of the UPDATE statement.
-The version of postgres is 8.4 (by the way select pg_version() is
not working but let's concentrate on the query issue)
As far as I know, there is
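For reference, the server version comes from the SQL function version() (there is no pg_version()), and an UPDATE can be profiled with EXPLAIN ANALYZE inside a transaction that is rolled back, so the data is left untouched. A sketch (the SET and WHERE clauses are illustrative, reusing columns from the plan above):

```sql
SELECT version();   -- reports e.g. "PostgreSQL 8.4.x ..."

BEGIN;
EXPLAIN ANALYZE
UPDATE in_sortie
SET quantite_sortie = quantite_sortie + 0
WHERE date_sortie < now();
ROLLBACK;  -- EXPLAIN ANALYZE really executes the UPDATE, so undo it
```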
elias ghanem wrote:
Actually this query is inside a function and this function is called
from a .sh file using the following syntax: psql -h $DB_HOST -p $DB_PORT
-d $DB_NAME -U $DB_USER -c "SELECT testupdate()"
(the function is called 100 times with a vacuum analyze after each call
for the
Greg Smith wrote:
In practice, the ordered
mode (the default for ext3) seems sufficient to prevent database
corruption. There is a substantial performance hit to running in full
journal mode like you're doing;
where do you see which mode I am running in? I haven't specified any
modes in
What about filesystem properties?
on linux I am using:
ext3(with journal) and auto,rw,async,noatime,nodiratime
on disks for data and journal
I am unsure if I need a journal in the fs or if the db covers that
problem. With regards to that, do I then need to set some linux setting
to
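For ext3 the journaling mode is a mount option. Since PostgreSQL's own WAL already protects the data files, data=ordered (the ext3 default) is generally considered sufficient, while data=journal writes everything twice. A hypothetical fstab line for a data volume (device and mount point are placeholders):

```
# /etc/fstab -- data=ordered is the ext3 default; full data journaling
# (data=journal) is not required for a PostgreSQL data directory
/dev/sdb1  /var/lib/postgresql  ext3  rw,noatime,nodiratime,data=ordered  0  2
```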
On Thu, 4 Sep 2008, Thomas Finneid wrote:
I am unsure if I need a journal in the fs or if the db covers that problem.
There are some theoretical cases where the guarantees of ext3 seems a
little weak unless you've turned the full journal on even in a database
context (we just had a long
On Tue, 2 Sep 2008, Thomas Finneid wrote:
You should try setting this to open_sync, that can be considerably faster
for some write-heavy situations. Make sure to test that thoroughly though,
there are occasional reports of issues with that setting under Linux
The production machine is
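wal_sync_method is set in postgresql.conf; a sketch, with the caveat from the message above that open_sync must be benchmarked on the actual hardware (pg_test_fsync is the usual tool for comparing the methods):

```
# postgresql.conf -- the default on Linux is fdatasync; open_sync can be
# faster for write-heavy loads but should be tested thoroughly first
wal_sync_method = open_sync
```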
On Mon, 1 Sep 2008, Scott Carey wrote:
Thanks for the info on the patch to support it -- however the versions
posted there are rather old...
Over here, we're using an extremely old patched version of the JDBC
driver. That's the patch I sent to some mailing list a couple of years
ago. It
Are you even getting COPY to work with JDBC? As far as I am aware, COPY
doesn't work with JDBC at the moment:
http://jdbc.postgresql.org/todo.html Listed in the todo page, under "PG
Extensions", is "Add support for COPY". I tried to use it with JDBC a
while ago and gave up after a couple limited
Scott Carey wrote:
Are you even getting COPY to work with JDBC? As far as I am aware, COPY
doesn't work with JDBC at the moment:
I used a patched JDBC driver, provided by someone on the list, don't have
the reference at hand. It works perfectly and it's about 5 times faster,
for my job, than
Craig Ringer wrote:
Just on a side note, your system is pretty strangely heavy on CPU
compared to its RAM and disk configuration. Unless your workload in Pg
is computationally intensive or you have something else hosted on the
same machine, those CPUs will probably sit mostly idle.
Its a
On Mon, Sep 1, 2008 at 5:29 AM, Thomas Finneid
[EMAIL PROTECTED] wrote:
It does have a SATA RAID controller, but not the battery pack, because
it's a dev machine and not a production machine, I thought it was not
needed. But if you are saying the battery pack enables a cache which enables
On Raid Controllers and Dev machines:
For a dev machine the battery backup is NOT needed.
Battery backup makes a _production_ system faster: in production, data
integrity is everything, and write-back caching is dangerous without a
battery backup.
So:
Without BBU: Write-through cache =
Scott Carey [EMAIL PROTECTED] writes:
On Raid Controllers and Dev machines:
For a dev machine the battery backup is NOT needed.
Battery back up makes a _production_ system faster: In production, data
integrity is everything, and write-back caching is dangerous without a
battery back up.
Scott Carey wrote:
For a development box, just enable write-back caching regardless of the
battery backup situation. As long as it's not your only copy of
Will have a look at it, the data is not important and can be reproduced
any time on any machine. The controller I have is a Areca
On the performance impact of using COPY instead of INSERT : out of
curiosity, were you comparing COPY against raw row-by-row inserts (slow) or
JDBC batch inserts (faster) or multi-row inserts: INSERT INTO X (a,b,c)
VALUES (1,2,3), (4,5,6), (7,8,9), (10,11,12)?
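For comparison, the three shapes being discussed look roughly like this (table and column names illustrative):

```sql
-- row-by-row: one statement (and usually one round trip) per row -- slowest
INSERT INTO x (a, b, c) VALUES (1, 2, 3);

-- multi-row VALUES: one statement carrying many rows (8.2+)
INSERT INTO x (a, b, c) VALUES (1, 2, 3), (4, 5, 6), (7, 8, 9);

-- COPY: the bulk-load path, usually the fastest of the three
COPY x (a, b, c) FROM STDIN;
```

JDBC batching sits between the first two: still one INSERT per row, but the round trips are amortized.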
I tested row
On Mon, Sep 1, 2008 at 2:32 PM, Thomas Finneid
[EMAIL PROTECTED] wrote:
Scott Carey wrote:
For a development box, just enable write-back caching regardless of the
battery backup situation. As long as it's not your only copy of
Will have a look at it, the data is not important and can be
On Mon, Sep 1, 2008 at 2:42 PM, Scott Marlowe [EMAIL PROTECTED] wrote:
dialog box saying this is a bad idea. Now, if it would take you a day
of downtime to get a dev database back in place and running after a
power loss, then the bbu may be worth the $200 or so.
I just wanted to comment that
On Mon, 1 Sep 2008, Thomas Finneid wrote:
It does have a SATA RAID controller, but not the battery pack, because
it's a dev machine and not a production machine, I thought it was not needed.
But if you are saying the battery pack enables a cache which enables faster
disk writes I will
Hi
I am working on a table which stores up to 125K rows per second and I
find that the inserts are a little bit slow. The insert is in reality a
COPY of a chunk of rows, up to 125K. A COPY of 25K rows, without an
index, is fast enough, about 150ms. With the index, the insert takes
about
You may want to investigate pg_bulkload.
http://pgbulkload.projects.postgresql.org/
One major enhancement over COPY is that it does an index merge, rather than
modify the index one row at a time.
http://pgfoundry.org/docman/view.php/1000261/456/20060709_pg_bulkload.pdf
On Sun, Aug 31, 2008 at
Scott Carey wrote:
You may want to investigate pg_bulkload.
http://pgbulkload.projects.postgresql.org/
One major enhancement over COPY is that it does an index merge, rather
than modify the index one row at a time.
This is a command line tool, right? I need a JDBC driver tool, is that
Thomas Finneid wrote:
Hi
I am working on a table which stores up to 125K rows per second and I
find that the inserts are a little bit slow. The insert is in reality a
COPY of a chunk of rows, up to 125K. A COPY of 25K rows, without an
index, is fast enough, about 150ms. With the index, the
Hi all;
I have a query that does this:
update tab_x set (inactive_dt, last_update_dt) =
((select run_dt from current_run_date), (select run_dt from
current_run_date))
where
cust_id::text || loc_id::text in
(select cust_id::text || loc_id::text from summary_tab);
The current_run_date table
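The cust_id::text || loc_id::text concatenation defeats any index on the underlying columns, and can also produce false matches ('12'||'3' equals '1'||'23'). Comparing the columns directly is both safer and indexable; a sketch, assuming current_run_date holds a single row (an assumption, as the table isn't shown):

```sql
UPDATE tab_x
SET inactive_dt    = c.run_dt,
    last_update_dt = c.run_dt
FROM current_run_date c
WHERE EXISTS (
    SELECT 1 FROM summary_tab s
    WHERE s.cust_id = tab_x.cust_id
      AND s.loc_id  = tab_x.loc_id
);
```

With indexes on summary_tab (cust_id, loc_id), the EXISTS probe can be an index lookup instead of a scan over concatenated strings.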
On Mon, 19 May 2008 at 23:56:27 -0600, kevin kempter wrote:
Hi all;
I have a query that does this:
update tab_x set (inactive_dt, last_update_dt) =
((select run_dt from current_run_date), (select run_dt from
current_run_date))
where
cust_id::text || loc_id::text in
On Mon, May 19, 2008 at 11:56 PM, kevin kempter
[EMAIL PROTECTED] wrote:
Hi all;
I have a query that does this:
update tab_x set (inactive_dt, last_update_dt) =
((select run_dt from current_run_date), (select run_dt from
current_run_date))
where
cust_id::text || loc_id::text in
(select
Hello,
I experience significant performance issues with postgresql and updates.
I have a table which contains ~1M rows.
Layout:
TOTO=# \d versions_9d;
Table "public.versions_9d"
 Column |  Type  | Modifiers
--------+--------+-----------
On Thu, Feb 22, 2007 at 07:11:42PM +0100, Gabriel Biberian wrote:
Create a new DB and load a dump of the above database with 976009 rows,
then i perform updates on the whole table. I recorded the time taken
for each full update and the amount of extra disk space used. Each
consecutive
I tried adding an index to the table on the column date (int) that
stores Unix timestamps.
TOTO=# CREATE INDEX versions_index ON versions_9d (date);
(-60M) disk space goes down on index creation
[EMAIL PROTECTED]:~$ time psql TOTO -c "UPDATE versions_9d SET flag=9"
UPDATE 976009
real
how about saying:
lock table versions_9d in EXCLUSIVE mode;
UPDATE versions_9d SET flag=2;
commit;
Ismo
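Because of MVCC, each full-table UPDATE writes a new version of every row; the old versions become dead tuples that only VACUUM can make reusable, which is why repeated full updates keep growing the table. A sketch of the pattern being measured:

```sql
UPDATE versions_9d SET flag = 9;   -- writes ~976k new row versions
VACUUM versions_9d;                -- marks the old versions reusable before the next run
```

Without the VACUUM between runs, each consecutive update has to extend the table instead of reusing the space.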
On Thu, 22 Feb 2007, Gabriel Biberian wrote:
Hello,
I experience significant performance issues with postgresql and updates.
I have a table which contains ~1M rows.
Layout:
TOTO=# \d
Following is one of the update queries and its explain plan, which takes about
6 minutes to execute. I am trying to find a way to execute it faster. The
functions used in the update statement are if-then-else tests that return one
value or the other.
Tom Lane wrote:
Arnaud Lesauvage [EMAIL PROTECTED] writes:
Indeed, the new query does not perform that well :
Hash Join (cost=112.75..307504.97 rows=2024869 width=355) (actual
time=53.995..246443.811 rows=2020061 loops=1)
...
Total runtime: 2777844.892 ms
I removed all unnecessary
Arnaud Lesauvage [EMAIL PROTECTED] writes:
I must be missing something, so here is the full table description.
It looks pretty harmless, except for
CREATE TRIGGER parse_log_trigger
BEFORE INSERT
ON statistiques.log
FOR EACH ROW
EXECUTE PROCEDURE
Tom Lane wrote:
Arnaud Lesauvage [EMAIL PROTECTED] writes:
I must be missing something, so here is the full table description.
It looks pretty harmless, except for
CREATE TRIGGER parse_log_trigger
BEFORE INSERT
ON statistiques.log
FOR EACH ROW
EXECUTE PROCEDURE
Arnaud Lesauvage [EMAIL PROTECTED] writes:
Tom Lane wrote:
It seems the time must be going into this trigger function. What
does it do?
A lot of things! Indeed, if it runs it will hurt performance very badly
(table lookups, string manipulation, etc.)!
But it should only be
Tom Lane wrote:
I think the conclusion must be that there was just too much I/O to be
done to update all the rows. Have you done any tuning of shared_buffers
and so forth? I recall having seen cases where update performance went
bad as soon as the upper levels of a large index no longer fit
Out of curiosity, how hard would it be to modify the output of EXPLAIN
ANALYZE when doing an insert/update to include the index update times
and/or non-FK constraint checking times and/or the table row update
times? Or any other numbers that might be useful in circumstances like
this. I'm
Hi list !
I am running a query to update the boolean field of a table based on
another table's fields.
The query is (changed names for readability):
UPDATE t1
SET booleanfield = (t2.field1 IN ('some', 'other') AND t2.field2 = 'Y')
FROM t2
WHERE t1.uid = t2.uid
t2.uid is the PRIMARY KEY.
t2
On Wed, 2006-12-13 at 11:51 +0100, Arnaud Lesauvage wrote:
Hi list !
I am running a query to update the boolean field of a table based on
another table's fields.
The query is (changed names for readability):
UPDATE t1
SET booleanfield = (t2.field1 IN ('some', 'other') AND t2.field2 =
Ragnar wrote:
Do you think the problem is with the indexes ?
I guess so. are you sure about the index on t1.uid?
what are the column definitions for t1.uid and t2.uid ?
are they the same ?
Man, no!!!
I just checked and indeed, there is no index on this column. I probably
dropped it recently.
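With t1.uid unindexed, every row of the join forces a scan of t1. Restoring the index (names as in the thread's simplified schema) is the fix:

```sql
CREATE INDEX t1_uid_idx ON t1 (uid);
ANALYZE t1;  -- refresh statistics so the planner sees the new index
```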
Hi,
the problem is a combination of badly formed SQL and maybe missing indexes.
try this:
UPDATE t1
SET booleanfield = foo.bar
FROM (SELECT uid,(field IN ('some','other') AND field2 = 'Y') AS bar FROM
t2) AS foo
WHERE t1.uid=foo.uid;
and index t1.uid, t2.uid, t2.field, t2.field2
regards,
Jens
Jens Schipkowski wrote:
the problem is a combination of badly formed SQL and maybe missing indexes.
try this:
UPDATE t1
SET booleanfield = foo.bar
FROM (SELECT uid,(field IN ('some','other') AND field2 = 'Y') AS bar FROM
t2) AS foo
WHERE t1.uid=foo.uid;
Hi Jens,
Why is this query better
On Wed, 13 Dec 2006 13:23:41 +0100, Arnaud Lesauvage [EMAIL PROTECTED]
wrote:
Jens Schipkowski wrote:
the problem is a combination of badly formed SQL and maybe missing
indexes.
try this:
UPDATE t1
SET booleanfield = foo.bar
FROM (SELECT uid,(field IN ('some','other') AND field2 = 'Y')
Jens Schipkowski wrote:
On Wed, 13 Dec 2006 13:23:41 +0100, Arnaud Lesauvage [EMAIL PROTECTED]
Why is this query better than the other one ? Because it runs the
(field IN ('some','other') AND field2 = 'Y') once and then executes
the join with the resulting set ?
True. The Subselect in
On Wed, 2006-12-13 at 14:38 +0100, Arnaud Lesauvage wrote:
Jens Schipkowski wrote:
On Wed, 13 Dec 2006 13:23:41 +0100, Arnaud Lesauvage [EMAIL PROTECTED]
Why is this query better than the other one ? Because it runs the
(field IN ('some','other') AND field2 = 'Y') once and then
Ragnar wrote:
On Wed, 2006-12-13 at 14:38 +0100, Arnaud Lesauvage wrote:
Jens Schipkowski wrote:
On Wed, 13 Dec 2006 13:23:41 +0100, Arnaud Lesauvage [EMAIL PROTECTED]
Why is this query better than the other one ? Because it runs the
(field IN ('some','other') AND field2 = 'Y')
Arnaud,
Have you run ANALYZE on the table after creating index?
Also make sure that #effective_cache_size is set properly. A higher
value makes it more likely to use index scans.
Thanks
asif ali
Arnaud Lesauvage [EMAIL PROTECTED] wrote: Ragnar wrote:
On Wed, 2006-12-13 at 14:38
Arnaud Lesauvage [EMAIL PROTECTED] writes:
Indeed, the new query does not perform that well :
Hash Join (cost=112.75..307504.97 rows=2024869 width=355) (actual
time=53.995..246443.811 rows=2020061 loops=1)
...
Total runtime: 2777844.892 ms
I removed all unnecessary indexes on t1 before
asif ali wrote:
Arnaud,
Have you run ANALYZE on the table after creating index?
Yes, I have !
Also make sure that #effective_cache_size is set properly. A higher
value makes it more likely to use index scans.
It is set to 50,000. I thought this would be enough, and maybe too
Mark Lewis [EMAIL PROTECTED] writes:
But he's using 8.1.4-- in that version, an explain analyze would list
the time taken to go through triggers, so the fact that we don't see any
of those lines means that it can't be constraint checking, so wouldn't
it have to be the index update overhead?
Patrick Hatcher [EMAIL PROTECTED] writes:
Pg 7.4.5
Trying to do a update of fields on 23M row database.
Is it normal for this process to take 16hrs and still clocking?
Are there foreign keys pointing at the table being updated? If so,
failure to index the referencing columns could create
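Unindexed referencing columns are easy to find from the catalogs; a sketch that lists every foreign key pointing at the table being updated (substitute the real table name for the placeholder):

```sql
SELECT conname,
       conrelid::regclass AS referencing_table
FROM pg_constraint
WHERE contype = 'f'
  AND confrelid = 'the_updated_table'::regclass;
-- then check each referencing_table for an index on its FK columns
```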
Subject: Re: [PERFORM] slow update
Hello everyone
I must be doing something very wrong here so help please! I have two tables
tableA has 300,000 recs
tableB has 20,000 recs
I need to set the value of a field in table A to a value in table B depending
on the existence of the record in table B. So what I have done is
UPDATE
Hilary Forbes wrote:
I need to set the value of a field in table A to a value in table B depending
on the existence of the record in table B. So what I have done is
UPDATE tableA set a.val1=b.somefield FROM tableA a, tableB b WHERE
a.key1=b.key1;
Check the EXPLAIN carefully, are you sure
On Mon, Sep 12, 2005 at 10:14:25 +0100,
Hilary Forbes [EMAIL PROTECTED] wrote:
Hello everyone
I must be doing something very wrong here so help please! I have two tables
tableA has 300,000 recs
tableB has 20,000 recs
I need to set the value of a field in table A to a value in table B
Hilary Forbes [EMAIL PROTECTED] writes:
I need to set the value of a field in table A to a value in table B depending
on the existence of the record in table B. So what I have done is
UPDATE tableA set a.val1=b.somefield FROM tableA a, tableB b WHERE
a.key1=b.key1;
You've written an
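Repeating the target table in the FROM clause joins it against a second, unconstrained copy of itself, so every target row matches. In PostgreSQL the target table is named only in the UPDATE clause (and SET references its columns unqualified); a corrected sketch:

```sql
UPDATE tableA
SET val1 = b.somefield
FROM tableB b
WHERE tableA.key1 = b.key1;
```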
Bendik R.Johansen [EMAIL PROTECTED] writes:
I am having a bit of trouble updating a single integer column.
My table has around 10 columns and 260 000 records.
update no.records set uid = 2;
(uid is an integer. It has a btree index)
This update takes more than 20 minutes to execute. Is this
Bendik R. Johansen [EMAIL PROTECTED] writes:
Below is the schema for the table I will be using. I tried dropping the
index, but it did not help.
Indexes:
records_pkey PRIMARY KEY, btree (id)
records_category_idx gist (category)
records_cid_idx btree (cid)
Yes, I tried dropping it but it did not make a difference.
Could the table be corrupt or something?
Well, the important thing is that I now know that this is not typical
for PostgreSQL, so I will not have to rethink my whole project.
Thanks, so far.
On Apr 12, 2005, at 17:16, Tom Lane wrote:
Bendik R. Johansen [EMAIL PROTECTED] writes:
Yes, I tried dropping it but it did not make a difference.
Could the table be corrupt or something?
You didn't directly answer the question about whether there were foreign
keys leading to this table. Checking foreign keys could be the problem,
Hatcher [EMAIL PROTECTED]
Sent by: [EMAIL PROTECTED]
10/01/04 11:14 AM
To: [EMAIL PROTECTED]
Subject: [PERFORM] Slow update/insert process
Pg: 7.4.5
RH 7.3
8g Ram
200 g drive space
RAID0+1
Tables vacuum on a nightly basis
The following process below takes 8 hours to run on 90k records and I'm not
sure where to begin to look for the bottleneck. This isn't the only updating
on this database that seems to take a long time to complete.